Why Platforms Fail at Addressing Eternal September

In my last post, I described the phenomenon of Eternal September and defined it as a challenge that remains unsolved, in spite of all efforts to confront its effects. It is a recurring phenomenon – one that has altered or affected nearly every successful digital community since the early '90s. Yet though many have acknowledged it (and several organizations have dedicated serious resources to circumventing it), surprisingly little progress has been made towards actually addressing it.

First, let's examine the efforts that have been made so far.

Creating a barrier to entry is the first and most obvious approach, and arguably the most successful thus far. Purposefully restricting new users' access to a social platform (as Facebook did initially, by limiting registration to approved college email addresses) helps throttle excessive growth and allows the existing culture to organically acculturate new users. Still, it does not entirely solve the challenges of Eternal September.
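As a concrete sketch of what such a gate looks like, here is a minimal Python example of domain-restricted registration, roughly the mechanism Facebook used in its college-only days (the domain list and function name are hypothetical):

```python
# A minimal sketch of an allowlist-based barrier to entry, assuming a
# hypothetical registration flow; the domain list is illustrative only.
ALLOWED_DOMAINS = {"harvard.edu", "stanford.edu"}

def can_register(email: str) -> bool:
    """Admit a new user only if their email belongs to an approved domain."""
    domain = email.rsplit("@", 1)[-1].lower()
    return domain in ALLOWED_DOMAINS

assert can_register("alice@harvard.edu")
assert not can_register("bob@example.com")
```

The lever here is not the code but the throttle: by slowing who gets in, the platform buys time for its culture to absorb each new cohort.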

Moderation (a system in which certain administrative users, either paid or volunteer, maintain veto power over posted content) is another approach that has been tried, with varying results. While sites such as YouTube, Twitter, and Facebook employ massive teams of behind-the-scenes moderators focused on filtering graphic content from users' view, networks that have relied on moderation to regulate the quality of posted content have seen limited success. Leaving aside extreme cases of moderator abuse, misuse, or uprising (such as the now-infamous Reddit Revolt), forums that rely on moderation will inevitably see their conversations shaped by the moderators themselves.
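To make that veto dynamic concrete, here is a minimal sketch of what a moderation queue reduces to (the class and method names are hypothetical, not any platform's actual API):

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    body: str

class ModerationQueue:
    """A veto-style moderation sketch: nothing becomes visible
    until a moderator approves it."""

    def __init__(self, moderators: set[str]):
        self.moderators = moderators
        self.pending: list[Post] = []
        self.visible: list[Post] = []

    def submit(self, post: Post) -> None:
        self.pending.append(post)

    def review(self, moderator: str, post: Post, approve: bool) -> None:
        if moderator not in self.moderators:
            raise PermissionError("only moderators hold veto power")
        self.pending.remove(post)
        if approve:
            self.visible.append(post)
        # A rejected post simply vanishes: the moderator's judgment,
        # not the community's, decides what the conversation contains.
```

Every post passes through this bottleneck, which is precisely why the moderators' tastes end up shaping the conversation.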

Reputation-based (or “karma”) systems have also failed to offer a comprehensive solution. Platforms such as Stack Overflow, Stack Exchange, and Quora have learned that when users are given the power to detract from the reputation of others, they will eventually use that power to squelch the voices of those with whom they disagree – regardless of whether those voices offer valid perspectives.
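The failure mode is easy to see in miniature. In the sketch below (hypothetical names, not any real platform's scoring), visibility depends only on the vote tally, never on the merit of the post:

```python
class KarmaPost:
    def __init__(self, body: str):
        self.body = body
        self.ups = 0
        self.downs = 0

    @property
    def score(self) -> int:
        return self.ups - self.downs

    def is_visible(self, threshold: int = -5) -> bool:
        # Visibility is decided purely by the vote tally; the system
        # has no notion of whether the post's perspective is valid.
        return self.score > threshold

post = KarmaPost("an unpopular but well-reasoned argument")
post.ups = 3
post.downs = 12                # a dozen users who simply disagree
assert not post.is_visible()   # the voice is squelched
```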

In short: the use of moderators shifts the tone of the conversation towards the opinions of the moderators themselves, changing the conversation even when the effect is unintentional. Karma-based systems eventually succumb to mob rule – which is not only undesirable, as the conversation tends to lose relevance, but also exploitable and prone to abuse.

The social platforms that have been most successful in dealing with Eternal September have all resorted to a combination of these solutions. Facebook combines human moderators with ranking (“likes” and “reactions”) and algorithmic intervention. Reddit relies on a system of human moderators as well as up/down voting – and allows channels that grow too large to reorganize into new channels, or subreddits. All have worked out some form of system that allows users to continue their conversations without devolving entirely into obscurity.
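The actual ranking functions at Facebook and Reddit are proprietary, but the shape of the combination is easy to sketch: a hard moderator veto, a community vote signal, and an algorithmic recency decay (all names and constants below are illustrative assumptions):

```python
import math
from dataclasses import dataclass

@dataclass
class RankedPost:
    ups: int
    downs: int
    created_at: float                 # unix timestamp, in seconds
    removed_by_moderator: bool = False

def rank(post: RankedPost, now: float) -> float:
    # Human moderation acts as a hard veto over every other signal.
    if post.removed_by_moderator:
        return float("-inf")
    # Community voting supplies the base signal, log-damped so a few
    # thousand votes don't drown out everything else...
    votes = math.log1p(max(post.ups - post.downs, 0))
    # ...while an algorithmic time decay pushes older posts down the feed.
    age_hours = (now - post.created_at) / 3600
    return votes / (1 + age_hours) ** 1.5
```

Each layer patches a weakness of the others, which is why combined systems outlast any single mechanism – but none of the layers removes the underlying problem.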

Still, the real problem is that all of these solutions are strictly reactive. They are incapable of truly addressing the challenges of Eternal September, as they are limited to the current format of the tools they operate within – tools which, by design, are already broken.

While many are placing their faith in AI or complicated algorithmic processes to address the challenge, the technology is still too limited. Using AI to vet content and assess user interest produces stereotypes that are overly simplistic, inaccurate, and largely inadequate. At best, AIs create social environments that are flat and uninteresting – and at worst, they create the kind of endless echo chambers that destroy debate and kill conversations entirely.
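The echo-chamber loop is not mysterious. A deliberately crude recommender (hypothetical, and about as simplistic as the stereotypes it produces) shows the feedback cycle:

```python
from collections import Counter

def recommend(profile: Counter, candidates: list[tuple[str, str]], k: int = 2):
    """Score each (title, topic) candidate purely by overlap with past
    interests – an intentionally crude stand-in for interest modeling."""
    return sorted(candidates, key=lambda c: profile[c[1]], reverse=True)[:k]

profile = Counter({"politics/left": 40, "cooking": 2})
candidates = [
    ("rally recap", "politics/left"),
    ("opposing op-ed", "politics/right"),
    ("knife skills", "cooking"),
]
picks = recommend(profile, candidates)
# picks == [("rally recap", "politics/left"), ("knife skills", "cooking")]
# The dissenting op-ed never surfaces; every pick reinforces the profile
# that produced it, and the loop narrows with each iteration.
```

Which leads us back to our original question: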

Is the challenge of Eternal September solvable?

The answer is yes – but not within the confines of our current tools.

Digital culture is constantly shifting because human culture is constantly shifting – which means that to maintain relevant conversations, we need a human-based platform that evolves accordingly: one that responds fluidly to accelerating change by promoting a greater diversity of conversations and communities instead of narrowing them, and then allows those cultures to unfold naturally into their own spaces.

We've proven that systems based on limiting extremes don't work – and now it is time to take a closer look at what happens when we allow conversations and communities to evolve naturally and without limitation.