The Ghosts Of Twitter Past, Present and Future

For the world’s biggest soapbox, effective course-correction can’t come soon enough.

Faruk Ateş
Oct 6, 2016

A little over ten years ago, a team of scrappy developers created a product called Twitter (“Twttr” at the time). It was a platform for social microblogging, and while it long struggled to define itself, people quickly warmed up to the service. In its early days Twitter fully embraced the user and developer community: early adopters defined many concepts and mechanics now commonplace in the product, developers built third-party features that were acquired and became part of Twitter itself, and users were coming up with all sorts of creative ways to make Twitter interesting and artful.

Today, Twitter is a platform for the human hive mind, the pulse of our collective consciousness. It is a media service and publishing platform where everyone, from celebrities to people-entities like brands, can (theoretically) converse in public. Note that I said a platform; I previously expected Twitter to become the platform to tap into the state of the global human mind, but I believe they have taken too many wrong turns to be adequately representative of that.

The Twitter we have today is not the Twitter we could’ve had, and clearly its executive leadership is starting to realize that the latter one would’ve been a lot better — and more sustainable. The current talk surrounding a Twitter takeover bid from Salesforce, Google, or Disney, however much or little may come of it in the end, reveals just how badly Twitter is struggling to manage itself.

It’s a case study that entrepreneurs, product designers, and sociologists will likely research for many years to come — regardless of what happens next. The interesting thing is not that Twitter, after years of struggling to grow its active users, has finally started listening more closely to users and evolving the product to address their concerns. No, the interesting thing is how this giant global soapbox failed to take part in defining what its own representative culture would be, until it was very, very late. That is the takeaway designers, companies and organizations should focus on.

The Road To Argument Tennis

Twitter started out being a great conversational tool among friends, like a favorite local bar you’d frequent, except online. Eventually, Twitter’s growth made it feel like you were tapping into a wider consciousness of humanity; a place where infinite global discussions were taking place in the most democratic, egalitarian way. Everyone could talk to everyone! What could possibly be wrong with that?

Twitter’s product designers aimed to create a tool for people all around the world to have conversations with. While succeeding in many impressive ways, they also failed in a couple of big ones. For one, they failed to take into account that a lot of human conversation involves emotionally-charged disagreements, which get heated and can escalate. Then they failed to notice that their product was leading to argument tennis.

Argument tennis is when you sling points and counter-points at your opponent without taking the time to add the oft-requisite nuance to each. It’s when you start measuring a debate in terms of who made the most successful points that your opponent didn’t volley back. It mistakes discussing serious topics as being a sport, rather than an exploration into uncomfortable territory in search of greater, mutual understanding.

Two important things: first, argument tennis is not an intrinsic element of human nature or human behavior. Arguing may be, but engaging in argument tennis is not. Twitter started out SMS-friendly, capping tweets at 140 characters each. While this product design decision encouraged brevity and creativity, it heavily discouraged any semblance of nuance, a critical element of useful, healthy debate. One could reasonably argue that people should simply refrain from trying to have debates on a nuance-starved platform like Twitter, but that argument does not stop people, and never will. The product must be designed to solve the problem it created.

The second thing is that when they started hiding @-replies to people you don’t follow, they stripped the user experience of a vital ingredient for civility: peer transparency. The tone of discourse changed much for the worse over time, following that new behavior of the timeline. Before the rollout, all your friends would see if you behaved like a jerk to someone; after the rollout that was no longer the case. It removed the natural consequences of bad behavior, thereby encouraging people to reap the benefits of such bad behavior much more frequently.

There is no one single feature or product design decision that caused these problems. But there were many that contributed to a slow toxification of the platform, and only a few that helped to counteract it.

This situation didn’t happen intentionally: Twitter got blindsided on this problem, partly because of its leaders’ and employees’ predominantly white and male privileges. “Having privileges” is not a bad thing, except in that it makes it harder for you to understand how life experiences are different (and more difficult) for people who lack those privileges. In other words, having privileges gives you blind spots to many real people’s very real problems.

The blind spot of unchecked free speech

These particular blind spots prevented Twitter’s leadership from recognizing the severity of the abuse and harassment problem for a really long time. It’s a problem that had become a pervasive and persistent background radiation to the everyday user experience, but disproportionally affected marginalized groups. The company’s leaders never experienced the same kind of constant torrent of vile, abusive, and specifically bigoted harassment that women, people of color, Muslims, Jews, transgender people, and many other marginalized groups were experiencing on a daily basis.

As Charlie Warzel heard from numerous former employees, Twitter’s homogenous white male leadership was “often tone-deaf to the concern of users in the outside world, meaning women and people of color.” This neglect was never out of malice, or even indifference. But it was a failure to recognize how toxic the Twitter experience had become for many users, and those users were gradually moving their time, attention, and account activity elsewhere.

When Twitter started designing their product to encourage viral sharing of content, it opened the doors to all types of content to be shared that way, be it good or bad. The shocking ISIS video was a high-profile culmination of this, but many other pernicious (if less obvious) abuses of Twitter’s virality abounded. Reports of abuse, however, fell overwhelmingly on deaf ears as Twitter maintained a position of “neutrality”, considering verbal threats and harassment as acceptable conditions of upholding a (poorly-considered) version of freedom of speech for its users.

That version of freedom of speech is not universal, but specific to Twitter. Their former head of news, Vivian Schiller, described Twitter as “the free speech wing of the free speech party”, and added “that’s not a slogan, that’s deeply, deeply embedded in the DNA of the company.”

There is a glaring problem with this: the way that Twitter sees and thinks about free speech is not actually how free speech works. If you give a certain type of people unlimited free speech, they will use that to viciously suppress the free speech of entire groups of others. And the group that does the suppressing is always the group who currently enjoys the most free speech, whereas the groups they suppress are always the groups whose free speech is already besieged from many sides.

There is an intrinsic problem with believing that freedom of speech should be completely unrestricted: it elevates oppressive bigotry to the same level as the mere and uncontroversial existence of marginalized people. As commenter AllyF wrote, in a New Statesman discussion thread:

“What [people] fail to understand is that the use of hate speech, threats and bullying to terrify and intimidate people into silence or away from certain topics is a far bigger threat to free speech than any legal sanction. Imagine this is not the internet but a public square. One woman stands on a soapbox and expresses an idea. She is instantly surrounded by an army of 5,000 angry people yelling the worst kind of abuse at her in an attempt to shut her up. Yes, there’s a free speech issue there. But not the one you think.”

Freedom of speech is not about allowing everything to be said, but about allowing everyone to have a voice. But if one group’s voice is being used to prevent other groups from having a voice, then the first group is the threat to free speech we must manage. There are things people say we can and must absolutely object to, if we care about freedom of speech for all people. We don’t have to (and, in my view, ideally wouldn’t) ban people altogether, but we should ban their objectionable (and sometimes downright illegal) actions and educate them on why those actions are not permitted. (Important note: vocalizing your opinion on a public forum is an action, not just having an opinion.)

As a company, Twitter never seemed courageous enough to act as such an arbiter of its users’ content, despite having every legal right to do so. It attempted to be neutral, not realizing that there is no neutral stance to take when discrimination occurs around you. Moreover, biases are inherent to any product made by humans, especially one that facilitates communication between them. These biases skew towards those least likely to suffer from any kind of discrimination. To make a product or platform truly egalitarian, it must counteract those biases through intentional and conscious product design efforts, which include, but are not limited to, policy and business decisions.

The reluctance of the company’s leadership to address these problems head on, as they were still emerging, meant that it came down to the loudest — and often most aggressive — voices on the platform to drive the culture and norms that would ultimately represent what Twitter was to people. Not just to people on the platform itself, but to the outside world at large. Twitter, in choosing inaction for the sake of “free speech”, gave the reins of its representation to the very users about whom it received the most (well-founded) abuse reports. It gave up control over what its platform was becoming known for by refusing to play an active role in deciding that.

All of this intersects with many other factors, such as Twitter’s marketing efforts, or metrics suggesting that complaints about abuse made up only a tiny percentage of all tweets. That no doubt helped convince the upper echelons that abuse and harassment were not as significant to the company or platform as other concerns. But for the marginalized groups most affected, harassment was rapidly becoming the only thing they would discuss when talking about Twitter in other spaces. More diverse and empathetic executive leaders would have seen through the noise of comparative reports and metrics, and better heard what the people in need of their support and leadership were saying.

The combination of these design decisions and Twitter’s bias-blind policy of “neutrality” meant that the company had placed an onus of restraint onto its users. Restraint to never devolve into the pithy argument tennis its product unintentionally encouraged, even when faced with abuse or hate speech aimed at silencing them. Restraint not to abuse its design features, even unintentionally, to swarm and sea-lion people they disagreed with. Restraint not to use the product as intended when the absence of a recipient’s context masked the fact that they were already being inundated with messages much like the one you were about to send them.

While those outcomes were undoubtedly unintentional, Twitter absolved itself from the responsibility to fix the design problems in its product, placing that responsibility with their users instead. And during the period that this problem became so severe the United Nations became involved, Twitter also started leaving its developer community out in the cold. The result is that some users and developers started spending considerable time trying to reach out to Twitter with feature ideas and even comprehensive proposals to help address (some of) these problems, having been cut off from the option of making an abuse-preventing Twitter experience themselves. Most people simply reduced their activity on Twitter, or left the platform altogether.

The Twitter that can be

When original co-founder Jack Dorsey returned to (re)take the vacant CEO position for Twitter, the company had publicly acknowledged and even started addressing the problem. However, it’s hard to course-correct a ship of Twitter’s size when it’s gone so far in the wrong direction for so long. As a result, for quite some time Twitter’s public statements about combating the toxicity of its platform were vastly ahead of any pragmatic feature and policy changes that users were seeing.

The lack of transparency into what the company was actually up to, and what changes it was making internally, left marginalized users wondering just how serious Twitter was in effecting the dramatic shift they considered necessary for the company to adequately address their concerns about safety and usability.

There is the long-rumored, perhaps even expected new feature of a long-form text attachment that would allow users to write more nuanced views. This feature, if real and implemented well, would go a long way towards reducing or even eliminating the Argument Tennis experience of Twitter. Its latest move, to allow anyone to become Verified, is more an illusion of abuse prevention, as getting verified opens the door to a whole new swath of attack vectors for your abusers and harassers to make use of.

Twitter has broadened its scope and utility to become a media and publishing company more than a social network, even if it is still that. But shifting from social network to media and publishing doesn’t change the underlying problems surrounding how users interact with each other.

The argument that “it’s the internet, vicious hostility should be expected” is as flawed as it is outrageous. Facebook doesn’t have these problems. MetaFilter doesn’t have these problems. Tons of smaller free-for-all communities online don’t have these problems. And tools such as Civil Comments and ReThink already suggest ways to combat or reduce them. ReThink, especially, raises the interesting idea of presenting an abusive user with a dialog, when they post a clearly abusive message, informing them that their targeted recipient will never see it and might even automatically mute them, all without conscious action on the target’s part.
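That kind of intervention can be sketched as a simple gate on outgoing messages. To be clear, everything below is invented for illustration: the term list, the scoring function, and the auto-mute threshold are assumptions of mine, not ReThink’s or Twitter’s actual implementation, which would use a trained classifier rather than keyword matching.

```python
# Hypothetical sketch of a ReThink-style outgoing-message gate.
# ABUSIVE_TERMS and the scoring logic are toy stand-ins for a real
# abuse classifier.

ABUSIVE_TERMS = {"idiot", "worthless", "shut up"}

def score_message(text: str) -> float:
    """Toy abuse score: fraction of known abusive terms present."""
    lowered = text.lower()
    hits = sum(term in lowered for term in ABUSIVE_TERMS)
    return hits / len(ABUSIVE_TERMS)

def handle_outgoing(text: str, threshold: float = 0.3) -> dict:
    """Decide what happens before a message reaches its recipient."""
    if score_message(text) >= threshold:
        # The sender is warned; the recipient never sees the message,
        # and the sender can be muted without any action required on
        # the recipient's part.
        return {"delivered": False, "warn_sender": True, "auto_mute": True}
    return {"delivered": True, "warn_sender": False, "auto_mute": False}
```

The key design choice is that the consequence lands entirely on the sender’s side of the interaction: the target never has to see, report, or block anything for the intervention to work.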

Abuse loses its “appeal” very fast when it loses the satisfaction of being heard. And the number one argument abusers make is that blocking tools are “censorship”, because abusers are generally entitled people who mistake “freedom of speech” for “a right to forcibly be heard by whomever I choose to speak to”.

All of this is not a distraction from what Twitter can be, or a sideshow to it. Twitter is a heron covered in the slick grime of a catastrophic oil spill, and while it may still be able to take flight, it requires a herculean cleaning effort, important product and policy improvements, and the courage to take responsibility for the “tough calls” of simply enforcing its already fairly clear Terms of Service, before it can become a beautiful bird once more.

Courage of this kind may be hard to find in Silicon Valley, but when it comes to creating a product you can stand for, it is vital. There’s a great line in Moneyball that captures this well:

“It’s a problem you think we need to explain ourselves. Don’t. To anyone.”

Twitter can be the pulse of the people, a representation of the ongoing connection between all of mankind. It can be our source of entertainment, as well as a democratic tool empowering people to have their voice. A communication platform for the human consciousness.

But for Twitter to become all that, it needs people on the platform who disagree with each other, and do so in a civil and constructive manner. The burden of policing and tailoring the user experience to something civil and enjoyable should not rest so completely and utterly on users’ shoulders. User-created solutions like shared block lists are subpar at best, and mostly serve to further erode the discussion side of Twitter that allows us to benefit from new perspectives.

Shouting down women and people of color to the point of exclusion from the conversation altogether is neither a new perspective, nor one worth protecting. Twitter doesn’t need to protect all speech, because hate speech, besides being illegal in most countries already, is not worthy of protection in the first place. But even abusive speech that isn’t technically hate speech does not contribute to a valuable platform of discussion and debate, nor to an entertainment media platform, nor to a worthwhile publishing platform. Such speech only destroys advertisements’ click rates, tarnishes the reputation of a platform, and pushes decent users away from the product.

Whatever type of platform Twitter may aspire to be, if not all of the above, achieving it hinges on the company’s ability to clear the house of the toxic cobwebs they’ve allowed to cover every nook and cranny.

You are reading Product Matters: a fresh perspective on product design, written and curated by Faruk Ateş. Follow on Medium, Twitter and Slack for updates.
