Obviously This Editorial Is Bad…

David Clayman
Aug 15, 2019

Like many of my friends, I’m constantly annoyed with the media. They seem to focus on the wrong story, elevate the banal, or fall for obvious traps set for them by dishonest trolls. The primary focus of my ire is the New York Times, a paper that I subscribe to and have read my entire life. While I have a deep appreciation for their excellent and crucial journalism, in many ways they also remain rooted in the past, unable to pivot or adequately transform after a massive failure in reporting on the 2016 election. Nowhere is their crusty “bothsidesism” more infuriating than on the Opinion page, which intersperses expert analysis with the yammering of hacks, gossips, and contrarians who are clearly doing it for “the clicks.”

Rather than bombard the paper with useless angry tweets I’ve decided to compose useless angry essays.

[Illustration: “social media” by David Clayman]

Some weeks ago, Bret Stephens penned a particularly obnoxious editorial in the New York Times that loosely tied millennial sentiment toward Joe Biden to online call-out culture and the persistent stereotype that an entire generation is too touchy. Besides cherry-picking references and using his platform to settle petty grievances against random Twitter accounts, Stephens’ biggest failure is a lack of curiosity and insight into the issue he raises.

How did callout/cancel culture take hold? What catalyzed its growth in the first place? Is it possible that it’s a natural and necessary response to other societal trends? I don’t have all-encompassing answers to these questions, but I think I can provide some context that makes the millennial point of view easier to understand and potentially empathize with.

First, let me provide a bit of my background. I’ve spent my “professional” career enmeshed in tech culture watching and participating in the rise of social media. I put professional in quotes because I work in video games, a field sometimes dismissed as less serious because of its frequently juvenile content. This is a mistake.

Gaming has been a bellwether for every new-media issue that has given the traditional press whiplash. Every fresh nightmare brought on by rapidly advancing technology has likely been knocking around in the gaming community first. Why does Black Mirror feel so prescient? It’s written by a former game critic.

This is also an industry that commands a staggering amount of capital investment and mind share, and has for some time. A couple of weeks ago Chris Hughes, one of the founders of Facebook, wrote an editorial on why that platform should be broken up. He reminisced about the seismic moment when News Corp purchased Myspace for $580 million, a sum that seemed to cement social media as the next wave of the internet. What’s lost to time is that in the same year, Murdoch purchased the company I worked for, IGN Entertainment, for $650 million. It was nothing more than a loose collection of sites that mostly posted game reviews and strategy guides.

Murdoch wanted access to a captive audience, and besides the burgeoning social media market, gamers generated the most traffic. After his purchase he dropped by the office and looked over my shoulder as I photoshopped a cat onto my coworker’s head. His entourage was unimpressed, but he smiled. I guess he knew something they didn’t.

I later worked as a brand manager for a game publisher, running advertising campaigns built on direct marketing through social media. The traditional media still can’t adequately describe the Cambridge Analytica scandal. When the story broke, it sounded to me like a description of a standard targeted marketing campaign. I never engaged in deceitful data collection, but the tools to do so were right there on display within the Facebook ad platform, available to anyone with a credit card.

(There’s a reason the Trump campaign is already spending millions of dollars on Facebook and Google ads for his reelection. The left is only beginning to respond in kind, with digital marketing firms focusing specifically on politics.)
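
For readers who have never run such a campaign, here is a minimal, hypothetical sketch in Python of what a targeted “audience definition” amounts to. The field names, values, and matching logic are illustrative only; this is not the Facebook ad platform’s actual API. The point is simply how little it takes to slice a population by demographics, location, and interests.

```python
# Hypothetical sketch of a targeted-advertising audience definition.
# Names and fields are illustrative, not any real ad platform's API.
from dataclasses import dataclass, field

@dataclass
class AudienceSpec:
    age_range: tuple = (18, 34)
    locations: list = field(default_factory=lambda: ["US"])
    interests: list = field(default_factory=list)        # topics or pages a user follows
    custom_audience: list = field(default_factory=list)  # e.g. contacts uploaded by the advertiser (hypothetical)

def matches(user: dict, spec: AudienceSpec) -> bool:
    """Return True if a user profile falls inside the targeted audience."""
    in_age = spec.age_range[0] <= user["age"] <= spec.age_range[1]
    in_location = user["country"] in spec.locations
    shares_interest = bool(set(user["interests"]) & set(spec.interests))
    return in_age and in_location and shares_interest

spec = AudienceSpec(interests=["competitive gaming", "esports"])
user = {"age": 24, "country": "US", "interests": ["esports", "hip hop"]}
print(matches(user, spec))  # True: this user would be served the ad
```

Swap the interest list for political leanings and the uploaded contact list for harvested profile data, and you have the rough shape of the campaigns the traditional press struggled to explain.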

In these two roles I also had a courtside view of the progenitors of the internet’s most toxic online communities. What I learned early in gaming was that anonymity within an insular online community encourages abhorrent behavior. This was as obvious in AOL chatrooms in 1995 as it was on IGN message boards in 2005, and as it is on Twitter or Facebook today. These spaces only function as intended when users face consequences for misbehavior, when they are moderated.

In the old days, moderation was done entirely by hand. Individuals had to remove offensive or abhorrent content and ban its authors. The job was time-consuming and difficult, and it very rarely paid anything at all.

(The history of the online content moderator is an intensely important topic, and you should see Kate Klonick’s writing for a thoroughly informed deep dive.)

The community understood that moderator discretion was the law of the land. It wasn’t always fair or even-handed, but it was a good-faith attempt to adhere to the site’s TOS (terms of service), and if you stepped out of line, you would be kicked off. Complaints about censorship and constitutional freedom of speech (which does not apply to a private service) were as common then as they are now, but it was also understood that the power of the moderator held the line against a community becoming toxic.

What we also knew at that time was that a toxic community was a breeding ground for behavior beyond bad manners. Unmoderated boards would nosedive into marketplaces for stolen software and illegal pornography, and would instantly fill up with spam and hate speech. The slippery slope of censorship might be a convenient philosophical argument against moderation, but in the real world an unregulated space on the internet almost always slides into direct conflict with state and federal law.

The necessity of a moderated community wasn’t really called into question until sites began to monetize community discussion.

The first step into these dark waters was article comments, which pulled community discussion off of message boards and onto the same page as the content itself. This presented a much more visible platform for anonymous users. It also increased ad revenue, because engaged users would spend more time on an article, and articles that spawned more conversation became more valuable. Controversial articles spawned more conversation. A marketplace was created that placed a high value on incendiary content, both within the article and in response to it.

This also decentralized the message board. Where previously a moderator could visit one location to do their job, user posts were now growing exponentially in volume and scattered across the site. For multiple reasons, moderation became more difficult.

There was a common saying among writers at that time: “Don’t read the comments.” Some communities remained under control, but more frequently the comments were a minefield of hate and trash. At the same time, “user engagement” was becoming a more important metric. For those of us who worked through that era, it isn’t difficult to view Facebook and Twitter as nothing more than article comments distilled into products.

This new focus on user participation could be baffling to writers (soon to be called content creators). Trolls that were previously banned from message boards were now allowed to run rampant because of their value to the platform. The cultural importance of moderators dissolved in the wake of economic necessity and a wave of bad-faith arguments about why anonymous people should be allowed to say awful things. Long story short, the trolls won. Users gained an equal platform to content creators, if only slightly lower on the page.

This same scenario persists today, with trolls signal-boosted by the marketplace and by platform holders who are held hostage by the traffic their most abusive users generate. The Trolls (now with a capital T) have built their own media networks that profit off the viral nature of awful content and then spread this content through legitimate ad buys.

The platform holders are in the same position of diving into a bottomless pit of unmoderated content for boosted metrics. Except now the consequences have grown along with the traffic. The rise of the far right in Sweden, radicalizing propaganda in Brazil, and of course Russia’s backing of Trump have all been fueled by purposely unregulated social platforms.

When someone like Jack Dorsey makes mealy-mouthed philosophical arguments about free speech, he’s diverting the conversation in the same fashion as message board trolls did before social media. Twitter polices hate speech in Europe because it is required by law. It ignores hate speech in America because it generates revenue. A Twitter employee admitted as much when they noted that algorithms for identifying hate speech aren’t deployed in the US because they would probably implicate members of the Republican Party. These are very popular trolls.

How does this tie back to millennial call-out culture?

These platforms are the conduit for almost all interpersonal interaction and, for many, a primary source of revenue. In that sense, to forgo participation in social media is to place yourself outside of society. For many, YouTube, Facebook, and Twitter have gone from conversational tools to a necessary public utility. The toxicity isn’t just obvious; it’s an unavoidable part of everyday life.

Due to this forced interaction with unregulated platforms, the call for moderation has been steadily growing louder. The platforms have offered mostly lip service. Some claim to employ machine learning and algorithms, which sounds impressive but in practice weeds out only the most obvious bots and spam. Reporting tools are often slow or unresponsive, and where real people are employed, they are frequently in low-cost overseas markets and struggle to grasp the cultural context of offensive content. Universally, terms of service remain vague, both to attract and hold the largest possible user base and to provide outs for popular users who walk the line of inappropriate behavior. The platform holders continue to abdicate responsibility in favor of profit.
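
To illustrate why that kind of automated moderation catches only the lowest-hanging fruit, here is a minimal sketch in Python of exact-match keyword filtering. The blocklist, example posts, and function name are hypothetical, and this is not how any particular platform’s system is implemented.

```python
# Toy illustration: naive keyword filtering of the kind that catches only
# the most obvious abuse. The blocklist and example posts are hypothetical.
BLOCKLIST = {"spamlink.example", "obvious_slur"}

def is_flagged(post_text: str) -> bool:
    """Flag a post only if it contains an exact blocklisted token."""
    tokens = post_text.lower().split()
    return any(token in BLOCKLIST for token in tokens)

posts = [
    "Buy followers now at spamlink.example",    # caught: literal match
    "You people know exactly what you are ;)",  # missed: coded, context-dependent
]

for post in posts:
    print(is_flagged(post), "-", post)
```

A filter like this flags the literal spam link and sails right past the coded, context-dependent abuse, which is exactly the gap that cheap outsourced review and vague terms of service are supposed to cover, and don’t.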

For these reasons, the community must police itself. If anyone can be a troll without consequence, everyone must also become a moderator. Without a guiding TOS, self-appointed moderators have to generate their own shifting and amorphous set of standards to operate against. When the platform holder refuses to ban, the community must resort to calling out.

This is probably where Bret Stephens gets frustrated with the seemingly “touchy” millennial response to those who travel outside the boundaries of polite conversation. There are no hard rules, and the penalties are ill-defined and can seem unfair. But self-appointed moderators don’t have access to the tools of the platform they work within. If Nazis and death threats are allowed equal space (more than equal when boosted by ad revenue) next to regular user interactions, it’s understandable that users’ responses to content that offends them might amplify over time.

Does calling out work? There are examples showing that when it leads to a ban, it does. Take, for example, Milo Yiannopoulos, a professional troll and former Breitbart hack who was only recently removed from Facebook and Twitter. He is now seemingly broke and advocating violence to get his accounts reinstated. It’s an extreme position that you probably didn’t hear about, precisely because he was banned, a practice so rare that the term for it has been elevated to “deplatforming,” a word that makes a necessary punishment sound draconian.

Crucially, and in another area that Bret glosses over, it’s important to remember the difference between call-out culture, cancelling, and deplatforming. Calling out or attempting to cancel someone is a defense mechanism with no guaranteed teeth. The community can band together to signal boost a message, but it is completely dependent on the platform to take action. Deplatforming is an actual penalty imposed by the platform. To complain about call-out culture singles out the powerless and ignores the companies that made our public discourse toxic and dangerous in the first place.

I don’t endorse all instances of call-out or cancel culture, but hopefully the reasoning above puts this phenomenon into a greater context than the clumsy ramblings of Bret Stephens and other pundits who continue to waste editorial space lobbing tired and lazy insults at millennials while ignoring the important issues at hand.

Back when message boards were tightly moderated, there was a practice that could easily earn you a ban called “flame baiting”: a bad-faith argument made by someone who was being purposely contrarian in order to enrage the community. It could be difficult to define a one-off instance of flame baiting, but trolls were often banned after a pattern of posts aimed directly at communities that would be offended by the content. Bret Stephens, a climate skeptic and a neocon who has no consistent point of view other than “going against the grain,” has made a career of flame baiting. A responsible moderator would have banned him a long time ago.
