Last week, Washington Post reporters Naomi Nix and Sarah Ellison published a piece titled, “Following Elon Musk’s lead, Big Tech is surrendering to disinformation.” Facebook, YouTube, and X have been “abandoning their most aggressive efforts to police online falsehoods,” they write. The reporters call attention to how employees are “now asked to spend more of their time figuring out how to minimally comply with a booming list of global regulations” rather than creating new ways to keep content trustworthy and free from abuse.
The negative effects of online abuse and disinformation on people are clear, so why do the largest platforms appear to be doing the bare minimum to keep people safe, and why do they seem indifferent to finding solutions? It comes down to a fundamental flaw in the way social media platforms have operated from the beginning: the belief that engagement is everything.
For more than 20 years, major social media platforms have been built around a core revenue model: advertising and the sale of valuable user data. That approach has made platforms’ goals simple: find users who will create and engage with content; keep them there at all costs; bring in more users; and sell ads or data. Rinse and repeat.
Psychologists at Cambridge University have shown that negative posts garner more engagement than more benign ones. At the same time, advertisers are conditioned to seek out the largest, most engaged communities as a way to get their messages in front of the greatest number of people. As an unintended consequence, platforms are often incentivized to look the other way when things get hateful: anger, hate, and abuse fuel the engagement metrics that advertisers want.
There is also evidence that the most “engaging” content is not necessarily the best for ad conversions, which are the main revenue driver for most social platforms. Consumer behavior studies by the Association for Consumer Research and the Kellogg School of Management indicate that positive environments and experiences are stronger drivers of conversions and purchasing decisions. Better moods lead to better recall of information, in ads as well as content, and more thoughtful consideration of new products and ideas.
Even with this evidence in hand, it is hard to pivot companies away from entrenched practices. Efforts to address harm and abuse have historically happened at the margins, putting trust and safety teams at odds with platform leadership. It is no secret that actions such as removing accounts and discouraging sensationalist content can have a dampening effect on the engagement numbers needed to drive revenue. Compliance with global regulations like the EU Digital Services Act will, initially, only nominally improve the experience of consumers until there is a new organizing principle for platforms.
There have been past efforts to look at metrics beyond engagement. Facebook, where I previously worked, explored a measurement called Meaningful Social Interactions that prioritized posts from friends over those that were simply viral. However, in response to falling engagement overall, MSI was quickly reverted to serve that old master: engagement. The Facebook Papers suggested it resulted in a deepening of echo chambers.
There are signs that brands and advertisers are beginning to question raw engagement as a focus. X has lost half of its advertising revenue and is now projected to lose roughly $2 billion in ad revenue this year—partly because the platform has become “a place where people can post racist, sexist, or otherwise harmful speech without much consequence,” as reported by Vox’s Shirin Ghaffary. (X does not appear to have responded to requests for comment on this topic.)
My hope is that the next generation of platforms will continue to push this shift forward and highlight the business value of engagement quality over quantity. I have started to see this approach work first-hand at T2, a social media site I founded with safety as a core goal.
Hopefully, more platforms will recognize that a blind focus on engagement is not good for business. Brands have skin in this game, too. For their part, marketers must unlearn bad habits and broaden their thinking beyond raw engagement numbers. Instead, the focus has to be on the quality, not the quantity, of engagement.
X cannot quickly stamp out unchecked hate speech through regulatory checklist exercises, and attempts to do so will not bring advertisers back. What’s worse, the flood of negative press is adding to the company’s rapid brand erosion. Yes, Elon Musk should have seen the signs, but he’s not the only one who has been blinded by engagement metrics. It’s time for everyone to change their thinking.