Is demystifying the algorithm the key to safer social media?

Under fire for promoting harmful content to vulnerable teens, TikTok has announced plans to make its recommendations more transparent

Text Thom Waite
Illustration Callum Abbott

The algorithms that decide what we see when we log onto any given social media platform are notoriously opaque, and for the most part we allow them to guide our scrolling habits with very little thought or intervention. Why is Twitter so insistent that I follow art history accounts with Roman bust profile pictures and dubious intentions? Why is Instagram’s Explore page so out of touch with my actual interests? Why does TikTok exclusively feed me videos of dogs making friends with unlikely animals? OK, that last one makes sense – as for the rest, it’s a mystery.

It’s become clear, however, that there are some dark side effects to the algorithms that Big Tech uses to “manufacture serendipity” in our online lives. Last week (December 14), for example, the Center for Countering Digital Hate published a report titled “Deadly By Design”, which revealed that TikTok’s “thermonuclear algorithm” directs content related to self-harm and eating disorders to vulnerable users.

Researchers from the CCDH created TikTok accounts for fictional 13-year-olds across four separate countries (the US, UK, Australia, and Canada) to test how they were targeted by the app’s algorithms. Each new account watched 30 minutes of algorithmically recommended content from its ‘For You’ page, liking any videos related to body image, mental health, or eating disorders. Reportedly, eating disorder and self-harm content was recommended to the fictional teens within minutes of making an account. Accounts considered “vulnerable” were also targeted with 12 times as many self-harm videos as “standard” teen accounts.

It isn’t just TikTok. The new research echoes a lawsuit launched against Meta in June this year, which claims that Instagram’s “addictive” algorithm caused a preteen girl to develop an eating disorder, self-harm behaviours, and suicidal ideation, partly by pushing “thinspo” or “thin-spiration” content to her Explore page. It also follows Instagram’s admission, back in 2021, that it had promoted pro-eating disorder content to teens.

Of course, it’s no secret that social media platforms are inherently addictive, and push polarising content in order to boost engagement and, in turn, ad revenue. But the dangerous real-world impact, especially on young users, is now seemingly undeniable. So what are we – or, more importantly, the tech companies themselves – going to do?

In the CCDH report, the organisation lays out recommendations to help cultivate a safer online environment, including “proactive, informed enforcement” against eating disorder content and the coded hashtags used to share it, and legislation to hold social media companies accountable for the content their algorithms promote. Notably, algorithmic transparency comes top of the list, with the CCDH recommending: “TikTok must provide full transparency of its algorithms and rules enforcement, or regulators should step in and compel the platform to do so.”

Shortly after the report was published, TikTok actually announced a new feature that aims to shed some light on its algorithm, which began rolling out on Tuesday (December 20). The feature – which appears as a question mark icon on your FYP – tells users why they were recommended a certain video, citing factors such as previous interactions, content the user has recently posted, or content that is popular in the user’s region.

“This feature is one of many ways we’re working to bring meaningful transparency to the people who use our platform,” says TikTok in a press release. “Looking ahead, we’ll continue to expand this feature to bring more granularity and transparency to content recommendations.”
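
To make that a little more concrete, here is a minimal, purely illustrative sketch of how an explanation like this could be assembled from the factors TikTok cites (previous interactions, the user’s own posts, regional popularity). The RecommendationSignals structure and explain_recommendation function are hypothetical stand-ins, not TikTok’s actual system or API.

```python
from dataclasses import dataclass

# Hypothetical sketch: the factor names mirror those TikTok cites in its
# announcement, but the structure and logic are illustrative only.

@dataclass
class RecommendationSignals:
    liked_similar_videos: bool    # previous interactions with similar content
    posted_related_content: bool  # content the user has recently posted
    popular_in_region: bool       # trending in the user's region

def explain_recommendation(signals: RecommendationSignals) -> list[str]:
    """Return human-readable reasons a video might land on a For You page."""
    reasons = []
    if signals.liked_similar_videos:
        reasons.append("You've interacted with similar videos recently.")
    if signals.posted_related_content:
        reasons.append("This video relates to content you've posted.")
    if signals.popular_in_region:
        reasons.append("This video is popular in your region.")
    return reasons or ["This video was suggested to help you discover new content."]

# Example: a user who likes similar videos and lives where the clip is trending
print(explain_recommendation(RecommendationSignals(True, False, True)))
```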

Could transparency alone be the key to making the platform safer? Maybe not – users trapped in harmful content cycles probably care less about how they got there than about what they’re being shown. However, demystifying the algorithm may make it easier to diagnose how harmful content is suggested, and to develop fixes and legislation as a result.

Speaking of toxic feedback loops on social media: Elon Musk has repeatedly criticised the opaque algorithms that determine our content intake, warning: “You are being manipulated by the algorithm in ways you don’t realise.” Months before he bought Twitter for $44 billion at the end of October, he also floated the idea of making the platform’s recommendation algorithm completely open source, which would be a big step toward unpicking its effect on our brains.

Shortly after the deal was finalised, Musk appeared to double down on the plan. In his first statement as the company’s new owner, he said that he wants “to make Twitter better than ever” by, among other things, “making the algorithms open source to increase trust”. Unfortunately, the open source idea appears to have been lost in the ensuing chaos, although Elon has had time to ban his impersonators, introduce a disgusting new colour scheme, and run polls about whether he’s fit to lead the company (answer: no lol). Hopefully he can fulfil his promise and make some strides toward transparency before he finds a CEO “foolish enough” to take his place.