While it’s always been hot on censoring nudity and anything mildly sexual, Instagram has been pretty lax when it comes to tackling trolls. After admitting the app is ‘too slow’ in addressing harmful content last month, Instagram’s head Adam Mosseri has now announced the rollout of the platform’s new anti-bullying features.
Powered by artificial intelligence, the first feature aims to ‘encourage positive interactions’, notifying users when their comment is considered offensive. Before the comment is posted, Instagram asks: ‘Are you sure you want to post this?’ The user must then reflect on what they’ve written before deciding whether or not to go ahead with sharing it – the person on the receiving end will only be notified if the original person (AKA bully) decides to confirm their comment.
Although many trolls are unlikely to be deterred by a prompt, it’s a good way to reduce comments made in the heat of the moment by forcing users to think twice about something they might later regret.
The second feature is stronger than the first, giving the power back to the bullied. Called Restrict, it will enable users to protect their account from unwanted interactions, without going as far as blocking someone.
“We’ve heard from young people in our community that they’re reluctant to block, unfollow, or report their bully because it could escalate the situation,” Mosseri said in a statement. “Some of these actions also make it difficult for a target to keep track of their bully’s behaviour.”
Once a person is restricted by an account, they will no longer be able to see when that user is online, or when their DMs have been read. Comments from a restricted person will also only be visible to that person themselves, and must be approved by the original user before anyone else can see them. The restricted person won’t know that their comments are hidden from the public – unless they’re professional bullies and have a finsta especially for trolling.
Although Restrict isn’t being rolled out just yet – Mosseri confirmed it would be coming “soon” – it’s definitely a step in the right direction to curb the platform’s bullying problem. By stripping trolls of their power, without them even knowing, it will enable users to feel more in control of their account.
The platform has been under mounting pressure to crack down on harmful content since the tragic suicide of 14-year-old Molly Russell in 2017. Earlier this year, Mosseri met with UK health secretary Matt Hancock to discuss security and safety on the platform, after Russell’s father publicly held Instagram accountable for his daughter’s death. These newly announced updates also follow February’s rollout of ‘sensitivity screens’, which hide certain images – including those depicting self-harm – from vulnerable young people.
“It’s our responsibility to create a safe environment on Instagram”, Mosseri concluded. “This has been an important priority for us for some time, and we are continuing to invest in better understanding and tackling this problem.”