Following its crackdown on drawings and memes depicting self-harm, Instagram has announced that it will start using an AI system that warns users when the caption on their photo or video may be considered offensive.
The social media platform announced the feature in a blog post this week, saying it will roll out immediately in some countries. The idea is to give users a “chance to pause and reconsider their words”.
If a user types an offensive caption, they will receive a pop-up notification or “nudge”, informing them that what they’re saying is similar to other content reported for bullying. Users will then be given the option to edit their caption before it’s published.
“Results have been promising and we've found that these types of nudges can encourage people to reconsider their words when given a chance,” Instagram wrote in the blog post.
“In addition to limiting the reach of bullying, this warning helps educate people on what we don't allow on Instagram and when an account may be at risk of breaking our rules.”
The move is part of a wider initiative by Instagram to crack down on online bullying, which includes adding sensitivity screens that blur images of self-harm. In June, the platform’s head Adam Mosseri admitted that it was “too slow” in addressing harmful content. “We were under-focused on the downsides of connecting people. Technology is not good or bad, it just is,” he explained.
You can read Instagram’s full blog post here.