The feature is designed particularly for public figures experiencing a sudden influx of negative attention.
Instagram is rolling out a new safety feature called Limits, which allows users to filter comments and DMs “during spikes of increased attention”.
Limits can be used to automatically hide message requests and comments from accounts that don’t follow the user, or that only started following them recently. Interactions from these accounts remain hidden unless the user approves them. The feature is being made available worldwide from today (11 August).
Announcing the move, Adam Mosseri, head of Instagram, said: “We developed this feature because we heard that creators and public figures sometimes experience sudden spikes of comments and DM requests from people they don’t know. In many cases this is an outpouring of support — like if they go viral after winning an Olympic medal. But sometimes it can also mean an influx of unwanted comments or messages.”
Mosseri pointed to the racist abuse directed towards football players after the Euro 2020 final as an example of the kind of situation Limits is designed to address.
“Creators also tell us they don’t want to switch off comments and messages completely; they still want to hear from their community and build those relationships,” he added. “Limits allows you to hear from your longstanding followers, while limiting contact from people who might only be coming to your account to target you.”
Mosseri also said that Instagram is “exploring” ways for the app to automatically detect big spikes of interaction on public accounts in order to suggest to users that they use Limits.
Alongside Limits, the Facebook-owned app also announced that it would be strengthening its warnings against “potentially offensive comments”. Users are already shown a cautionary message if they try to post something the app has flagged as possibly harmful, and a stronger warning if they try to do so repeatedly. Now, the stronger warning will be delivered the first time.
This warning reminds users of the app’s Community Guidelines and cautions them that their comment may be hidden or deleted. Instagram said it has found that these warnings “really discourage people from posting something hurtful”.
The app is also expanding the list of words and phrases covered by its Hidden Words feature, which was announced in April. Instagram said it will be continuing to update the list “frequently” and has added a secondary option allowing users to additionally filter phrases that may be “potentially harmful” but not in violation of the app’s rules.
Twitter has reportedly been considering similar features recently.
Facebook, which owns Instagram, last month launched its Women’s Safety Hub to collect resources for women in the public eye who experience online abuse.
Mosseri went on to say that Instagram knows there’s “more to do”, including improving systems to “find and remove abusive content more quickly and holding those who post it accountable”. He said the app would work with industry, governments and NGOs to “educate and help root out hate”.
“This work remains unfinished, and we’ll continue to share updates on our progress.”
Article Source: Silicon Republic