Twitter Tests New Self-Reporting Option for Potentially Sensitive Images and Videos in Tweets

Twitter is testing a new option that allows users to add their own sensitive content warning screens to images attached to tweets, providing another way to limit unwanted exposure to graphic content.

People use Twitter to discuss what’s happening in the world, which sometimes means sharing unsettling or sensitive content. We’re testing an option for some of you to add one-time warnings to photos and videos you tweet, to help those who might want the warning. pic.twitter.com/LCUA5QCoOV

– Twitter Safety (@TwitterSafety) December 7, 2021

As you can see in this example, when you attach a photo or video to a tweet, you can now select a flag option from the three-dot menu. From there, you can indicate whether the image contains nudity, violence, or other sensitive content, helping other users avoid unwanted exposure.

The warning screen then displays a notice that the tweet author has flagged the content, with the image itself hidden behind a blurred overlay.

Twitter already has a screening system for sensitive content to help users avoid it, which relies on self-reporting alongside automated detection of violations, though it is not a perfect system. This additional measure provides another layer of protection and could further limit exposure, as more people may flag their own content to avoid potential restrictions or penalties for posting it.

Twitter could also expand the self-reporting approach in the future to add further value to the tool, and any measure that increases user safety by reducing unwanted exposure can only be positive.

The new feature is currently being tested with some users.
