Twitter is testing a revamped reporting process to make it easier for users to alert the platform to harmful behaviour.
The “overhauled” process is designed to make flagging harmful content simpler.
“Twitter receives millions of reports: everything from misinformation and spam to harassment and hate speech. It’s their way of telling Twitter, ‘hey, this isn’t right’ or ‘I don’t feel safe’. Based on user feedback, research, and an understanding that today’s reporting process wasn’t making enough people feel safe or heard, the company decided to do something about it,” Twitter said in a blog post.
“The new approach, which is currently being tested with a small group in the US, simplifies the reporting process. It lifts the burden from the individual to be the one who has to interpret the violation at hand. Instead it asks them what happened,” it said.
As part of the process, which follows a “symptoms-first” method, Twitter will first ask the user what’s going on.
“The idea is, first let’s try to find out what’s happening instead of asking you to diagnose the issue,” it explained.
Valuable data inputs for Twitter
“In moments of urgency, people need to be heard and feel supported. Asking them to open the medical dictionary and saying, ‘point to the one thing that’s your problem’ is something people aren’t going to do,” said Brian Waismeyer, a data scientist on the health user experience team that spearheaded this new process.
“If they’re walking in to get help, what they’re going to do well is describe what is happening to them in the moment,” added Waismeyer.
“What can be frustrating and complex about reporting is that we enforce based on terms of service violations as defined by the Twitter Rules,” said Renna Al-Yassini, Senior UX Manager on the team.
“The vast majority of what people are reporting on falls within a much larger gray spectrum that doesn’t meet the specific criteria of Twitter violations, but they’re still reporting what they are experiencing as deeply problematic and highly upsetting,” Al-Yassini added.
The microblogging platform is hoping to improve the quality of reports by refocusing on the experience of the person reporting the tweet and gathering more first-hand information about the incident. This can help Twitter better understand how people experience certain content and, in turn, be more precise when addressing or ultimately removing it.
“This rich pool of information, even if the tweets in question don’t technically violate any rules, still gives Twitter valuable input that they can use to improve people’s experience on the platform,” it said.
How it works
After a user reporting a violation describes what happened, Twitter will present them with the Terms of Service violation it thinks might have occurred, at which point it will ask: is that right? If not, the person can say so, which will help signal to Twitter that there are still gaps in the reporting system.
“All the while Twitter is gathering feedback and compiling learnings from this chain of events that will help them fine tune the process and connect symptoms to actual policies. Ultimately, it helps Twitter take appropriate action,” it explained.
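To make the mechanics concrete, here is a minimal sketch of how a “symptoms-first” flow of this kind might be structured. Twitter has not published its implementation, so everything below is an assumption for illustration: the Python function, the symptom-to-policy mapping, and the confirm callback standing in for the “Is that right?” prompt are all hypothetical.

```python
# Hypothetical sketch of a "symptoms-first" report flow.
# Categories, mappings, and names are illustrative assumptions;
# Twitter has not published implementation details.

# Step 1 input: plain descriptions of what the reporter experienced,
# mapped to the policy most likely to apply.
SYMPTOM_TO_POLICY = {
    "I'm being attacked or insulted": "Abusive behaviour",
    "I'm seeing misleading claims": "Misinformation",
    "I'm getting unwanted repetitive replies": "Spam",
}

def file_report(symptom: str, confirm) -> dict:
    """Ask what happened first, then propose a likely policy match
    and let the reporter confirm or reject it."""
    guess = SYMPTOM_TO_POLICY.get(symptom)
    if guess is None:
        # No rule matched, but the report is still recorded: reports in
        # the "gray spectrum" are useful signal even without a violation.
        return {"symptom": symptom, "policy": None, "confirmed": False}
    # The "Is that right?" step: a rejection flags a gap between the
    # reporter's experience and the policy taxonomy.
    confirmed = confirm(f"This looks like it may be: {guess}. Is that right?")
    return {
        "symptom": symptom,
        "policy": guess if confirmed else None,
        "confirmed": confirmed,
    }

# Example: a reporter describes the symptom; a real UI layer would
# supply the confirmation prompt instead of this stand-in lambda.
report = file_report("I'm being attacked or insulted", confirm=lambda q: True)
```

The choice in this sketch to record a rejected or unmatched report, rather than discard it, mirrors the article’s point that answering “no” is itself data Twitter can use to close the gap between what people experience and what its rules cover.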
The new process will be rolled out to a wider audience next year.
Separately, the platform is also testing an option for users to add one-time warnings to photos and videos where relevant.
“People use Twitter to discuss what’s happening in the world, which sometimes means sharing unsettling or sensitive content. We’re testing an option for some of you to add one-time warnings to photos and videos you tweet out, to help those who might want the warning,” Twitter wrote from its official Twitter Safety account.