But gray areas in those rules and uneven enforcement of them make it unclear how, or even if, executives at those companies would decide to remove them.
As of Saturday, Twitter continued to host tweets with the Pentagon’s documents, some of which had been up since at least Wednesday. There is no indication that Elon Musk, who bought Twitter nearly six months ago, will take any action against the tweets with the classified documents.
Two days earlier, Musk seemed to respond sarcastically to a tweet about the leaked material. “Yeah, you can totally delete things from the Internet – that works perfectly and doesn’t draw attention to whatever you were trying to hide at all,” he wrote.
On Discord, which is a messaging platform popular with video game players, the Pentagon documents may have been circulating as early as March. Since Discord chat groups – known as servers – are not directly managed by the company as a Facebook or Twitter feed is, the distribution of the Pentagon documents would have been difficult to spot.
Musk did not respond to a request for comment Saturday, and Discord declined to comment. It is not known if the companies, which are both based in the United States, have been asked to remove the Pentagon material.
In the past, Twitter may have removed the material under rules that prohibit the publication and distribution of hacked materials, two former executives told The New York Times. Under this policy, Twitter would remove tweets with “real or synthesized hacked materials” or place warning labels on the material. Some of the Pentagon material circulating on social media may have been manipulated. But there were caveats to Twitter’s rules, as they were described in a policy document, which was last updated in October 2020. The rules allowed for exceptions for material that forms the basis for reporting by news agencies. And debates inside social media companies about what to allow online have often been similar to discussions that traditional media have about whether leaked or hacked material is of enough public interest to justify publishing.
It was not clear Saturday whether the Pentagon material was hacked or intentionally leaked – the images circulating appeared to be photographs of documents. The documents could fall into a gray area that, at least in the past, would have led to discussion among compliance officers inside the company about whether they qualified for a takedown.
Twitter invoked its hacked-material policy in October 2020 to block the circulation of a New York Post article reporting that the FBI had seized a computer that purportedly belonged to Joe Biden's son Hunter. Twitter's leaders, including then-CEO Jack Dorsey, later called the decision a mistake.
The former executives, who spoke to the Times on the condition of anonymity for fear of retribution from Musk, said Twitter often received reports of potential violations of its policies from U.S. government organizations.
But since acquiring the company in October, Musk has shrunk the groups responsible for moderation, and more than 75% of Twitter's 7,500 employees have been fired or have left. Ella Irwin, Twitter's head of trust and safety, did not immediately respond to requests for comment.
Twitter has removed or prevented the circulation of content at the behest of governments such as India and on Musk’s whims.
This past week, Twitter also began restricting the circulation of, and engagement with, links to Substack, a newsletter platform, after the startup unveiled a Twitter-like service. On Friday, many Substack writers found that tweets containing links to their Substack pages could not be liked or retweeted.
Discord surged in popularity during the pandemic, moving beyond its video game roots. By late 2021, the platform had more than 150 million active users each month.
Discord provides so-called servers that are essentially chatrooms, where people can discuss their hobbies and message with one another or join audio calls. Some servers are public and contain thousands of people, while others – such as servers made just for a group of friends – are private.
This arrangement has enabled Discord to thrive but has also led to problems with harmful content. Ensuring that Discord users follow the platform’s policies and refrain from posting inappropriate or questionable material has largely been left up to the individuals who create the servers, some of whom deputize members of the server communities to help enforce rules.
The private nature of some of these groups means they can easily escape detection or moderation.
In 2017, white nationalists organized the “Unite the Right” rally in Charlottesville, Virginia, on far-right Discord servers. Company executives were aware the white nationalists were using the platform but did not remove them until after the rally.
Discord said it had since beefed up its content-moderation team, and the company’s CEO, Jason Citron, said in a 2021 interview that 15% of his employees worked on trust and safety teams.
Still, the company did not discover Discord messages in a private server posted by the shooter who killed 10 people at a grocery store in Buffalo, New York, last spring. In the messages, the shooter posted racist remarks and appeared to detail how he planned to carry out the attack. After the shooting, Discord said it was investigating the postings and working with law enforcement agencies.
In its most recent transparency report, covering the last three months of 2022, Discord said it had disabled more than 150,000 accounts for policy violations that ranged from “harassment and bullying” to “exploitative and unsolicited content.” The number of accounts it had disabled was a 17% decrease from the three months before that, the company said.
This article originally appeared in The New York Times.