Specific language about the QAnon conspiracy theory has all but disappeared from mainstream public social media platforms, new research concludes.
Driving the news: Researchers from the Atlantic Council’s Digital Forensic Research Lab found that the volume of QAnon content available online plummeted following major moderation and policy moves from Google, Facebook and Twitter.
Details: Researchers analyzed more than 45 million mentions of QAnon catchphrases and related terms from April 1, 2020 to April 1, 2021 on both mainstream platforms and alternative ones such as Gab and Parler.
- Terms included popular QAnon phrases such as “the storm,” “the great awakening,” “save the children” and “WWG1WGA” (“where we go one, we go all”).
- Those terms started being used more frequently online in March 2020, peaked in June 2020 around racial justice protests, and spiked again before the January 6 Capitol riot.
Other factors contributed to the reduction in QAnon content.
- “Q,” the shadowy figure whose posts kicked off the conspiracy theory, went silent.
- Some participants in the Q world masked their phrases to evade getting moderated.
- Trump’s election loss dispirited many Q believers.
- “Of all factors… reductions correlated most strongly with social media actions taken by Facebook, Twitter, and Google to limit or remove QAnon content,” the researchers write. “Actions taken by Twitter after the January 6 attack on the Capitol correlates strongly with a dampening of what remained of traditional QAnon chatter at the time.”
The intrigue: While the volume of QAnon content on the right-wing-focused Parler and Gab networks did increase in late 2020 and again around the January Capitol attack, the researchers concluded that these sites did not absorb the QAnon conversations that mainstream platforms shut down.
- That was at least in part because the alternative sites faced their own cut-offs from back-end service providers like Amazon Web Services and Twilio.
Yes, but: The research did not assess the volume of QAnon discussion in private groups or messaging platforms.
- Public posts are where fringe groups gain new adherents, but private discussions are where their most dedicated followers end up.
What they’re saying: “Moderation actions after the Capitol attack were particularly effective in stomping down what remained of QAnon chatter online,” said Jared Holt, resident fellow at the Atlantic Council. “The data shows the companies didn’t act… until it was exploding off the charts.”
- “The research is very significant,” said Paul Barrett, deputy director of the NYU Stern Center for Business and Human Rights. “Concerted content moderation works… When they put their minds to it, the mainstream platforms can have a very big effect on marginalizing or eliminating toxic content.”
The bottom line: Aggressive content moderation aimed at limiting extremist content can work, but “decisions to enforce rules and address threats of extremism are often prompted by tragedy instead of proactive thinking,” said Holt.
Go deeper: Read the report
Source: https://www.axios.com/qanon-language-social-media-f47fe79e-d779-4851-a77e-7e374864c325.html
By Ashley Gold