Study shows users banned from social platforms go elsewhere with increased toxicity

Binghamton University

When people behave badly on social media, one permanent response is to ban them from posting again. Take away the digital megaphone, the theory goes, and hurtful or dishonest messages from these troublemakers will no longer be a problem there.

What happens after that, however? Where do those who have been “deplatformed” go, and how does it affect their future behavior?

Assistant Professor Jeremy Blackburn, Department of Computer Science, Thomas J. Watson College of Engineering and Applied Science. Image credit: Jonathan Cohen.

An international team of researchers – including Assistant Professor Jeremy Blackburn and PhD candidate Esraa Aldreabi of the Computer Science Department at the Thomas J. Watson College of Engineering and Applied Science – is exploring these questions in a new study titled “Understanding the Effect of Deplatforming on Social Networks.”

The research, carried out by iDRAMA Lab collaborators at Binghamton University, Boston University, University College London and the Max Planck Institute for Informatics in Germany, was presented in June at the ACM Web Science 2021 conference.

Researchers developed a method to identify accounts belonging to the same person across different platforms, and found that being banned on Reddit or Twitter led these users to join alternative platforms such as Gab or Parler, where content moderation is more lax.

Another conclusion: while users who move to these smaller platforms have a potentially reduced audience, they exhibit a higher level of activity and toxicity than before.

“You can’t just ban these people and say, ‘Hey, it worked.’ They don’t go away,” Blackburn said. “They go elsewhere. It has a positive effect on the original platform, but there is also a certain degree of amplification or aggravation of this type of behavior elsewhere.”

The deplatforming study collected 29 million posts from Gab, which launched in 2016 and currently has around 4 million users. Gab is known for its far-right base of neo-Nazis, white nationalists, antisemites and QAnon conspiracy theorists.

Using a combination of machine learning and human labeling, researchers cross-referenced profile names and content with users who had been active on Twitter and Reddit but were suspended. Many of those who move across platforms reuse the same profile name or user information on the new platform to maintain continuity and recognition with their followers.

“Just because two people have the same name or the same username is no guarantee,” Blackburn said. “There was a pretty big process of creating a ‘ground truth’ dataset, where we could say, ‘These must be the same people, for this reason and that reason.’ That lets us scale things up by throwing them into a machine learning classifier [program] that will learn the features to look out for.”

The process was not unlike the way researchers determine the authorship of unattributed or pseudonymous works, by examining style, syntax and subject matter, he added.
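The paper’s actual pipeline is more involved, but a minimal sketch of the idea described above – compare profile fields across platforms, hand-label a “ground truth” set of matching pairs, then train a classifier on those features – could look like the following Python. All field names, features and example accounts here are invented for illustration.

```python
# Hypothetical sketch of cross-platform account matching; feature choices
# and data are illustrative, not taken from the paper.
from difflib import SequenceMatcher
from sklearn.ensemble import RandomForestClassifier

def similarity(a: str, b: str) -> float:
    """How alike two strings are, as a ratio in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def features(twitter_acct: dict, gab_acct: dict) -> list[float]:
    """Turn a candidate account pair into a numeric feature vector."""
    return [
        similarity(twitter_acct["username"], gab_acct["username"]),
        similarity(twitter_acct["display_name"], gab_acct["display_name"]),
        similarity(twitter_acct["bio"], gab_acct["bio"]),
    ]

# Hand-labeled "ground truth" pairs: 1 = same person, 0 = different people.
train_pairs = [
    ({"username": "freespeech_joe", "display_name": "Joe", "bio": "Patriot."},
     {"username": "freespeech_joe", "display_name": "Joe", "bio": "Patriot, now on Gab."}, 1),
    ({"username": "freespeech_joe", "display_name": "Joe", "bio": "Patriot."},
     {"username": "jane_doe", "display_name": "Jane", "bio": "Gardener."}, 0),
]

X = [features(t, g) for t, g, _ in train_pairs]
y = [label for _, _, label in train_pairs]

# The classifier learns which feature patterns indicate a matching pair.
clf = RandomForestClassifier(random_state=0).fit(X, y)

# Score a new candidate pair; a high probability suggests the same person.
candidate = features(
    {"username": "freespeech_joe", "display_name": "Joe", "bio": "Patriot."},
    {"username": "freespeechjoe", "display_name": "Joe", "bio": "Patriot. Banned by Twitter."},
)
print(clf.predict_proba([candidate])[0][1])
```

In the study itself, the human labeling Blackburn describes supplied the ground truth, and the classifier handled the scale.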

In the dataset analyzed for this study, approximately 59% of Twitter users (1,152 of 1,961) created Gab accounts after their last activity on Twitter, likely after their accounts were suspended. For Reddit, about 76% (3,958 of 5,216) of suspended users created Gab accounts after their last post on Reddit.

A comparison of content from the same users on Twitter and Reddit versus Gab shows that users tend to become more toxic when suspended from one platform and forced to move to another. They also become more active, posting more frequently.

At the same time, the audience for Gab users’ content shrinks with the smaller size of the platform, compared to the millions of users on Twitter and Reddit. That might be seen as a good thing, but Blackburn warned that much of the planning for the Jan. 6 attack on the U.S. Capitol took place on Parler, a Gab-like platform with a smaller user base that skews right and far right.

“Reducing the reach is probably a good thing, but reach can be easily misinterpreted. Just because someone has 100,000 followers doesn’t mean they’re all real-world followers,” he said.

“The hardcore group, maybe the group that concerns us the most, is the one that will probably stay with someone if they move somewhere else online. If by reducing that reach you increase the intensity of what the people who stay are exposed to, it becomes a question of quality versus quantity. Is it worse to have more people see this stuff? Or is it worse to have more extreme stuff produced for fewer people?”

A separate study, “A Large Open Dataset from the Parler Social Network,” also included Blackburn among researchers from New York University, the University of Illinois, University College London, Boston University and the Max Planck Institute.

Presented last month at the AAAI Conference on Web and Social Media, it analyzed 183 million Parler posts made by 4 million users between August 2018 and January 2021, along with metadata from 13.25 million user profiles. The data confirms that Parler users – the platform was briefly shut down and removed from the Apple and Google app stores in response to the Capitol riot – overwhelmingly supported President Donald Trump and his “Make America Great Again” agenda.

“Regardless of what Parler might have said, publicly or otherwise, it was very clearly white, right-wing, Christian Trump supporters,” Blackburn said. “Again, unsurprisingly, it got its biggest boost around the 2020 election – up to a million users joined. Then around the Capitol attack, there was another big increase in users. What we can see is that it was very clearly used as a tool for organizing the insurrection.”

So if banning users is not the right answer, what is? Reddit administrators, for example, have a “shadow ban” capability that lets problem users think they are still posting to the site, when in fact no one else can see their posts. During the 2020 election and the COVID-19 pandemic, Twitter added content moderation labels to tweets that deliberately spread misinformation.
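As a toy illustration of the mechanics (this is not Reddit’s actual implementation; the names and moderation list below are invented), a feed filter implementing a shadow ban might look like this in Python:

```python
# Toy shadow-ban filter: the banned author still sees their own posts,
# but those posts are hidden from everyone else.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str

SHADOW_BANNED: set[str] = {"troll_account"}  # hypothetical moderation list

def visible_posts(posts: list[Post], viewer: str) -> list[Post]:
    """Return the posts this viewer is allowed to see."""
    return [
        p for p in posts
        if p.author not in SHADOW_BANNED or p.author == viewer
    ]

feed = [Post("alice", "hello"), Post("troll_account", "spam")]
print([p.text for p in visible_posts(feed, viewer="bob")])            # ['hello']
print([p.text for p in visible_posts(feed, viewer="troll_account")])  # ['hello', 'spam']
```

The point of the design is that the banned user gets no signal to evade: from their side, nothing appears to have changed.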

Blackburn is unsure which of the moderation tools available to social media platforms will work best, but he thinks there need to be more “socio-technical solutions to socio-technical problems” rather than just an outright ban.

“Society is now saying pretty firmly that we can’t ignore this stuff – we can’t use the easy fixes anymore,” he said. “We need to come up with more creative ideas, not to get rid of people, but hopefully to push them in a positive direction, or at least to make sure everyone knows who that person is. Somewhere between unfettered access and banning everyone is probably the right solution.”
