Senators and experts coalesce around transparency regulations for social media companies

Hours before Facebook announced that it would rebrand itself as “Meta” and focus on the so-called “metaverse,” the Senate Committee on Homeland Security and Governmental Affairs considered strategies to crack down on extremism on social media platforms.

Thursday’s hearing with a panel of outside experts came amid renewed pressure on Capitol Hill to regulate social media companies, following a series of damaging leaks about Facebook’s internal practices and its knowledge that its site directs users to extremist content.

Dave Sifry, vice president of the Anti-Defamation League Center for Technology and Society, summed up the frustrations of many panel and committee members in his opening statement.

“Self-regulation clearly doesn’t work,” Sifry said. “Without regulation and reform, they will continue to focus on generating record profits at the expense of our safety and the security of our republic.”

Experts and several of the lawmakers in attendance seemed to agree that additional regulation would be needed to increase transparency from the companies.

“It is not enough for companies to simply pledge to toughen measures against harmful content; those pledges have largely gone unfulfilled for several years now,” said committee chairman Gary Peters (D-MI) in his opening statement. “Americans deserve answers about how the platforms themselves are designed to deliver specific content to certain users, and how this could distort user views and shape their behavior, online and offline.”

Several of the panelists proposed new regulations that would require social media companies to open up their internal systems and data, especially the recommendation algorithms that push content into user feeds, to independent researchers and academics, who could conduct oversight and publish their findings while keeping confidential user data out of government hands.

“They have lost their right to secrecy,” said Nathaniel Persily, a Stanford University law professor who heads the school’s Cyber Policy Center. “We are at a critical time when we need to know exactly what is going on on these platforms.”

Committee ranking member Rob Portman (R-OH) said he and Sen. Chris Coons (D-CT) are currently working on legislation that would impose such transparency requirements “so that we can all work together on solutions to those problems that all of us have identified.”

Portman added that he sees the move as a necessary precursor to other regulatory initiatives.

“We really don’t know what we’re trying to regulate if there’s a lack of transparency as to what this design is or how these algorithms are derived,” Portman said.

Persily said he believes increased transparency will prompt platforms to change how they operate and eliminate alleged political bias in content moderation, an issue raised by several Republicans on the panel.

“It will change their behavior if they know they are being watched. It’s not just about giving a grant to outside researchers to find something for their publications,” he explained. “It’s about making sure someone is in the room to understand what’s going on.”

Several members of the expert panel argued that opaque corporate advertising practices are the main reason extremist content so often spreads unchecked on social media.

“Basic product mechanics, like virality, are designed to keep you, your friends and family engaged,” Sifry said. “The problem is that disinformation and hateful, polarizing content are very engaging. The algorithms therefore promote this content… these platforms exploit people’s propensity to interact more with inflammatory content. Ultimately, these companies neglect our safety and security because it’s good for the bottom line.”

He added that, fundamentally, Congress must “[create] systems that actually lead to a change in [companies’] incentive systems.”

Several experts also said Congress should reform or eliminate parts of Section 230 of the Communications Decency Act, the provision that shields websites from legal liability for content posted by their users. But not everyone seemed to agree on what those reforms should entail.

Mary Anne Franks, a law professor at the University of Miami and president of the Cyber Civil Rights Initiative, argued that Section 230 protections should be limited “to speech protected by the First Amendment” and denied to platforms “who demonstrate a willful disregard for illegal content” that causes “foreseeable” harm.

Without changing corporate responsibility for user content, Franks added, “there is no real incentive for them to do anything.”

Sifry proposed making liability protections conditional on companies acting in a “responsible” manner.

Lawmakers and experts pointed to a range of other potential reforms during the hearing that appeared to find less support, including pushing platforms to verify user identities; mandating “circuit breakers” to slow the spread of viral extremist content; creating an independent, nonprofit resource center to track extremism; applying antitrust legislation to social media companies; and adopting additional privacy laws governing advertising.

Peters reportedly plans to call officials from Facebook, Twitter, YouTube and TikTok to testify before the committee about extremism and their sites’ recommendation algorithms.


