Bringing the lessons of cybersecurity to the fight against misinformation | MIT News


Mary Ellen Zurko recalls the feeling of disappointment. Shortly after earning her bachelor’s degree from MIT, she held her first position as a secure computer systems evaluator for the US government. The goal was to determine whether the systems complied with the “Orange Book,” the government’s authoritative manual on cybersecurity at the time. Were the systems technically secure? Yes. In practice? Not really.

“There were no concerns about whether the security requirements placed on end users were realistic,” Zurko says. “The notion of a secure system was about technology and assumed perfect, obedient humans.”

This discomfort set Zurko on a path that would define her career. In 1996, after returning to MIT for a master’s degree in computer science, she published an influential paper introducing the term “user-centered security.” It has since grown into a field in its own right, concerned with keeping cybersecurity balanced against usability; otherwise, humans bypass security protocols and give attackers a foot in the door. The lessons of usable security now surround us, influencing the design of the warnings that appear when we visit an insecure site and the invention of the “strength” bar that fills in as we type a desired password.

Now a cybersecurity researcher at MIT Lincoln Laboratory, Zurko is still wrestling with humans’ relationship to computers. Her focus has shifted to technology for countering influence operations: attempts by foreign adversaries to deliberately spread false information (disinformation) on social media, with the intent of disrupting American ideals.

In a recent editorial published in IEEE Security & Privacy, Zurko argues that many of the “human problems” of usable security have parallels in the problems of countering misinformation. In some ways, she faces an endeavor similar to the one from early in her career: convincing peers that such human issues are cybersecurity issues, too.

“In cybersecurity, attackers use humans as a means to subvert a technical system. Disinformation campaigns are meant to impact human decision-making; they’re sort of the ultimate use of cybertechnology to subvert humans,” she says. “Both use computer technology and humans to achieve a goal. Only the goal is different.”

Getting ahead of influence operations

Research on countering online influence operations is still young. Three years ago, Lincoln Laboratory launched a study on the topic to understand its implications for national security. The field has since exploded, especially after the spread of dangerous and misleading Covid-19 claims online, perpetuated in some cases by China and Russia, as one RAND study found. There is now dedicated funding through the laboratory’s Office of Technology to develop influence operations countermeasures.

“It is important for us to strengthen our democracy and make all of our citizens resilient to the kinds of disinformation campaigns aimed at them by international adversaries, who seek to disrupt our internal processes,” Zurko says.

Like cyberattacks, influence operations often follow a multi-step path, called a kill chain, to exploit predictable weaknesses. Studying and shoring up those weaknesses can work in countering influence operations, just as it does in cyber defense. Lincoln Laboratory’s efforts focus on developing technology to support “source tending,” or strengthening the early stages of the kill chain, as adversaries begin to find openings for a divisive or misleading narrative and create accounts to amplify it. Source tending helps tip off US information operations personnel to a brewing disinformation campaign.

Several approaches at the laboratory target source tending. One is to leverage machine learning to study digital personas, with the goal of identifying when the same person is hiding behind multiple malicious accounts. Another area focuses on building computer models that can identify deepfakes, or AI-generated videos and photos created to mislead viewers. Researchers are also developing tools to automatically identify which accounts wield the most influence over a narrative. First, the tools identify a narrative (in one paper, researchers studied the disinformation campaign against French presidential candidate Emmanuel Macron) and collect data related to that narrative, such as keywords, retweets, and likes. Then they use an analytical technique called causal network analysis to define and rank the influence of specific accounts: which accounts regularly generate posts that go viral?
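The general shape of that ranking step can be sketched in a few lines. To be clear, the laboratory's actual technique is causal network analysis; this toy example substitutes a simpler PageRank-style centrality over a hypothetical retweet graph (all account names and edges are invented) just to show the idea: accounts retweeted by other well-retweeted accounts rise to the top.

```python
from collections import defaultdict

# Hypothetical retweet edges, (retweeter, original_author). In a real
# pipeline these would be collected per-narrative, e.g. by filtering
# posts on campaign keywords before building the graph.
retweets = [
    ("a", "seed1"), ("b", "seed1"), ("c", "seed1"),
    ("c", "seed2"), ("d", "seed2"),
    ("seed2", "seed1"),  # one amplifier boosting the main account
]

def influence_scores(edges, damping=0.85, iters=50):
    """Rank accounts by a PageRank-style score over a retweet graph.

    An edge (u, v) means u retweeted v, so u passes credit to v:
    accounts retweeted by well-retweeted accounts score highest.
    """
    nodes = {n for edge in edges for n in edge}
    out = defaultdict(list)
    for u, v in edges:
        out[u].append(v)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        nxt = {n: (1 - damping) / len(nodes) for n in nodes}
        for u in nodes:
            targets = out[u] or list(nodes)  # dangling nodes spread evenly
            share = damping * rank[u] / len(targets)
            for v in targets:
                nxt[v] += share
        rank = nxt
    return sorted(rank.items(), key=lambda kv: -kv[1])

for account, score in influence_scores(retweets):
    print(f"{account}: {score:.3f}")
```

Run on the toy graph above, the heavily retweeted "seed1" account ranks first, followed by its amplifier "seed2"; a real system would layer causal analysis and far richer signals (timing, likes, account metadata) on top of a graph like this.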

These technologies feed into the work Zurko is leading to develop a counter-influence-operations testbed. The goal is to create a safe space for simulating social media environments and testing counter-technologies. Most importantly, the testbed will put human operators in the loop, to see how well new technologies help them do their jobs.

“Our military’s information operations personnel lack a way to measure impact. By setting up a testbed, we can run several different technologies, in a repeatable way, to develop metrics that let us see whether those technologies actually make operators more effective at identifying a disinformation campaign and the actors behind it,” Zurko says.

This vision is still ambitious as the team builds out the testbed environment. Simulating social media users, including what Zurko calls the “grey cell” of unwitting participants in online influence, is one of the biggest challenges in mimicking real-world conditions. Rebuilding social media platforms is also a challenge; each platform has its own policies for dealing with misinformation, and proprietary algorithms that shape how far misinformation spreads. For instance, The Washington Post reported that Facebook’s algorithm gave “extra value” to news that elicited angry reactions, making it five times more likely to appear in a user’s News Feed, and such content is disproportionately likely to include misinformation. These often-hidden dynamics are important to replicate in a testbed, both to study the spread of fake news and to understand the impact of interventions.
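To illustrate why such dynamics matter for a testbed, here is a minimal, hypothetical sketch of engagement-weighted feed ranking. The 5x weight on angry reactions follows the figure reported by The Washington Post; the post structure, the other weights, and the scoring function are invented for illustration only.

```python
# Reaction weights for a simulated feed-ranking rule. Only the 5x "angry"
# multiplier comes from the reporting cited above; the rest is assumed.
REACTION_WEIGHTS = {"like": 1, "angry": 5}

def feed_score(post):
    """Score a post by weighted engagement; higher scores surface first."""
    return sum(REACTION_WEIGHTS.get(reaction, 1) * count
               for reaction, count in post["reactions"].items())

posts = [
    {"id": "calm", "reactions": {"like": 100}},
    {"id": "rage", "reactions": {"like": 10, "angry": 30}},
]

feed = sorted(posts, key=feed_score, reverse=True)
print([p["id"] for p in feed])  # the anger-heavy post outranks the calm one
```

Even this crude rule reproduces the dynamic at issue: a post with far less total engagement outranks a popular one because its engagement skews angry, which is exactly the kind of hidden amplification a testbed would need to model when measuring interventions.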

Taking a comprehensive systems approach

In addition to building a testbed where new ideas can be combined, Zurko is advocating for a unified space that disinformation researchers can call their own. Such a space would let researchers in sociology, psychology, policy, and law come together and share cross-cutting aspects of their work alongside cybersecurity experts. The best defenses against misinformation will require this diversity of expertise, Zurko says, and “a comprehensive systems approach to both human-centered and technical defenses.”

While that space doesn’t exist yet, it’s likely on the horizon as the field continues to grow. Influence operations research is gaining momentum in the world of cybersecurity. “Just recently, top conferences have started including disinformation research in their calls for papers, which is a real indicator of where things are headed,” Zurko says. “But some people still cling to the old-school idea that messy humans have nothing to do with cybersecurity.”

Despite these attitudes, Zurko still trusts her early observation as a researcher: what cybertechnology can do effectively is moderated by how people use it. She wants to keep designing technology, and approaching problem-solving, in a way that puts the human at the center of the frame. “From the start, what I loved about cybersecurity was that it was part mathematical rigor and part sitting around the ‘campfire,’ telling stories and learning from each other,” Zurko says. Disinformation derives its power from humans’ ability to influence one another; that ability may also be the most powerful defense we have.

