You Can’t Handle the Truth—at Least on Twitter

False information is about 70 percent more likely to be retweeted than faithful reports of actual events, researchers find

False information spreads much faster and farther than the truth on Twitter—and although it is tempting to blame automated “bot” programs for this, human users are more at fault. These are two conclusions researchers at the Massachusetts Institute of Technology drew from their recent study of how news travels on the microblogging site. Their findings, published this week in Science, explain a lot about how conspiracy theories (as well as misleading and downright incorrect information) drown out hard, clear facts on social media.

False news—which the study defines as either inaccurate information presented as truth or opinion presented as fact—is on average about 70 percent more likely to be retweeted than information that faithfully reports actual events, the researchers found. They analyzed roughly 126,000 news stories tweeted by three million people between 2006 and 2017. Accurate stories rarely reached more than 1,000 people, yet the most prominent false-news items routinely reached between 1,000 and 100,000 people. False political news, in particular, spread more than three times faster than false tweets about terrorism, natural disasters, science, urban legends or financial information.

When the researchers used an algorithm to weed out tweets likely posted and circulated by bots, both false and true news continued to circulate at roughly the same rates as before. Bots did not significantly favor one type of news over the other, which suggests Twitter users themselves are largely behind the disparity between the ways fake and real news spread, says study co-author Soroush Vosoughi, a postdoctoral associate at the MIT Media Lab’s Laboratory for Social Machines (LSM).

“Human decision-making influences the spread of false news more than we thought,” says Sinan Aral, a professor of management at the MIT Sloan School of Management and a study co-author. Aral says this finding surprised him amid all the recent focus on bots—not just in the media but also during testimony before the Senate and House intelligence committees. “It’s important to understand the true impact of bots, because that will affect how we deal with the spread of false news,” he says.

Politics might be one motivation for spreading fake news. But a bigger problem may be people trying to make a buck in a social media advertising ecosystem that rewards stories for attracting the most eyeballs, says study co-author Deb Roy, LSM’s director. “We’re discovering that polarization is a great business model,” he says.

For its study, the MIT team perused six fact-checking Web sites—snopes.com, politifact.com, factcheck.org, truthorfiction.com, hoax-slayer.com and urbanlegends.about.com—for common news stories and rumors those sites had examined. “Then we looked for footprints of those stories on Twitter, including links to stories we investigated that were embedded in tweets, tweets about those stories without links, and photo memes related to those stories,” Vosoughi says.

False information is likely more widespread because it plays on salacious or controversial elements in ways the truth typically cannot, according to the researchers. “It’s easier to be novel and surprising when you’re not bound by reality,” Roy says. (Twitter is a source of funding for the LSM, but Roy says his lab is “enabled but not directed or directly influenced by Twitter”). Twitter did not immediately respond to a request for comment.

What about Facebook?

Despite their focus on Twitter, the MIT researchers say their findings likely apply to other social media as well. It is difficult to know for sure, because Twitter is one of the few platforms that shares the relevant data with the public. “There needs to be more cooperation between the platform makers and independent researchers, such as those from MIT,” says David Lazer, a professor of political science and computer and information science at Northeastern University who is familiar with—but did not participate in—the MIT Twitter study.

The ability to investigate more platforms is crucial to understanding the scope of social media’s false-news problem. Studies show more people get their news from Facebook than they do from Twitter, but it is difficult to say which site is more vulnerable to manipulation, Lazer says. On Twitter people are more likely to be exposed to a wider variety of users with different agendas, he says. “On Facebook you have people who are more likely to know one another sharing information, so it is possible the purpose of sharing is less to deceive than it would be on Twitter,” Lazer adds. Facebook declined to comment for this article.

“Facebook is clearly the 800-pound gorilla in this conversation, but they have been much less transparent than Twitter,” says Matthew Baum, a professor of global communications at Harvard University’s Kennedy School of Government. “Twitter matters, of course, and we can still learn a lot by studying dissemination patterns on that platform. But at the end of the day you’re going to have to find a way to work with Facebook.” Baum says he and Kennedy School colleagues are preparing to also study the potential role of platforms beyond social media, including WhatsApp and other direct-messaging tools.

False v. Fake

Baum and Lazer are part of a team that co-authored a separate article in Science this week about the impact of false and misleading information spread online, and potential ways to intervene against it. Unlike the MIT researchers—who avoided saying “fake news” and called the term “irredeemably polarized”—Baum, Lazer and their colleagues embraced it. There has been much debate over the phrase, “because Donald Trump and others have chosen to weaponize it,” Lazer acknowledges. “We share those concerns, but also realize any term describing this problem could be similarly weaponized.” Baum adds that, given the inherent ambiguity of the language involved—including terms such as fake news, false news, misinformation and disinformation—they preferred to use the words that so many people have come to associate with the problem.

Whatever the problem is called, solutions remain elusive, especially at a time when fact-checking sites themselves are often accused of bias. “People don’t like to be told that they are wrong, so they tend to find a way to counterargue their points even if they’ve been debunked—and then attribute bias to the fact-checking site that disagreed with them,” Baum says. Another problem is that fact checking requires resurfacing false claims in order to debunk them, and people often remember the false information without recalling the context in which they read it. For that reason, Baum adds, “we have to find the best modality for fact checking, including where and how to present it.”