The Economics of Misinformation
Why does misinformation keep spreading even when people know the harm it causes?
It’s election season again, and like clockwork, misinformation is flooding our feeds. Despite growing awareness of the dangers of fake news, false claims by political candidates and their supporters keep spreading. This barrage of misinformation isn’t just annoying—it’s damaging. It strains relationships, widens social divides, and erodes trust in institutions. And yet, here we are, seeing the same patterns repeat. Why?
While many conversations focus on the harm misinformation causes, few step back to ask: why does misinformation continue to spread, even when we know better? The answer lies in economics. Misinformation isn't just a story about bad actors or gullible users; it's a market. There's supply, there's demand, and powerful incentives encourage its production. And just like in any market, those incentives drive behavior in surprising ways. If we want to tackle misinformation, we need to understand why people keep engaging with it, and why our efforts to stop it so often fail.
The Harm of Misinformation
Before we turn to the market for misinformation, let's take stock of the damage it causes on both personal and societal levels. At the individual level, misinformation can wreck relationships, reinforce stereotypes, and lead to bad decision-making. False claims about vaccines may convince people to skip life-saving medical treatments or engage in risky behaviors. In politics, misinformation can shape voter opinions based on lies, producing elections and policies that reflect a manipulated public rather than an informed one.
On a societal level, the consequences run even deeper. Misinformation breaks down trust—not just in media but in the very institutions that hold our democracy together. When false information about candidates or policies circulates, it undermines confidence in elections, fuels polarization, and pushes people into ideological echo chambers. This eroding trust creates a vicious cycle: as people become more skeptical of legitimate sources, they turn to even more questionable ones, making them more vulnerable to the next wave of misinformation.
The Market for Misinformation
Misinformation operates much like a commodity in any other market, with both supply and demand. On the supply side, you have producers: people or groups spreading false content for political gain, profit, or simply attention. Their incentives are clear, because misinformation spreads faster and wider than the truth. Why? It's cheaper, easier, and often more emotionally charged than verified information. Sensationalism wins clicks, likes, and shares, all of which can be monetized, either directly through ad revenue or indirectly through increased social influence. Traditional media outlets pay for fact-checking; misinformation peddlers? Not so much. The result is a feedback loop: the more misinformation gets shared, the more revenue and influence it generates, and the greater the incentive to produce even more of it.
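To put the supply side in symbols (a rough sketch in my own notation, not a formal model from this post or the literature): a producer publishes a story when its expected payoff is positive,

$$\pi = r \cdot e - c,$$

where $e$ is expected engagement, $r$ is the revenue or influence earned per unit of engagement, and $c$ is the cost of producing the content. Fact-checking raises $c$ for honest outlets, while fabrication keeps $c$ near zero and emotional charge pushes $e$ up. Over a wide range of plausible values, the margin on false content dominates.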
But let's not forget the demand side. People consume and share information because it scratches a psychological itch. Sharing true information brings social validation, but sharing misinformation can often do the same, without the annoying burden of checking facts. And therein lies the problem: verifying information is costly. It takes time, effort, and, in some cases, the uncomfortable step of confronting beliefs that may be wrong. Social media's fast pace doesn't help; when so much content is flying by, fact-checking takes a backseat to convenience. The result is an imbalance: the cost of verification feels greater than the benefit of being sure, leaving the door wide open for false content to thrive.
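The same notation can sketch the demand side (again, illustrative rather than an established result): a user verifies before sharing only when the cost of checking is smaller than the expected embarrassment of getting caught,

$$c_v < p \cdot \ell,$$

where $c_v$ is the time and effort of verification, $p$ is the perceived chance that the post is false and someone calls it out, and $\ell$ is the reputational loss if that happens. On a fast-moving feed, $p$ feels tiny and $c_v$ feels large, so the inequality rarely holds and unverified sharing becomes the default.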
Even when social media platforms introduce fact-checkers or flag misinformation, these measures often fall flat. Why? The economic incentives to share haven't changed. Lowering the cost of verification (say, by making fact-checked articles easier to access) should, in theory, reduce misinformation. But in practice, it can backfire: users who were previously too cautious to share may now feel protected by the fact-checking tools, even though those tools review only a fraction of what circulates. The net result? The pass-through rate of misinformation stays stubbornly high, and false content spreads virtually unchanged.
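A worked example with made-up numbers shows how the backfire can operate in that sketch. Before the tools arrive: $c_v = 5$ and $p \cdot \ell = 4$, so a cautious user neither verifies nor shares. After the tools: checking is easier, say $c_v = 2$, but the user also assumes the platform now catches falsehoods for them, so the perceived penalty drops to $p \cdot \ell = 1$. Verification is still not worth it ($2 > 1$), yet sharing now feels safe, and the formerly cautious user starts passing content along unverified.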
The Virality Problem
Misinformation also benefits from network externalities, where the value of a product or service increases as more people use it. On social media, this means the more a piece of content—whether true or false—is shared, the more valuable it becomes in terms of visibility and engagement. Each like, retweet, or share acts as a signal to others that this content is worth their time, regardless of its accuracy. And that’s where the amplification kicks in: as more people interact with misinformation, it snowballs, spiraling beyond the reach of fact-checkers and interventions.
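For readers who like to tinker, here is a minimal simulation of that snowball effect, modeled as a branching process. The share probabilities and follower counts are invented purely to illustrate the threshold, not estimated from any platform:

```python
import random

def cascade_reach(p_share, followers, max_steps=20, cap=20_000):
    """One sharing cascade: a post starts in front of `followers`
    viewers; each viewer reshares with probability p_share, putting
    it in front of `followers` more. Returns total viewers reached."""
    current = total = followers
    for _ in range(max_steps):
        new = sum(followers for _ in range(current)
                  if random.random() < p_share)
        total += new
        if new == 0 or total > cap:
            break
        current = new
    return total

random.seed(42)
# Illustrative numbers only. R = p_share * followers is the expected
# reshares per viewer: below 1 cascades fizzle, above 1 many explode.
for p_share in (0.01, 0.04):                      # R = 0.5 vs. R = 2.0
    sizes = sorted(cascade_reach(p_share, followers=50)
                   for _ in range(200))
    print(f"R={p_share * 50:.1f}: median reach {sizes[100]:,}, "
          f"max reach {sizes[-1]:,}")
```

The quantity R is the whole story: below one, cascades die close to home; above one, a large fraction of them explode far beyond the reach of any fact-checker.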
Then platform algorithms come into play. Social media platforms are designed to maximize engagement, so they prioritize content that generates clicks, comments, and shares. The more interaction a post gets, the more likely it is to show up in someone else's feed, creating a self-reinforcing loop. This is why you'll often see sensational or misleading content popping up again and again: it's being rewarded by the platform's engagement-based algorithms.
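And here is an equally stripped-down sketch of that feedback loop. This is not any real platform's ranking system, just the mechanism described above, with invented engagement rates:

```python
# Two posts compete in an engagement-ranked feed. The engagement
# rates are invented: the sensational post converts impressions
# into clicks/shares twice as often as the measured one.
engagement_rate = {"measured_report": 0.03, "sensational_claim": 0.06}
engagement = {name: 0.0 for name in engagement_rate}
impressions = {name: 1_000.0 for name in engagement_rate}  # equal start

for _ in range(8):
    for name in engagement:
        engagement[name] += impressions[name] * engagement_rate[name]
    total = sum(engagement.values())
    # The ranker hands out the next 10,000 impressions in
    # proportion to each post's accumulated engagement.
    for name in engagement:
        impressions[name] = 10_000 * engagement[name] / total

for name in engagement:
    print(f"{name}: engagement={engagement[name]:,.0f}, "
          f"next impressions={impressions[name]:,.0f}")
```

A modest edge in engagement rate (6% versus 3%) compounds into a dominant share of impressions within a few rounds, which is the self-reinforcing loop at work.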
Even when platforms flag or remove misinformation, it's often too late. By the time corrective measures are implemented, the content has already been widely shared; on social media, misinformation has a head start. The initial momentum, driven by those network externalities, ensures that false content reaches an audience before any flagging or fact-checking can slow it down. Essentially, the very structure of social media creates fertile ground for misinformation to flourish.
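Extending the cascade sketch from above with a delayed flag (still purely hypothetical numbers) makes the head start visible: the flag slashes the reshare probability, but most of the reach has already accrued by the time it lands.

```python
import random

def cascade_with_flag(p_share, followers, flag_step, p_flagged,
                      max_steps=15, cap=20_000):
    """Same cascade as before, except moderation lands at flag_step
    and cuts the reshare probability to p_flagged from then on.
    Returns (reach before the flag, total reach)."""
    current = total = before = followers
    for step in range(max_steps):
        p = p_share if step < flag_step else p_flagged
        new = sum(followers for _ in range(current)
                  if random.random() < p)
        total += new
        if step < flag_step:
            before = total
        if new == 0 or total > cap:
            break
        current = new
    return before, total

random.seed(7)
# Hypothetical numbers: R = 2.0 before the flag, R = 0.2 after it.
runs = [cascade_with_flag(0.04, 50, flag_step=5, p_flagged=0.004)
        for _ in range(200)]
took_off = [(b, t) for b, t in runs if t > 500]  # cascades that spread
pre_flag = sum(b for b, _ in took_off) / sum(t for _, t in took_off)
print(f"Share of reach accrued before the flag: {pre_flag:.0%}")
```

With these parameters, the bulk of a successful cascade's reach arrives before the flag does; the intervention mostly trims the tail.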
Final Thoughts
Ultimately, misinformation persists because it’s profitable—both financially and socially—for those who produce and share it. Combating misinformation requires more than just fact-checking or removing false content—it requires addressing the underlying incentives that drive people to share it in the first place. One approach is to increase the cost of sharing misinformation by introducing friction into the process. For instance, platforms could prompt users to verify a post’s accuracy before sharing it, making impulsive spreading less likely.
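In the cascade terms used earlier, friction works by shaving the effective reshare probability. As an illustrative condition (my notation, not a platform metric): if a prompt deters a fraction $f$ of impulsive shares, cascades stop snowballing once

$$q(1 - f) \cdot k < 1,$$

where $q$ is the baseline reshare probability and $k$ is the average audience per share. Friction doesn't have to stop every share; it only has to push the expected number of reshares per viewer below one, at which point false content fizzles instead of going viral.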
Encouraging social responsibility is also important. If sharing misinformation becomes socially stigmatized, much like spreading rumors or plagiarizing, users will think twice before hitting “share.” Platforms can play a role here by promoting positive examples of responsible sharing and creating incentives, such as badges, for users who consistently spread accurate information.
Diversifying sources of information can also help limit the impact of misinformation. Relying less on social media as a primary news outlet and seeking out diverse perspectives can reduce the risk of falling into echo chambers, where misinformation thrives unchecked. Addressing the economic incentives behind misinformation is essential if we want to limit its spread. Until those incentives shift, fact-checking and moderation will only have limited success.
Thank you for reading Monday Morning Economist! This free weekly newsletter explores the economics behind pop culture and current events.
Three-quarters of Americans (75%) are very confident (23%) or somewhat confident (52%) in their ability to tell real news from fake news [YouGov]
53% of survey respondents said misinformation spread by artificial intelligence will impact who wins the 2024 election [Axios]
More than half of adults say misinformation increases political engagement, and about 7 in 10 say misinformation increases extreme political views and hate crimes such as violence motivated by race, gender, or religion [Associated Press]
In a study of how false news spreads on social media, researchers found that the top 1% of false news found its way to between 1,000 and 100,000 people, whereas the truth rarely reached more than 1,000 people [Science]
Great article! It’s definitely a tough issue to solve… how do we create markets for reliable information and fact-checking? The first figure suggests people know there’s lots of misinformation out there and they don’t really trust the media. The “low trust” and “low trustworthiness” equilibrium is hard to break out of. Consumers of media have short attention spans and only trust people in their tribe, and producers adapt to that. Consumers are shaped by the resulting media and it becomes a vicious cycle.
1. It’s basically “the market for lemons”, right?
2. Minor point, but I was confused by: “This creates an upward-sloping supply curve—the more misinformation gets shared, the more incentive there is to produce more of it.” This is wrong? Upward sloping supply curve means the marginal cost increases as the quantity increases. More misinformation leading to a lower marginal cost of producing more would be downward sloping supply, right?
The main point is just that the cost of producing and consuming misinformation is lower than the cost of producing and consuming reliable information. Our success depends on our individual and collective technologies for separating fact from fiction and our demand for these technologies. People need to be more virtuous and we need to develop credible signals of virtue in information markets. Not so easy!
Fact checking itself, for what I believe of most checkers, has simply become bias confirmation. It's easy to spot by what those checkers don't bother to check and how they will twist themselves into pretzels trying to overcome an inconvenient fact that doesn't support their preferred narrative. So, I'm not really confident the misinformation issue will ever be solved.