KIEP Opinions
Small Differences, Large Divisions: How Political Gaps Widen Over Time
- Author: Hayun Song
- Series: No. 319
- Date: 2025-07-10

In November 2020, 81 million Americans voted for Joe Biden while 74 million voted for Donald Trump. Yet ask voters from each side what happened that night, and you will hear two irreconcilable stories. For Biden voters, democracy prevailed despite unprecedented attempts to undermine it. For many Trump voters, democracy was subverted through systematic fraud. Five years later, these parallel narratives have not converged—they have hardened into opposing religions.
According to the Pew Research Center, rural Americans have shifted from evenly split in 2008 to favoring Republicans by 25 percentage points today, while urban voters have moved by a similar margin in the opposite direction. But these numbers only hint at a deeper fracture. We are not just living in different places; we are living in different versions of reality. The question is no longer whether one is liberal or conservative. It is which reality one inhabits.
The path to this point runs through three mechanisms. The usual suspects—algorithmic feeds, cable news bubbles, geographic clustering—are real but insufficient: they describe the landscape without explaining the earthquake that created it. What needs explaining is the process itself: how does ordinary political disagreement mutate into mutual incomprehension?
Technology, psychology, and politics have created a feedback loop that keeps getting stronger. Small differences in opinion, when filtered through modern technology and reinforced by repetition, have widened into today's political chasms. These mechanics help explain why polarization accelerates rather than stabilizes over time.
The Mechanics of Division
The phenomenon is not new. Back in 2002, Rainer Hegselmann, a philosopher at the University of Bayreuth, and Ulrich Krause, a mathematician at the University of Bremen, were already mapping the mechanics of how opinions drift apart. Their insight was deceptively simple: people only listen to those whose views fall within their “confidence bound”—a range of acceptable disagreement. Beyond that threshold, communication simply ceases. Once erected, this wall becomes impermeable.
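The bounded-confidence rule is simple enough to sketch in a few lines. The toy version below is our illustration, not the authors' code; it assumes opinions on a 0-to-1 scale and a symmetric confidence bound `epsilon`:

```python
def hk_update(opinions, epsilon):
    """One Hegselmann-Krause step: each agent moves to the average of
    all opinions within its confidence bound epsilon; everything
    further away is simply ignored."""
    updated = []
    for x in opinions:
        heard = [y for y in opinions if abs(x - y) <= epsilon]
        updated.append(sum(heard) / len(heard))  # never empty: x hears itself
    return updated
```

Iterating this rule splits the population into clusters; once two clusters sit further apart than the bound, they never influence each other again—the impermeable wall in miniature.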
What Hegselmann and Krause couldn’t have foreseen was how technology would weaponize their discovery. Twenty years later, artificial intelligence has transformed their theoretical boundary into an algorithmic reality. New research by Daron Acemoglu and his MIT colleagues documents this transformation in painstaking detail. Their findings are sobering: social media platforms do not just observe these confidence boundaries—they actively construct and reinforce them.
The economics are brutally straightforward. When content has questionable reliability (and what political content does not these days?), platforms face a dilemma. Show it to everyone, and those who disagree will flag it as misinformation, killing its spread and the ad revenue it generates. The solution? Create what researchers call “filter bubbles”—closed loops where content circulates only among the already-converted. The math is compelling: restricted reach to believers generates more engagement than broad reach to skeptics. With Meta earning $131.9 billion in advertising revenue in 2023, every additional point of engagement translates to hundreds of millions in profit.
This might have remained an academic question if not for what Robert Axelrod discovered back in 1997. Working at the intersection of political science and complexity theory, Axelrod demonstrated that cultural traits—including political beliefs—spread through populations via local interactions. His model showed how local convergence can produce global polarization: as people become more similar to their neighbors through repeated interaction, distinct cultural regions emerge and persist, and small initial differences compound into stable, separated groups that no longer interact. What starts as minor disagreement ends as an unbridgeable chasm.
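The core of that dynamic fits in a short sketch. The code below is a toy rendering of the idea, not Axelrod's original model: agents on a square grid each carry a list of cultural features, and the probability of interaction equals their current similarity, so like attracts like:

```python
import random

def axelrod_step(grid, rng):
    """One Axelrod-style interaction on a square grid: a random agent may
    copy one cultural feature from a random neighbor, with probability
    equal to their current similarity (homophily in action)."""
    n = len(grid)
    i, j = rng.randrange(n), rng.randrange(n)
    neighbors = [(i + di, j + dj) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if 0 <= i + di < n and 0 <= j + dj < n]
    ni, nj = rng.choice(neighbors)
    a, b = grid[i][j], grid[ni][nj]
    similarity = sum(x == y for x, y in zip(a, b)) / len(a)
    if 0 < similarity < 1 and rng.random() < similarity:
        k = rng.choice([f for f in range(len(a)) if a[f] != b[f]])
        a[k] = b[k]  # adopt the neighbor's trait on one differing feature
```

Note the two frozen endpoints: identical neighbors have nothing left to exchange, and completely dissimilar neighbors never interact at all. Run long enough, the grid settles into internally uniform, mutually alien regions.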
The political response has been swift and cynical. Campaign strategists have abandoned the center—and for good reason. According to a Pew Research analysis, registered voters are now split almost down the middle: 49% Democratic, 48% Republican. More telling still, the moderate center has been hollowing out. Voters without a college degree, once evenly divided, now tilt Republican by six percentage points. White voters without degrees have shifted even more dramatically: 63% now align with the GOP, up from rough parity just 15 years ago. In this landscape of hardened partisan loyalties, why persuade when you can inflame? Energizing the base is cheaper and more precise than courting the middle.
The synthesis of these forces creates a perfect storm. Hegselmann and Krause showed us that people naturally limit their interactions—once someone’s views fall outside our “confidence zone,” we simply stop listening. Acemoglu revealed how AI exploits this tendency for profit, creating filter bubbles that keep us engaged but divided. Axelrod warned us how local agreement can paradoxically lead to global fragmentation. And political operatives have learned to ride this wave rather than fight it. Together, they paint a picture of a society not drifting apart by accident, but being systematically pulled apart by the very technologies we have embraced to connect us.
The timeline matters. These mechanisms have been operating for years now, compounding with each algorithm update, each quarterly earnings call demanding higher engagement. The moderate center—that crucial buffer that keeps democracies functional—has been shrinking steadily. While precise measurements vary, researchers across institutions agree on the trend: people are more politically segregated than at any point since modern polling began. Not through some grand conspiracy, but through the mundane logic of profit maximization and human psychology. By the time we notice the damage, the structural changes are already locked in.
Watching Democracy Unravel
What happens when these mechanisms operate together? To explore this question, we ran a thought experiment combining these dynamics—Hegselmann-Krause’s confidence bounds, where people ignore those too different from themselves; Acemoglu’s filter bubble economics, where algorithms amplify these boundaries for profit; and Axelrod’s cultural convergence patterns, where local agreement creates global divisions.
We began with 10,000 virtual citizens, their political opinions distributed normally around the center—a reasonable approximation of a healthy democracy. Each day, these citizens encountered information filtered through algorithmic lenses. Those on the left saw predominantly left-leaning content; those on the right, the opposite. The filter strengthened over time, mimicking how real platforms learn and optimize. Citizens updated their views based on what they saw, but only if the content fell within their confidence bounds. Otherwise, they simply ignored it.
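The setup can be condensed into a short sketch. The code below is our illustrative toy model, not the actual simulation: the filter schedule, the same-side content pools, and the clipping to a 0-to-1 scale are all assumptions filled in for concreteness.

```python
import random

def simulate(n=10_000, days=30, bound=0.3, drift=0.01, seed=0):
    """Toy polarization dynamic: bounded-confidence updates plus a
    same-side content filter that strengthens over time."""
    rng = random.Random(seed)
    # Day 0: opinions roughly normal around the centre, clipped to [0, 1]
    ops = [min(1.0, max(0.0, rng.gauss(0.5, 0.15))) for _ in range(n)]
    for day in range(days):
        filt = 0.5 + 0.5 * day / days  # the filter strengthens as the platform "learns"
        left = [y for y in ops if y < 0.5] or ops
        right = [y for y in ops if y >= 0.5] or ops
        updated = []
        for x in ops:
            if rng.random() < filt:
                content = rng.choice(left if x < 0.5 else right)  # same-side feed
            else:
                content = rng.choice(ops)  # occasional cross-cutting content
            if abs(content - x) <= bound:   # outside the confidence bound: ignored
                x += drift * (content - x)  # 1% daily drift toward what was seen
            updated.append(min(1.0, max(0.0, x)))
        ops = updated
    return ops
```

Even this stripped-down version captures the key structural feature: the feed an agent sees depends on which side of the center it already occupies, so the filter and the confidence bound reinforce each other.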
Using parameters from the research literature—a 0.3 confidence threshold, 1% daily drift—we watched what emerged:

The progression is striking. Day 0 shows a classic bell curve: most people clustered around the center with natural variation to either side. By Day 10, the center is beginning to hollow out, though the change is subtle. Day 20 marks a critical transition: the single peak has become a plateau. By Day 30, there are two distinct peaks with a valley between them. The center has not just shrunk; it has been evacuated.

The numbers tell the story starkly. Initially, 70% of the population fell within the moderate range (0.3 to 0.7 on our 0-to-1 political spectrum). After 30 days of algorithmic filtering and social reinforcement, only 20% remained. The extremes—those below 0.2 or above 0.8—grew from 10% to 45% of the population.
These simulations mirror real-world dynamics. It is a compressed version of what’s happening to democracies worldwide, played out at digital speed. In reality, the process takes years rather than days, but the mechanics are identical. Small differences, filtered through profit-maximizing algorithms and hardened by echo chambers, evolve into unbridgeable chasms.
The most unsettling aspect? The simulation assumes no bad actors, no deliberate manipulation, no conspiracy to divide us. Just platforms doing what they’re designed to do, politicians playing to win, and people responding predictably to these incentives. The catastrophe emerges from ordinary incentives operating at scale. We used to argue about politics over dinner tables and still pass the potatoes. Now we cannot even agree on basic facts. This is not because we've become worse people. It is because every algorithm, every campaign strategy, every business model rewards pushing us further apart. A democracy can survive disagreement. It cannot survive when we no longer speak the same language. 


Hayun Song, Ph.D., Associate Research Fellow, International Macroeconomics Team