The 2024 US presidential election created ideal conditions for studying algorithmic polarization. Divisive viral content proliferated, fake images reached tens of millions of viewers, and political emotions ran high, giving researchers a natural environment in which platform effects might be particularly strong and therefore easier to measure.
The campaign supplied abundant examples of the content under study: fabricated images damaging candidates' reputations, AI-generated propaganda pushing false narratives, and authentic but inflammatory posts from political actors and their supporters. This ecosystem meant researchers could measure how algorithms shape exposure to realistic rather than artificial stimuli.
Over 1,000 users participated during this heated period, unknowingly receiving modified feeds that increased or decreased their exposure to divisive content. The high-stakes environment meant participants were likely paying close attention to politics, potentially making them more susceptible to polarization effects than they would be during quieter periods.
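The section does not specify how the feeds were modified, but the manipulation can be pictured as a weighting adjustment applied to a divisiveness score during feed ranking. Below is a minimal sketch of that idea in Python; the names (`Post`, `base_score`, `divisiveness`, `weight`) are hypothetical and not taken from the study:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    base_score: float    # platform's engagement-based ranking score (hypothetical)
    divisiveness: float  # classifier output in [0, 1] (hypothetical)

def rerank(feed: list[Post], weight: float) -> list[Post]:
    """Re-rank a feed by up- or down-weighting divisive posts.

    weight > 0 boosts divisive content, weight < 0 suppresses it,
    and weight = 0 reproduces the unmodified control feed.
    """
    return sorted(
        feed,
        key=lambda p: p.base_score + weight * p.divisiveness,
        reverse=True,
    )

# Example: the same feed under two hypothetical treatment arms.
feed = [
    Post("local news roundup", base_score=0.8, divisiveness=0.1),
    Post("inflammatory hot take", base_score=0.7, divisiveness=0.9),
]
boosted = rerank(feed, weight=0.5)      # divisive post rises to the top
suppressed = rerank(feed, weight=-0.5)  # divisive post sinks
```

The key design point this sketch illustrates is subtlety: the manipulation changes only the ordering weight, not the pool of available content, which is consistent with participants not noticing the intervention.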
Results showed dramatic effects despite this already-inflammatory baseline. Even in an environment saturated with divisive political content, subtle algorithmic adjustments produced measurable polarization shifts equivalent to three years of natural change. This suggests that platform effects remain powerful even when users are actively engaged with politics and presumably more aware of potential manipulation.
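The "three years of natural change" framing is a conversion of the experimental effect into the pace of background attitude drift. The study's actual estimates are not given here, so the numbers below are placeholders that only illustrate the arithmetic:

```python
# Converting an experimental effect into "years of natural change".
# All numbers are hypothetical placeholders, not the study's estimates.

treatment_shift = 1.8  # measured attitude shift in the treated group (scale points)
annual_drift = 0.6     # average yearly polarization change in panel surveys (scale points)

years_equivalent = treatment_shift / annual_drift
print(f"Effect equivalent to {years_equivalent:.1f} years of natural change")
# -> Effect equivalent to 3.0 years of natural change
```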
The election context also makes the findings particularly policy-relevant. Democratic societies have a strong interest in protecting electoral integrity and ensuring that campaigns take place on a relatively level playing field. If platform algorithms can significantly shift political attitudes during elections, that is a form of influence that may warrant special oversight or regulation to protect democratic processes from invisible algorithmic manipulation.
