
Swipe, Like, Radicalise: The Algorithmic Descent into Political Madness

  • Writer: Vicenta Wheatley
  • Apr 15
  • 5 min read


Source: Caitlin Leahy ‘25

Open your phone. Scroll for a couple of minutes. Now ask yourself: do you feel more informed - or just more upset?


Welcome to the algorithm’s favourite game: keep you hooked, keep you outraged, and quietly radicalise you while you think you’re just ‘staying up to date’. From infinite, swipe-happy dopamine hits at your fingertips to rage-bait politics served with a side of merch and dubious paid courses on how to “escape the matrix”, in 2025 we’re living in an online funhouse where the loudest, angriest (and not always correct) voices win - and calm, reasonable conversation gets drowned out by the algorithm’s thirst for chaos and clicks.

So, how did we get here? And why does it feel like it’s getting worse?


Let’s start with the algorithmic problem. Social media apps like TikTok and Instagram have crafted their algorithms to prioritise engagement. Makes sense - more user interaction means more time spent on the app, which means more ad revenue, and so higher profits. But unfortunately, engagement doesn’t always come from calm or complexity. Research has shown that people engage more when they’re emotionally charged. That’s not just a feeling - it’s a measurable pattern. A 2016 study found that “anger is more contagious than joy”, and that it can spark more angry follow-up behaviour. In other words, anger doesn’t just spread - it escalates. And the platforms are paying attention. Yale’s William Brady found that moral outrage is particularly powerful on social media because of the way platforms reward it. These incentives are changing the tone of our political conversations and the way they’re conducted, so what was once an honest debate now often feels like a shouting match - because shouting gets the clicks. Not surprisingly, this creates a vicious cycle: the more anger fuels user engagement, the more the algorithm promotes content that elicits anger. And once users realise this, they start creating content designed to provoke (better known as ‘rage bait’) - feeding a system that rewards outrage and often punishes nuance.
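
To make that incentive concrete, here’s a toy sketch in Python - not any platform’s real ranking code, and the weights and field names are entirely made up - of how a feed that simply maximises predicted engagement ends up boosting the angriest post:

```python
# Toy feed ranker (illustrative only - not any platform's real code).
# Assumption: the platform scores each post by predicted engagement,
# and high-arousal reactions are weighted more heavily because they
# tend to keep people on the app longer.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    angry_reactions: int  # stand-in for high-arousal engagement

# Hypothetical weights: shares and angry reactions count for more
# because (in this toy model) they predict longer sessions.
WEIGHTS = {"likes": 1.0, "shares": 3.0, "angry_reactions": 5.0}

def engagement_score(post: Post) -> float:
    """Crude 'predicted engagement' used to order the feed."""
    return (WEIGHTS["likes"] * post.likes
            + WEIGHTS["shares"] * post.shares
            + WEIGHTS["angry_reactions"] * post.angry_reactions)

feed = [
    Post("Calm, nuanced policy explainer", likes=120, shares=10, angry_reactions=2),
    Post("Outrage-bait hot take", likes=80, shares=40, angry_reactions=60),
]

# Rank the feed: the hot take wins despite having fewer likes.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):7.1f}  {post.text}")
```

Even in this crude model, the rage bait outranks the calm explainer - not because anyone chose anger, but because anger is what the score rewards.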


Not only does the algorithm tend to present us with more emotionally charged content - it also presents us with content that we already agree with, and so are likely to engage with. It’s made to keep us scrolling, but it also traps us in what author and activist Eli Pariser famously dubbed a “filter bubble” - an echo chamber where opposing opinions and perspectives get quietly filtered out while we’re left doom-scrolling through a personalised loop of our own biases. This phenomenon has serious political and societal repercussions, and the scale of the problem is worldwide. In its Global Risks Report, the World Economic Forum warned of the rise of “digital wildfires” - the rapid, unchecked spread of polarising content online that often fuels misinformation - labelling it a serious threat to global stability. Moreover, according to researcher Dr Sachin Modgil, this kind of algorithmic curation erodes common ground, which has been tied to the social and political gridlock we’re now seeing play out globally. Whilst personalised feeds may feel more convenient, the long-term result is a population more resistant to compromise, more certain of its moral superiority, and less curious. Creators have caught on, too: since shared outrage performs better than nuance, polarisation has become not just a by-product of the algorithm, but a business model.
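
The filter-bubble mechanic is just as easy to sketch. Below is another toy Python example - the topic vectors, titles, and profile are all invented for illustration - showing how “recommend what’s most similar to past clicks” quietly sinks opposing views to the bottom of the feed:

```python
# Toy "filter bubble" recommender (illustrative only - vectors and
# titles are invented). Each item gets a crude topic vector; the user
# profile is built from what they already clicked on.
import math

def cosine(a, b):
    """Similarity between two topic vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical catalogue: (left-leaning, right-leaning, sports) scores.
catalogue = {
    "left-leaning op-ed":  (0.9, 0.1, 0.0),
    "right-leaning op-ed": (0.1, 0.9, 0.0),
    "football highlights": (0.0, 0.1, 0.9),
}

# Profile derived purely from past clicks - here, mostly left-leaning reads.
user_profile = (0.8, 0.1, 0.1)

# Rank by similarity: the opposing op-ed drops to the bottom, and each
# new click nudges the profile further in the same direction.
ranked = sorted(catalogue.items(),
                key=lambda item: cosine(user_profile, item[1]),
                reverse=True)
for title, vec in ranked:
    print(f"{cosine(user_profile, vec):.2f}  {title}")
```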



Source: NSW Government

Ever seen an ad for a course on “how to escape the matrix”? No? How about a podcast clip claiming to reveal “what they don’t want you to know”, or a creator selling access to their exclusive “truth community”? Still no? I envy you. In today’s outrage economy, controversy isn’t just content - it’s currency. The fusion of social and economic activity on digital platforms is a recent development, and while it does have positive effects - widening access to entrepreneurship by offering alternative pathways to economic empowerment and lowering barriers to entry (Ide et al., 2024) - a portion of influencers and the like have seized on the fact that our division, our outrage, can be monetised. How do they do this? They provoke, then offer a shared sense of frustration and mutual outrage, and finally sell some product that stands as a symbol of resistance to whatever cause or event seems to be provoking you the most - manipulating you into thinking you’re enacting some sort of justice, or sticking it to ‘them’ (whoever ‘them’ is). Take Andrew Tate, who, after riling viewers up with combative soundbites about the unjust ‘system’, pushes hyper-masculine hustler ideology through paid programs like “Hustler’s University” (yes, that’s a real thing), marketed as a way to escape the ‘matrix’. Or streamers like Hasan Piker, who - while not as overtly commercial as Tate - thrives on performative outrage and has been criticised for blurring the line between genuine activism and profit-making. These creators know what sells: division, anger, and the sense that you’re fighting back against some enemy. With the way things are currently set up, the monetisation of controversy will continue to ensure divisive content remains profitable and widespread.


Social media algorithms have already learnt what keeps us riled up, and they’re only getting better at serving it to us. With echo chambers trapping us in our own biases and influencers cashing in on the chaos, it’s clear that outrage is the product. Is there any way of fighting this? Maybe - but it starts with awareness, and with resisting the urge to get sucked into social media’s political madness. At a broader level, though, real change may only happen if pressure is placed on platforms to rethink the incentives they’ve built. For now, go touch some much-needed grass.


 
