The real AI threat is algorithms that ‘enrage to engage’
While media personalities and tech CEOs dominate headlines with finger-pointing and doomsday prophecies about artificial intelligence, they're skillfully diverting attention from the true existential crisis unfolding in plain sight: the algorithmic machinery systematically eroding our social fabric. This isn't about a future AI apocalypse; it's about the present-day design of engagement algorithms that monetize outrage and accelerate radicalization.

The business model is brutally simple: enrage to engage. Platforms like Facebook, YouTube, and the deliberately opaque TikTok prioritize content that triggers the strongest emotional reactions, creating a digital ecosystem where the most hysterical voices are amplified into mainstream consciousness. This isn't an accidental byproduct; it's the core engine of growth.

The consequences are terrifyingly tangible. We've witnessed a sharp rise in what the FBI now terms 'nihilistic violent extremism'—violence driven less by coherent ideology than by alienation, performative rage, and a desperate quest for social status in online tribes. The tragic killings of figures like Charlie Kirk and the brutal attacks on elected officials are not isolated incidents but likely symptoms of a system that efficiently funnels disillusioned individuals, particularly young men, from a sense of uselessness toward violent action.

As author Jonathan Haidt notes, the share of adolescents who feel their lives are 'useless' has more than doubled since 2012, a statistic that should alarm everyone. Algorithms exploit this vulnerability, creating parallel realities in which a yoga mat search tags you as a liberal and floods your feed with climate news, while a truck search funnels your neighbor into a world of anti-regulation commentary. These customized realities don't just harden beliefs; they make comprehension of the 'other' nearly impossible.

The legal system is woefully unequipped to police this digital landscape. Convictions, as in the 2018 Tree of Life synagogue shooting, can take years, while new permission structures for violence take root in days. Meanwhile, the very tech leaders sounding alarms about AGI—Sam Altman with his 'lights out for all of us' warning, Elon Musk forecasting human obsolescence—are unintentionally fueling the fire. Data from intelligence firm Narravance reveals that after consuming dystopian narratives about AI-driven job loss, a shocking 17.5% of U.S. adults found violence against Musk justified, a figure that spiked to nearly 32% in early 2025. This isn't a coincidence; it's a direct outcome of apocalyptic rhetoric being stripped of nuance, meme-ified, and algorithmically distributed as gospel in communities hungry for purpose.

The corporate world is reacting: 44% of global companies now actively monitor social media, the deep web, and the dark web for threats, and two-thirds are increasing physical security budgets. But hardening perimeters and hiring armed escorts, as Allied Universal's Glen Kucera confirms is happening, is a defensive tactic against a systemic offensive.

The real threat, as computer scientist Joseph Weizenbaum presciently warned in 1975, isn't sentient machines; it's our surrender to systems engineered to keep us engaged, enraged, and endlessly divided. The apocalypse won't arrive via killer robots, but through the steady, algorithmic erosion of shared reality and human judgment itself, leaving us perpetually on the brink.
#social media algorithms
#online radicalization
#political violence
#AI apocalypse
#enrage to engage
#tech CEOs
#lead focus news