Can AI Avoid the Enshittification Trap?

Cory Doctorow’s evocative theory of 'enshittification' provides a disturbingly accurate lens through which to view the lifecycle of digital platforms, and it now casts a long, ominous shadow over the burgeoning field of artificial intelligence. The pattern is a familiar, almost predictable rot: a platform, be it a social network or a search engine, starts by generously empowering its users, creating a vibrant ecosystem. Then it subtly shifts to favor its business customers, the advertisers and partners who fuel its revenue. Finally, in its terminal stage, it aggressively extracts value from everyone—users, customers, even its own ecosystem—until the entire structure becomes a hollowed-out, unusable shell of its former self, a digital ghost town.

We've witnessed this tragic arc with giants like Facebook and Amazon, and the question that now hangs in the air, thick with the scent of venture capital and processing power, is whether AI is destined to follow the same enshittified path.

The fundamental architecture of many major AI systems, particularly the large language models controlled by a handful of tech behemoths, creates a perfect petri dish for this decay. These models are often closed, proprietary black boxes, their inner workings guarded as crown jewels.
This centralization of power means the entities controlling the foundational models can, at their whim, alter access, change pricing structures, and manipulate the very fabric of the digital environment they've created, much like a social media company changing its algorithm to maximize engagement at the cost of truth, or a marketplace suddenly favoring its own private-label products over third-party sellers.

The initial, 'magical' phase of AI, where tools like ChatGPT felt like democratized access to godlike intelligence, could easily give way to a second phase where the AI's outputs are subtly tuned to favor corporate partners—imagine a model that, when asked for product recommendations, consistently highlights brands that have paid for placement, or one that generates code defaulting to a specific, paid cloud service. The final, extractive stage would see the model's capabilities intentionally degraded for free users to push subscriptions, its API costs jacked up to squeeze startups, and its core functionality so bogged down with commercial imperatives that its original utility is strangled.

This isn't merely speculative; we can see the early warning signs in the debates around data sourcing, where the very fuel for these models is scraped from the open web without compensation, a form of pre-emptive extraction from the global commons. The parallel to Asimov's Three Laws of Robotics is stark: we are in urgent need of a similar foundational ethics for AI governance, not coded into positronic brains, but baked into business models and regulatory frameworks.

Proponents of open-source AI offer a potential antidote, arguing that transparent, community-developed models can prevent the kind of centralized control that enables enshittification.
Yet even this path is fraught, as the immense computational costs of training state-of-the-art models naturally create high barriers to entry, potentially leaving only tech giants and well-funded governments with the resources to compete.

The regulatory landscape, currently scrambling to catch up, will be a decisive battleground. Will we see rules that enforce interoperability and data portability, allowing users to migrate their AI 'personas' and histories between platforms, thus creating competitive pressure? Or will regulation solidify the dominance of incumbents, creating a captured ecosystem? The stakes could not be higher.

If AI succumbs to this trap, it won't just be another degraded app on our phones; it risks becoming the central nervous system of our economy and society, a system that is inefficient, untrustworthy, and ultimately hostile to human flourishing. The challenge, then, is not just to build smarter AI, but to architect it within systems that are inherently resistant to the corrosive logic of enshittification, ensuring this powerful technology serves humanity rather than merely monetizing it.