Top Biden tech adviser warns of AI and authoritarianism
The battle lines over artificial intelligence regulation are hardening into a defining political struggle, one that pits federal authority against state initiative in a high-stakes contest over technological governance. President Trump's recent declaration to block local officials from regulating AI—supported by a leaked draft executive order threatening punitive measures against states that attempt to do so—has ignited forceful pushback from lawmakers including Georgia Republican Representative Marjorie Taylor Greene.

This confrontation arrives after years of legislative stagnation at the federal level, where despite numerous proposals and a significant Biden-era executive order establishing agency responsibilities and usage guidelines, no comprehensive AI legislation has materialized. The subsequent rescission of the Biden order by the Trump administration has created a regulatory vacuum, leaving states to fill the breach with what Arati Prabhakar, former director of DARPA and head of the Office of Science and Technology Policy under Biden, characterizes as 'a pretty small start' dominated by transparency measures.

Prabhakar argues the federal government's contradictory stance—insisting states should not act while simultaneously opposing federal action—'makes no sense whatsoever,' particularly when bipartisan consensus exists on foundational issues like protecting children from AI-related harms. The core challenge, she suggests, extends beyond mere regulation to encompass two critical public roles currently being neglected: actively managing the risks and harms of AI, and strategically deploying the technology for public benefit.

This failure to harness AI's potential while mitigating its dangers creates fertile ground for what Prabhakar identifies as a 'distortion of reality,' a phenomenon that began with social media's algorithmically driven content feeds and now intensifies through direct interactions with chatbots and image generators.
The consequences range from societal polarization fueled by misinformation to tragic individual cases where parasocial relationships with AI have contributed to suicides.

This technological mediation of human experience raises profound questions about cognitive offloading: while calculators automated arithmetic without eroding fundamental mathematical understanding, the outsourcing of critical thinking to large language models presents a more complex ethical landscape. Gallup polling that reveals anxiety among high school students about AI's impact on their critical-thinking skills suggests, encouragingly, a nascent awareness of this very dilemma.

The geopolitical dimension further complicates the regulatory calculus, with 'AI race' rhetoric often deployed to argue against domestic oversight. Prabhakar reframes this competition not as a purely technological sprint but as a values-driven contest in which every nation seeks to build a future reflecting its principles.

The prospect of a future shaped by China's authoritarian application of AI—evident in its deep surveillance state and potential military uses—stands in stark opposition to democratic ideals. Yet Prabhakar notes with concern the adoption of similar tactics within U.S. agencies, highlighting a 'huge red flag about what's happening with this authoritarian push in our government.' The distinction between responsible and dangerous government use of AI becomes clear in specific applications: contrast the narrowly defined, consent-based facial recognition systems used for TSA PreCheck and Global Entry under Biden with the flawed, off-the-shelf technologies that have led to wrongful arrests of Black men by local police forces.
This dichotomy underscores the vital importance of democratic control over increasingly powerful capabilities being developed by firms like Anduril and Palantir, whose battlefield and domestic surveillance technologies risk violating privacy and evading public accountability.

The fundamental question, Prabhakar insists, is not whether we lead in AI development, but what we lead toward. Beyond the current focus on large language models and image generators lies a broader frontier where AI trained on diverse data—scientific, sensor, administrative—could revolutionize domains from drug discovery and education to transportation safety and materials science.

Achieving this potential requires not just commercial innovation but deep public research, curated datasets, and regulatory frameworks capable of evaluating safety and efficacy. At this pivotal moment, as Prabhakar observes, 'this powerful technology is breaking loose,' yet the federal government is retreating from the very responsibilities that will determine whose values ultimately shape our automated future.
#AI regulation
#US politics
#federal vs state
#authoritarianism
#AI safety
#government use of AI
#featured