Will AI make research on humans less human?
The question of whether AI will make research on humans less human is not a distant philosophical exercise; it is a pressing ethical dilemma unfolding in labs and boardrooms today. For decades, the field of human subjects research (HSR) has operated under a framework born from historical atrocities: the Tuskegee Syphilis Study and the horrors of Nazi experimentation. The resulting ethical guardrails, crystallized in the 1979 Belmont Report and enforced by Institutional Review Boards (IRBs), were built for a world where research meant direct interaction with a living person. That world is gone.

As AI ethicist and IRB expert Tamiko Eto explains, the paradigm has fundamentally inverted. We once studied individuals to infer truths about populations. Now, AI systems hoover up vast, population-level datasets (our medical records, our online behaviors, our biometric signatures) to make consequential predictions about individuals, often without their knowledge or meaningful consent. This shift from human subjects to ‘human data subjects’ exposes a regulatory chasm. Our definitions of identifiability, anchored in laws like HIPAA, are laughably outdated in an age when an MRI scan or a Fitbit’s step count can serve as a unique fingerprint.

The promise of AI in research is undeniable: earlier disease detection, personalized treatment models, and the acceleration of scientific discovery. But that promise is contingent on responsible development, a condition that is frequently unmet. The immediate risks are the ‘black box’ nature of algorithmic decision-making and the erosion of privacy in a nation with scant data ownership rights. More insidiously, Eto points to the specter of the ‘digital twin’: a comprehensive virtual model of a person, constructed from disparate data streams, that can predict your health outcomes or mimic your voice while existing in a legal limbo where you have no claim to it.

The long-term societal risks are where the old frameworks fail most catastrophically. IRBs are designed to assess risk to the individual participant, not to contemplate systemic discrimination or the embedding of historical biases into the architecture of daily life. The data used to train these powerful tools is often scraped from marginalized communities without consent, yet the refined, deployed systems disproportionately harm those same groups. This is not just a technical failure; it is an ethical one, a violation of the Belmont principle of justice. The path forward demands more than AI literacy for regulators; it requires a wholesale reimagining of oversight for a cyclical, data-driven research model.
#AI regulation
#human subjects research
#Institutional Review Board
#data privacy
#ethics
#digital twin