Erin Taylor, Ph.D., Philosopher of Medical Ethics and AI

Associate Professor of Philosophy, Washington and Lee University

I'll be at IASEAI'26 in Paris (Feb 24–26). If you're working on AI oversight, let's connect: taylore@wlu.edu

I work at the intersection of theoretical ethics, applied ethics, and AI safety. My research focuses on how abstract moral principles get translated into concrete action-guidance, and what happens when they can't. This "specification problem" appears across domains: in professional and institutional ethics, in medical and research contexts, and increasingly in the design of AI oversight systems.

My current work focuses on human-in-the-loop (HITL) approaches to AI ethics oversight. Oversight institutions rely on proxies: consent forms stand in for genuine voluntariness; risk categories stand in for actual harm. As AI systems grow more sophisticated, they can raise legitimate challenges to whether these proxies track what they are supposed to track. I call this proxy destabilization, and I argue that it represents a structural vulnerability in proxy-based oversight. I presented this work at the National Institutes of Health in June 2025. The accompanying paper is under revise-and-resubmit at Ethics and Human Research.

Before turning to AI safety, I published on convention-based moral theory, the nature of role obligations, and procedural frameworks for navigating ethical complexity in medical contexts. My work has appeared in Australasian Journal of Philosophy, American Philosophical Quarterly, Health Care Analysis, and in volumes published by Oxford University Press.