Against Moral Deference: Why Artificial Moral Agents Need Not Undermine Phronesis

Authors

  • Jinglong Yang

Abstract

Rapid AI development poses numerous threats, but many of them could be overcome through technological progress. This raises an important normative question: if AI were to achieve technical perfection, would there be anything left to worry about? Aristotelian scholars Nir Eisikovits and Dan Feldman identify one such concern: the erosion of phronesis. This paper examines whether Artificial Moral Agents (AMAs)—a particular type of advanced AI—erode phronesis, and argues that, under the right accountability structure, AMAs can help cultivate rather than undermine phronesis.

Drawing on Aristotle's Nicomachean Ethics, I assess whether AMAs could qualify as responsible agents using the control and epistemic conditions of responsibility. I contend that sufficiently advanced AMAs could meet the control condition insofar as they exercise autonomous, context-sensitive deliberation over means and practical ends. However, even sophisticated AMAs would fail the epistemic condition, which requires awareness of morally salient particulars through moral perception. As a result, AMAs are not fitting targets for moral praise and blame under an Aristotelian accountability structure.

Furthermore, they are not fitting bearers of responsibility under contemporary accountability mechanisms either. This limitation has a practical implication: since AMAs cannot bear responsibility, attempts to offload moral authority onto them ultimately fail, giving humans a decisive incentive to retain final moral authority. With an appropriate collaboration structure, AMAs can facilitate the cultivation of phronesis by automating taxing lower-order tasks, freeing attention for more rewarding higher-order reasoning, and creating learning opportunities whenever human and AMA judgments diverge. By reserving moral responsibility for ourselves, we create conditions in which phronesis can flourish rather than atrophy.

Author Biography

Jinglong Yang

Jinglong Yang is a fourth-year undergraduate student at Vanderbilt University, working at the intersection of ethics, moral philosophy, metaphysics, and the philosophy of AI. His current research examines whether AI systems could genuinely possess normative competence—the ability to recognize and act on the practical reasons that apply to one’s actions—and what such a possibility would mean for trust, agency, and moral responsibility. In an age of rapid technological advancement, philosophy is urgently needed to guide and check the increasing power of science. His research is an endeavor in this direction. His broader philosophical interests include Spinoza, Aristotle, Iris Murdoch, and Stoicism.

Published

2026-04-23

How to Cite

Yang, Jinglong. 2026. “Against Moral Deference: Why Artificial Moral Agents Need Not Undermine Phronesis.” Dianoia: The Undergraduate Philosophy Journal of Boston College, April, 104–24. https://ejournals.bc.edu/index.php/dianoia/article/view/21676.

Section

Articles