Examining Neural Representations of Moral Responsibility in Human Judgments of Artificial Intelligence Agents Performing High-Stakes Tasks

Authors

  • Melanie de Susana, Psychometrist, Germany

Keywords:

Neural representations, moral responsibility, AI agents, high-stakes tasks, fMRI, ethical AI, responsibility attribution, decision-making, AI ethics, human-robot interaction

Abstract

This study examines how humans attribute moral responsibility to artificial intelligence (AI) agents performing high-stakes tasks, and which neural representations underlie these judgments. High-stakes tasks, such as those carried out by autonomous vehicles or healthcare robots, pose moral dilemmas in which human decision-makers must assess who, or what, is responsible for an outcome. We aim to characterize the neural mechanisms supporting these judgments and to examine their implications for AI ethics. Combining fMRI with behavioral experiments, the research analyzes how participants attribute responsibility to AI agents in scenarios involving life-or-death decisions. Our results show that neural responses in the prefrontal cortex and the temporoparietal junction correlate with moral decision-making processes involving AI agents. These findings contribute to the discourse on the ethical use of AI in high-stakes settings and provide insights for designing ethically responsible AI systems.
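The core analysis described above — relating trial-wise neural responses in a region of interest to behavioral responsibility ratings — can be sketched as follows. This is a minimal, hypothetical illustration, not the study's actual pipeline: all data are synthetic, and the variable names (`bold_tpj`, `ratings`) and the effect size are assumptions for demonstration only.

```python
import numpy as np

# Sketch: correlate trial-wise BOLD amplitudes from a region of interest
# (e.g. the temporoparietal junction) with participants' responsibility
# ratings for an AI agent. Synthetic data stand in for real recordings.

rng = np.random.default_rng(0)
n_trials = 40

# 7-point responsibility ratings and a simulated ROI signal that
# partially tracks them (assumed effect size of 0.5, plus unit noise).
ratings = rng.uniform(1, 7, n_trials)
bold_tpj = 0.5 * ratings + rng.normal(0, 1, n_trials)

def roi_rating_correlation(bold: np.ndarray, ratings: np.ndarray) -> float:
    """Pearson correlation between ROI amplitudes and behavioral ratings."""
    return float(np.corrcoef(bold, ratings)[0, 1])

r = roi_rating_correlation(bold_tpj, ratings)
print(f"TPJ-rating correlation: r = {r:.2f}")
```

In a real study, `bold_tpj` would come from preprocessed single-trial beta estimates, and the resulting per-participant correlations would feed a group-level test.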


Published

2025-03-14