Sci Eng Ethics. 2023 May 26;29(3):21. doi: 10.1007/s11948-023-00443-3.

Reflections on Putting AI Ethics into Practice: How Three AI Ethics Approaches Conceptualize Theory and Practice

Hannah Bleher et al.

Abstract

Critics currently argue that applied ethics approaches to artificial intelligence (AI) are too principles-oriented and entail a theory-practice gap. Several applied ethical approaches try to prevent such a gap by conceptually translating ethical theory into practice. In this article, we explore how the currently most prominent approaches of AI ethics translate ethics into practice. To this end, we examine three approaches to applied AI ethics: the embedded ethics approach, the ethically aligned approach, and the Value Sensitive Design (VSD) approach. We analyze each of these approaches by asking how it understands and conceptualizes theory and practice. We outline the conceptual strengths of each approach as well as its shortcomings: the embedded ethics approach is context-oriented but risks being biased by that context; ethically aligned approaches are principles-oriented but lack justification theories to deal with trade-offs between competing principles; and the interdisciplinary Value Sensitive Design approach is based on stakeholder values but needs to be linked to political, legal, or social governance aspects. Against this background, we develop a meta-framework for applied AI ethics conceptions with three dimensions. Drawing on critical theory, we suggest these dimensions as starting points for critically reflecting on how theory and practice are conceptualized. We claim, first, that including the dimension of affects and emotions in the ethical decision-making process stimulates reflection on vulnerabilities, experiences of disregard, and marginalization already within the AI development process. Second, we derive from our analysis that considering the dimension of justifying normative background theories provides standards and criteria as well as guidance for prioritizing or evaluating competing principles in cases of conflict. Third, we argue that reflecting on the governance dimension in ethical decision-making is important for revealing power structures and for realizing ethical AI and its application, because this dimension seeks to combine social, legal, technical, and political concerns. This meta-framework can thus serve as a reflective tool for understanding, mapping, and assessing the theory-practice conceptualizations within AI ethics approaches and for addressing and overcoming their blind spots.

Keywords: Aligned ethics; Critical theory; Embedded ethics; Ethics by design; Theory–practice gap; Value sensitive design.


Conflict of interest statement

The authors declare no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Figures

Fig. 1. Meta-framework for applied AI ethics approaches and its three conceptual dimensions. The first dimension of reflection focuses on affects/emotions, the second on justifications, and the third asks about governance aspects. These dimensions are connected to the guiding elements that drive the three analyzed approaches: intuitions, principles, and deliberation.


