R Soc Open Sci. 2024 May 15;11(5):230859.
doi: 10.1098/rsos.230859. eCollection 2024 May.

Towards algorithm auditing: managing legal, ethical and technological risks of AI, ML and associated algorithms


Adriano Koshiyama et al. R Soc Open Sci. 2024.

Abstract

Business reliance on algorithms is becoming ubiquitous, and companies are increasingly concerned about their algorithms causing major financial or reputational damage. High-profile cases include Google's AI algorithm for photo classification mistakenly labelling a black couple as gorillas in 2015 (Gebru 2020 In The Oxford handbook of ethics of AI, pp. 251-269), Microsoft's AI chatbot Tay that spread racist, sexist and antisemitic speech on Twitter (now X) (Wolf et al. 2017 ACM Sigcas Comput. Soc. 47, 54-64 (doi:10.1145/3144592.3144598)), and Amazon's AI recruiting tool being scrapped after showing bias against women. In response, governments are legislating and imposing bans, regulators are fining companies, and the judiciary is discussing potentially making algorithms artificial 'persons' in law. As with financial audits, governments, business and society will require algorithm audits: formal assurance that algorithms are legal, ethical and safe. A new industry is envisaged: Auditing and Assurance of Algorithms (cf. data privacy), with the remit to professionalize and industrialize AI, ML and associated algorithms. The stakeholders range from those working on policy/regulation to industry practitioners and developers. We also anticipate that the nature and scope of the auditing levels and framework presented will inform those interested in systems of governance and compliance with regulation/standards. Our goal in this article is to survey the key areas necessary to perform auditing and assurance, and to instigate the debate in this novel area of research and practice.
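
The figure captions below use statistical parity and accuracy as the concrete bias and performance metrics of such an audit (figures 8 and 11). As a minimal, illustrative sketch only, assuming standard metric definitions and toy data rather than anything prescribed by the authors, one such check might look like this in Python:

import numpy as np

def statistical_parity_difference(y_pred, protected):
    # P(y_hat = 1 | protected group) - P(y_hat = 1 | reference group)
    y_pred = np.asarray(y_pred)
    protected = np.asarray(protected, dtype=bool)
    return y_pred[protected].mean() - y_pred[~protected].mean()

def accuracy(y_true, y_pred):
    return (np.asarray(y_true) == np.asarray(y_pred)).mean()

# Toy audit over hypothetical binary predictions (illustrative values only).
y_true    = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred    = np.array([1, 0, 1, 0, 0, 1, 1, 0])
protected = np.array([1, 1, 1, 1, 0, 0, 0, 0])  # 1 = member of the protected group

print("accuracy:", accuracy(y_true, y_pred))  # 0.75 on this toy data
print("statistical parity difference:",
      statistical_parity_difference(y_pred, protected))  # 0.0 on this toy data

An audit of the kind envisaged in the article would track how such metrics move against each other as a model is tuned, which is the trade-off the figures below illustrate.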

Keywords: artificial intelligence; auditing; bias; explainability; machine learning; transparency.


Conflict of interest statement

We declare we have no competing interests.

Figures

Figure 1. Dimensions and examples of activities that are part of algorithm auditing.
Figure 2. Main learning paradigms of machine learning.
Figure 3. Traditional ML versus transfer learning versus meta-learning.
Figure 4. Types and levels of algorithm explainability.
Figure 5. The UK Information Commissioner's Office's colour-coded 'Assurance Rating' for data. Available at: https://ico.org.uk/for-organisations/audits/.
Figure 6. The overlaps between algorithm robustness, fairness, explainability and privacy.
Figure 7. Algorithm selection trade-offs: model-specific interpretability versus accuracy.
Figure 8. Algorithm selection trade-offs: bias (statistical parity) versus performance (accuracy).
Figure 9. Algorithm selection trade-offs: explainability (feature importance) versus privacy (data minimization).
Figure 10. (a) Feature importance chart with the breakdown per male and female groups. (b) Explainability (feature importance) based on a bias (disparate impact) metric.
Figure 11. Interaction between all verticals. The values displayed were estimated by permutation importance using accuracy and average odds difference as loss metrics (a minimal sketch of this estimator follows the figure list).
Figure 12. Information concealed versus feedback detail trade-off curve.
Figure 13. Feedback loop: from model development, assessment and mitigation, to redevelopment, reassessment and re-mitigation.
Figure 14. Diagram outlining the steps towards assurance: combining governance and impact assessments with audit and technical assessment; finding equivalent standards and regulations in the sector/end-application; generating a document/audit trail that will feed into certification and insurance as part of assurance.
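
The figure 11 caption attributes the displayed values to permutation importance, with accuracy and average odds difference as loss metrics. A minimal sketch of that estimator, assuming a scikit-learn-style predict interface and a generic metric(y_true, y_pred) signature (neither taken from the paper), is:

import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=10, seed=0):
    # Mean drop in metric(y, model.predict(X)) when each column of X is permuted.
    rng = np.random.default_rng(seed)
    baseline = metric(y, model.predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            # Permute column j only, breaking its association with the target.
            X_perm[:, j] = rng.permutation(X_perm[:, j])
            drops.append(baseline - metric(y, model.predict(X_perm)))
        importances[j] = np.mean(drops)  # average degradation attributed to feature j
    return importances

For the performance vertical, metric would be plain accuracy; for the fairness vertical, it would be a group metric such as average odds difference, closing over the protected attribute in addition to y_true and y_pred.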

