Review

How should we regulate artificial intelligence?

Chris Reed. Philos Trans A Math Phys Eng Sci. 2018 Sep 13;376(2128):20170360. doi: 10.1098/rsta.2017.0360

Abstract

Using artificial intelligence (AI) technology to replace human decision-making will inevitably create new risks whose consequences are unforeseeable. This naturally leads to calls for regulation, but I argue that it is too early to attempt a general system of AI regulation. Instead, we should work incrementally within the existing legal and regulatory schemes which allocate responsibility, and therefore liability, to persons. Where AI clearly creates risks which current law and regulation cannot deal with adequately, then new regulation will be needed. But in most cases, the current system can work effectively if the producers of AI technology can provide sufficient transparency in explaining how AI decisions are made. Transparency ex post can often be achieved through retrospective analysis of the technology's operations, and will be sufficient if the main goal is to compensate victims of incorrect decisions. Ex ante transparency is more challenging, and can limit the use of some AI technologies such as neural networks. It should only be demanded by regulation where the AI presents risks to fundamental rights, or where society needs reassuring that the technology can safely be used. Masterly inactivity in regulation is likely to achieve a better long-term solution than a rush to regulate in ignorance. This article is part of a discussion meeting issue 'The growing ubiquity of algorithms in society: implications, impacts and innovations'.

Keywords: artificial intelligence; law; machine learning; regulation; transparency.

Conflict of interest statement

I declare I have no competing interests.

