Abstract
The range of application of artificial intelligence (AI) is vast, as is the potential for harm. Growing awareness of potential risks from AI systems has spurred action to address those risks, while eroding confidence in AI systems and the organizations that develop them. A 2019 study (1) found more than 80 organizations that have published and adopted “AI ethics principles,” and more have joined since. But the principles often leave a gap between the “what” and the “how” of trustworthy AI development. Such gaps have enabled questionable or ethically dubious behavior, which casts doubt on the trustworthiness of specific organizations and of the field more broadly. There is thus an urgent need for concrete methods that both enable AI developers to prevent harm and allow them to demonstrate their trustworthiness through verifiable behavior. Below, we explore mechanisms [drawn from (2)] for creating an ecosystem in which AI developers can earn trust, if they are trustworthy (see the figure). Better assessment of developer trustworthiness could inform user choice, employee actions, investment decisions, legal recourse, and emerging governance regimes.
References and Notes
1 A. Jobin, M. Ienca, E. Vayena, Nat. Mach. Intell. 1, 389 (2019).
2 M. Brundage et al., arXiv:2004.07213 (2020).
3 M. Whittaker et al., “AI Now report 2018” (AI Now Institute, 2018); https://ainowinstitute.org/AI_Now_2018_Report.pdf.
4 S. Thiebes, S. Lins, A. Sunyaev, Electron. Mark. 31, 447 (2021).
5 P. Xiong et al., arXiv:2101.03042 (2021).
6 S. Hua, H. Belfield, Yale J. Law Technol. 23, 415 (2021).
7 R. Bell, ACM Int. Conf. Proceed. Ser. 162, 3 (2006).
8 International Organization for Standardization (ISO), “Report on standardisation prospective for automated vehicles (RoSPAV)” (ISO/TC 22 Road Vehicles, ISO, 2021); https://isotc.iso.org/livelink/livelink/fetch/-8856347/8856365/8857493/ISO_TC22_RoSPAV.pdf.
9 P. Kairouz et al., arXiv:1912.04977 (2019).
10 C. Dwork et al., in Theory of Cryptography Conference (Springer, 2006), pp. 265–284.
11 G. Falco et al., Nat. Mach. Intell. 3, 566 (2021).
12 A. Askell, M. Brundage, G. Hadfield, arXiv:1907.04534 (2019).
13 S. McGregor, arXiv:2011.08512 (2020).
14 S. McGregor, “The first taxonomy of AI incidents” (Partnership on AI, 2021); https://incidentdatabase.ai/blog/the-first-taxonomy-of-ai-incidents.
15 European Commission, “Proposal for a regulation of the European parliament and of the council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain union legislative acts” (COM/2021/206 final, European Commission, 2021); https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206.