Taking a principled approach is crucial to the successful use of AI in pandemic management, say Stephen Cave and colleagues.
Abstract
In a crisis such as the covid-19 pandemic, governments and health services must act quickly and decisively to stop the spread of the disease. Artificial intelligence (AI), which in this context largely means increasingly powerful data-driven algorithms, can be an important part of that action, for example by helping to track the progress of a virus or to prioritise scarce resources.1 To save lives, it might be tempting to deploy these technologies at speed and scale. However, deployment of AI can affect a wide range of fundamental values, such as autonomy, privacy, and fairness. AI is much more likely to be beneficial, even in urgent situations, if those commissioning, designing, and deploying it take a systematically ethical approach from the start.
Ethics is about considering the potential harms and benefits of an action in a principled way. For a widely deployed technology, this lays a foundation of trustworthiness on which to build. Ethical deployment requires consulting widely and openly; thinking deeply and broadly about potential impacts; and being transparent about the goals pursued, the trade-offs made, and the values guiding these decisions. In a pandemic, such processes should be accelerated, but not abandoned. Otherwise, two main dangers arise: firstly, the benefits of the technology could be outweighed by harmful side effects, and secondly, public trust could be lost.2
The first danger arises because the potential benefits of AI increase the incentive to deploy systems rapidly and at scale, yet these same pressures increase the importance of an ethical approach: the speed of development limits the time available to test and assess a new technology, while the scale of deployment magnifies any negative consequences. Without forethought, this can lead to problems such as a one-size-fits-all approach that harms already disadvantaged groups.3
The second danger concerns public trust, which is crucial to the success of AI: contact tracing apps, for example, rely on widespread adoption to be effective.4 However, both technology companies and governments struggle to convince the public that they will use AI and data responsibly. After controversies such as the partnership between the AI firm DeepMind and the Royal Free London NHS Foundation Trust, privacy groups have warned against plans to allow increased access to NHS data.5 Similarly, concerns have been raised in China over the Health QR code system's distribution of data and control to private companies.6 Overpromising on the benefits of technology or relaxing ethical requirements, as has sometimes happened during this crisis,5 risks undermining long term trust in the entire sector. Whether potential harms become obvious immediately or only much later, adopting a consistently ethical approach from the outset will put us in a much better position to reap the full benefits of AI, both now and in the future.