Abstract
We introduce the fundamental ideas and challenges of “Predictable AI”,
a nascent research area that explores the ways in which we can anticipate key
indicators of present and future AI ecosystems. We argue that achieving
predictability is crucial for fostering trust, liability, control, alignment and safety
of AI ecosystems, and thus should be prioritised over performance. While
distinct from other areas of technical and non-technical AI research, the
questions, hypotheses and challenges relevant to “Predictable AI” have yet to be
clearly described. This paper aims to elucidate them, calls for identifying paths
towards AI predictability, and outlines the potential impact of this emergent field.