Key concepts
Intelligence Rising is a strategic simulation of AI futures. It currently takes the form of a training workshop, an educational roleplay game that lets present and future decision-makers experience the tensions and risks that can emerge in the highly competitive environment of AI development.
Participants embody characters such as elected officials and their AI advisors in major states, and CEOs and their executive teams at leading technology firms. They try to emulate how these actors might behave in the pursuit of AI development over the coming decades, and explore the implications of their choices.
The tool was developed from the practice of ‘serious games’ or ‘simulation games’, used for strategic planning by policy makers and organizations (and derived from the military practice of ‘war games’). Like these tools, Intelligence Rising has been conceived to be generative and structured. Adversarial interactions between teams with distinct, often conflicting objectives, set in a decade-long scenario where AI technologies can advance rapidly along academically informed pathways, create possible futures that non-experts are unlikely to have imagined outside of this exercise.
Intelligence Rising focuses on the potential development of Radically Transformative Artificial Intelligence (RTAI). This refers to AI technologies or applications with the potential to lead to practically irreversible change that is 1) broad enough to impact most important aspects of life and society and 2) extreme enough to result in radical changes to the metrics used to measure human progress and well-being, or to reverse societal trends previously thought of as practically irreversible.
Purpose of the tool
Intelligence Rising aims to perform three important functions:
Making RTAI, and the risks associated with it, feel more plausible, especially to government and corporate policy teams and students in AI and technology governance.
Providing these audiences with key lessons from the growing AI governance literature.
Studying how these audiences perceive and think about complex and adversarial actions/interactions, path-dependencies, and mid-term futures associated with developing AI.
Background
The tool was originally conceived by Dr. Shahar Avin of the Centre for the Study of Existential Risk (CSER) to encourage participants to confront the challenges posed by imminent advances in Artificial Intelligence. It was inspired by a visit in 2017 to the Future of Humanity Institute, where he participated in a 3-person AI futures roleplay (which did not end well). Initially an unnamed project, it became "Intelligence Rising" after Shahar received a seed grant and started a collaboration with both academics and non-academics across several universities and institutions.
The design was developed by a group of researchers from the Universities of Cambridge, Oxford and Wichita State, to allow decision-makers to stress-test their assumptions and develop a stronger sense of the possible impacts and meaning of the gradual development of AI. The team included researchers in AI governance, strategy and futures; AI research; evaluation and risk communication; and game design.
The tool has evolved through five distinct versions, each meant for a different audience:
- an online role-play game
- a tabletop role-play game
- an educational board game
- a free-form role-play for large groups
- a free-form role-play for training AI researchers and high-level decision makers.
In 2022, a charity, Technology Strategy Roleplay, was established to oversee the ongoing development and deployment of the tool.
History of the tool’s development
Since 2018, the tool has undergone several stages of development.
It started out as a free-form role-play, in which participants took the part of a wide range of powerful actors and had free choice over the actions that they took. Decisions were made in turns, each representing a two-year period, and an expert facilitator provided a narrative resolution to all actions taken, whether publicly or in secret. The scenario would typically continue until the emergence of RTAI.
In phase two, various aspects of the tool were codified, including the actors assigned to participants, their powers and resources, research and development possibilities, and the mechanisms for action resolution and random event generation. These changes made experiences more consistent, enabled a wider range of participants to fully engage with the tool, and required less expertise for exercise facilitation.
More recently, the tool has been further developed by creating possibilities for online participation, adding victory conditions and concerns, and reducing the complexity of technology development and the resolution of unusual actions. A core goal of these changes has been to ensure the tool is focused on achieving its primary objectives and to allow for improved evaluation of exercises.
Exercise evaluation is performed via participant surveys (pre- and post-exercise), in-game observation, and assessments by the facilitator. These have been used to produce a series of academic publications around the tool and its implementation, which will be published shortly.
A paper presenting an early-stage version of the tool was also presented at AIES ’20: https://dl.acm.org/doi/abs/10.1145/3375627.3375817
Where to get started
Intelligence Rising has been run with a range of organizations, including successive cohorts of AI PhD students and teams at leading AI laboratories, and we are keen to work with a wide range of partners in running future exercises. Please reach out to us for quotes or more information, via email.
Exercises typically include between four and twelve players and last around four hours, including introductions and summaries. Each round of participant actions represents two years in-game, with exercises frequently spanning ten to fourteen years and covering the global dynamics of competing AI laboratories and state actors into the 2030s. Exercises can also be tailored to the needs of your organization, and we are happy to work with you to find custom solutions that can improve your team’s foresight on the issue of transformative AI.
More information can be found at https://www.intelligencerising.org/