Each month, The Existential Risk Research Assessment (TERRA) uses a unique machine-learning model to predict those publications most relevant to existential risk or global catastrophic risk. All previous updates can be found here. The following is a selection of the papers identified this month.
Please note that we provide these citations and abstracts as a service to aid other researchers in paper discovery and that inclusion does not represent any kind of endorsement of this research by the Centre for the Study of Existential Risk or our researchers.
An intelligent machine surpassing human intelligence across a wide set of skills has been proposed as a possible existential catastrophe (i.e., an event comparable in value to that of human extinction). Among those concerned about existential risk related to artificial intelligence (AI), it is common to assume that AI will not only be very intelligent, but also be a general agent (i.e., an agent capable of action in many different contexts). This article explores the characteristics of machine agency, and what it would mean for a machine to become a general agent. In particular, it does so by articulating some important differences between belief and desire in the context of machine agency. One such difference is that while an agent can by itself acquire new beliefs through learning, desires need to be derived from preexisting desires or acquired with the help of an external influence. Such influence could be a human programmer or natural selection. We argue that to become a general agent, a machine needs productive desires, or desires that can direct behavior across multiple contexts. However, productive desires cannot be derived sui generis from non-productive desires. Thus, even though general agency in AI could in principle be created by human agents, general agency cannot be spontaneously produced by a non-general AI agent through an endogenous process (i.e., self-improvement). In conclusion, we argue that a common AI scenario, where general agency suddenly emerges in a non-general agent AI, such as DeepMind’s superintelligent board game AI AlphaZero, is not plausible.
The Sustainable Development Goals (SDGs) have now been in place for four years as the centerpiece of the United Nations' sustainable development program. This paper argues that the Earth system fundamentally represents the organizational framework of the planet and that, therefore, any attempt at avoiding the existential threat to humanity that our activities are creating must be integrated within this system. We examine how complex systems function in order to identify the key characteristics that any sustainability policy must possess in order to deliver successful, long-term coexistence of humanity within the biosphere. We then examine what this means for the SDGs, currently the dominant policy document on global sustainability and lying at the heart of Agenda 2030. Drawing on systems theory, the paper explores what a sustainable program of actions, aimed at properly integrating humanity within the Earth system, should look like, and what changes are needed if humanity is to address the multiple challenges facing us. Central to this is the acknowledgement of shortcomings in current policy and the urgent need to address these in practice.
Does the technological capability for “self-destruction” grow faster than the political capacity to control and restrain it? If so, then the uneven growth rates between technology and politics could provide a theoretical explanation for the “Fermi Paradox”—or the contradiction between the high probability of the existence of intelligent life and the absence of empirical evidence for it “out there” in the universe. This paper postulates the anarchy-technology dilemma as a solution to the Fermi Paradox: in essence, intelligent civilizations develop the technological capability to destroy themselves before establishing the political structures to prevent their self-destruction.
An apocalyptic zeitgeist infuses global life, yet this is only minimally reflected in International Relations (IR) debates about the future of world order and the implications of climate change. Instead, most approaches within these literatures follow what I call a “continuationist” bias, which assumes that past trends of economic growth and inter-capitalist competition will continue indefinitely into the future. I identify three key reasons for this assumption: 1) a lack of engagement with evidence that meeting the Paris Agreement targets is incompatible with continuous economic growth; 2) an underestimation of the possibility that failure to meet these targets will unleash irreversible tipping points in the Earth system; and 3) limited consideration of the ways climate change will converge with economic stagnation, financial instability, and food system vulnerabilities to intensify systemic risks to the global economy in the near-term and especially later this century. I argue that IR scholars should therefore explore the potential for ‘post-growth’ world orders to stabilize the climate system, consider how world order may adapt to a three- or four-degree world if Paris Agreement targets are exceeded, and investigate the possible dynamics of global ‘collapse’ in case runaway climate change overwhelms collective adaptation capacities during this century.
I shall discuss, from a personal perspective, research on risk perception that has created an understanding of the dynamic interplay between an appreciation of risk that resides in us as a feeling and an appreciation of risk that results from analysis. In some circumstances, feelings reflect important social values that deserve to be considered along with traditional analyses of physical and economic risk. In other situations, both feelings and analyses may be shaped by powerful cognitive biases and deep social and partisan prejudices, causing nonrational judgments and decisions. This is of concern if risk analysis is to be applied, as it needs to be, in managing existential threats such as pandemic disease, climate change, or nuclear weapons amidst a divisive political climate.
In recent years, with the rapid economic development of many countries and the growth of all sectors of industry, the use of oil, one of the main raw materials for the products of major enterprises, has also increased, but at the same time it has caused many hazards, both repairable and irreparable. This paper explains the importance of oil for economic development and the current state of the oil industry, and stresses the importance of environmental protection. The Earth is humanity's home, and a healthy natural environment is one of the foundations of human survival. The paper lists the ways in which reliance on oil constrains the implementation of environmental protection policies, and then puts forward suggestions, in the hope that enterprises and governments will increase investment in renewable energy, reduce the pressure that petroleum places on the environment, and promote the harmonious development of humanity and nature.
The coronavirus crisis has been an unexpected stress test for our societies. Increasing social resilience against future existential risks, from new pathogens to asteroids or changes in the climate, should be, along with mitigation efforts, one of the main objectives after overcoming this pandemic. In this article, we review some issues related to the vulnerability and preparedness of societies against natural risk and, specifically, against the coronavirus crisis, such as the response capacity of government and of society as a whole, risk communication, and leadership. The article aims to contribute to fostering reflection, analysis and future debate on preparedness for natural risk.