Each month, The Existential Risk Research Assessment (TERRA) uses a unique machine-learning model to predict which publications are most relevant to existential risk or global catastrophic risk. The following is a selection of the papers identified this month.
Please note that we provide these citations and abstracts as a service to aid other researchers in paper discovery and that inclusion does not represent any kind of endorsement of this research by the Centre for the Study of Existential Risk or our researchers.
To prevent catastrophic asteroid–Earth collisions, it has been proposed to use nuclear explosives to deflect Earth-bound asteroids away from the planet. However, this policy of nuclear deflection could inadvertently increase the risk of nuclear war and other violent conflict. This article conducts risk–risk tradeoff analysis to assess whether nuclear deflection results in a net increase or decrease in risk. Assuming nonnuclear deflection options are also used, nuclear deflection may only be needed for the largest and most imminent asteroid collisions. These are low-frequency, high-severity events. The effect of nuclear deflection on violent conflict risk is more ambiguous due to the complex and dynamic social factors at play. Indeed, it is not clear whether nuclear deflection would cause a net increase or decrease in violent conflict risk. Similarly, this article cannot reach a precise conclusion on the overall risk–risk tradeoff. The value of this article comes less from specific quantitative conclusions and more from providing an analytical framework and a better overall understanding of the policy decision. The article demonstrates the importance of integrated analysis of global risks and the policies to address them, as well as the challenge of quantitative evaluation of complex social processes such as violent conflict.
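As a rough illustration of the kind of risk–risk tradeoff the article analyses, the short Python sketch below compares the expected harm averted by nuclear deflection with the expected harm added through conflict risk. Every probability and harm magnitude is an invented placeholder, not an estimate from the paper, and the function and variable names are ours.

# Illustrative risk-risk tradeoff sketch; every number is a placeholder,
# not an estimate from the paper.

def expected_harm(annual_probability, harm_if_event):
    """Expected annual harm from a single risk channel."""
    return annual_probability * harm_if_event

# Channel 1: asteroid collisions severe enough to require nuclear deflection.
p_large_collision = 1e-7       # assumed annual probability (placeholder)
harm_collision = 1e9           # assumed harm if the collision occurs (placeholder)
p_deflection_success = 0.9     # assumed success rate of nuclear deflection (placeholder)

# Channel 2: additional violent-conflict risk from keeping nuclear explosives available.
delta_p_conflict = 1e-6        # assumed added annual probability of conflict (placeholder)
harm_conflict = 1e8            # assumed harm if such a conflict occurs (placeholder)

risk_reduced = expected_harm(p_large_collision, harm_collision) * p_deflection_success
risk_added = expected_harm(delta_p_conflict, harm_conflict)
net_change = risk_added - risk_reduced

print(f"reduced={risk_reduced:.3g} added={risk_added:.3g} net={net_change:+.3g}")

The output simply reports the two expected harms and their difference; the point of the sketch is the structure of the comparison, since the article itself reaches no precise quantitative conclusion.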
(Translated abstract) The article addresses positive alarmism as a first step toward articulating the alarming topic of interconnected social and ecological conflicts. Although social and environmental conflicts are interlinked in practice, they are usually treated separately in theory; the article stresses their interconnections. It reflects on the fact that civilizations, and especially modern societies in the West, have created development as well as destruction. In order to formulate a normative solution from a perspective of global critical thinking grounded in specific macroregional civilizations, the article proposes a methodological move from the dialectic of enlightenment to intersubjective relations among human beings and nature, so as to overcome the threats of global capitalism and a potential collapse.
3. Sitnicki, I. (2018). Why AI shall emerge in the one of possible worlds? AI & Society, 1–7.
The aim of this paper is to present some philosophical considerations about the supposed emergence of AI in the future, the predicted timeline of which is uncertain. To avoid any speculation about the proposed analysis from a scientific point of view, a metaphysical approach is adopted as the modal context of the discussion. I argue that the modal claim that AI may emerge at a certain point in the future is justified from a temporal perspective. Therefore, worldwide society must be prepared for the possible emergence of AI and the expected profound impact of such an event on the existential status of humanity.
The purpose of this paper is to address both the evolutionary and control aspects associated with the management of artificial superintelligence. Through empirical analysis, the authors examine the diffusion pattern of those high technologies that can be considered forerunners to the adoption of artificial superintelligence (ASI). The evolutionary perspective is divided into three parts, based on major developments in this area, namely robotics, automation and artificial intelligence (AI). The authors then provide several dynamic models of the possible future evolution of superintelligence, including diffusion modeling, predator–prey models and hostility models. The problem of control in superintelligence is reviewed next, where the authors discuss Asimov's Laws and the IEEE initiative. The authors also provide an empirical analysis of the application of diffusion modeling to three technologies from the manufacturing, communication and energy industries, which can be considered potential precursors to the evolution of the field of ASI. The authors conclude with a case study illustrating emerging solutions in the form of long-term social experiments to address the problem of control in superintelligence.

Findings: The results from the empirical analysis of the manufacturing, communication and energy sectors suggest that the technology diffusion model fits the data on robotics, telecom and solar installations to date well. The results suggest a gradual diffusion process, like any other high technology. Thus, there appears to be no threat of "existential catastrophe" (Bostrom, 2014). The case study indicates that any future threat can be pre-empted by some long-term social measures.

Originality/value: This paper contributes to the emerging stream of work on artificial superintelligence. As humanity comes closer to grappling with the important question of the management and control of this technology for the future, it is important that modeling efforts be made to understand the existing pattern of high-technology diffusion. At present, relatively few such efforts are available in the literature.
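The abstract does not say which diffusion model the authors fit; as a hedged sketch of the general approach, the snippet below fits a logistic (S-shaped) diffusion curve to synthetic cumulative-adoption data. The series and parameter values are generated purely for illustration and are not the robotics, telecom or solar data analysed in the paper.

# Sketch of technology-diffusion curve fitting on synthetic data only.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Logistic diffusion curve: K = saturation level, r = growth rate, t0 = midpoint year."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# Synthetic cumulative-adoption series (placeholder data, not from the paper).
rng = np.random.default_rng(0)
years = np.arange(2000, 2020, dtype=float)
observed = logistic(years, 100.0, 0.5, 2010.0) + rng.normal(scale=2.0, size=years.size)

# Fit the curve and report the estimated parameters.
(K_hat, r_hat, t0_hat), _ = curve_fit(logistic, years, observed, p0=[90.0, 0.3, 2009.0])
print(f"saturation K={K_hat:.1f}, growth rate r={r_hat:.2f}, midpoint t0={t0_hat:.1f}")

A gradual, S-shaped fit of this kind is what the authors describe for the three sectors, although their actual data and model specification may differ.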
5. Stefánsson, H. O. (2019). On the limits of the precautionary principle. Risk Analysis.
The precautionary principle (PP) is an influential principle of risk management. It has been widely introduced into environmental legislation, and it plays an important role in most international environmental agreements. Yet, there is little consensus on precisely how to understand and formulate the principle. In this article I prove some impossibility results for two plausible formulations of the PP as a decision-rule. These results illustrate the difficulty in making the PP consistent with the acceptance of any tradeoffs between catastrophic risks and more ordinary goods. How one interprets these results will, however, depend on one's views and commitments. For instance, those who are convinced that the conditions in the impossibility results are requirements of rationality may see these results as undermining the rationality of the PP. But others may simply take these results to identify a set of purported rationality conditions that defenders of the PP should not accept, or to illustrate types of situations in which the principle should not be applied.
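The abstract does not reproduce the two formulations it examines; purely as an illustration of what "the PP as a decision rule" can look like, one textbook-style rendering is a lexical catastrophe-avoidance criterion. With options f and g, and C the set of catastrophic outcomes, such a rule says

\[ \Pr\nolimits_f(C) < \Pr\nolimits_g(C) \;\Longrightarrow\; f \succ g, \]

so that any reduction in the probability of catastrophe outweighs any loss of more ordinary goods. Impossibility results of the kind described arise when a rule like this is combined with conditions permitting tradeoffs between small differences in catastrophic risk and large ordinary gains; whether this matches either of the paper's actual formulations is an assumption on our part.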