23. The Singularity
How can we improve our anticipation and design of artificial superintelligence (ASI)? And what are the likely consequences of its arrival?
Note: text versions of many of the key ideas in this top-level area of the Vital Syllabus are available here.
Resources providing an overall introduction to the Singularity:
“What happens when our computers get smarter than we are?” – TED talk by Nick Bostrom
“Exoplanets and the Singularity: Why this changes everything” – talk by David Wood at TransVision 2021 in Madrid
“Artificial General Intelligence: Humanity’s Last Invention” by Ben Goertzel
“The Real Reason to be Afraid of Artificial Intelligence” by Peter Haas
23.1 The singularitarian stance
23.2 The singularity shadow
23.3 Different routes to superintelligence
23.4 Hard and soft take-off
23.5 Possible timescales to reach ASI
23.6 The Control Problem
Introductions to the Control Problem:
“The case for taking AI seriously as a threat to humanity” – Vox article by Kelsey Piper
“Can we build AI without losing control over it?” – TED talk by Sam Harris
“Why Would AI Want to do Bad Things? Instrumental Convergence” by Robert Miles (a toy code sketch of this idea follows this list)
“Intro to AI Safety” by Robert Miles
“Why Asimov’s Laws of Robotics Don’t Work” – Robert Miles for Computerphile
Ten flawed reasons why people ignore the question of AI safety, rebutted by Robert Miles
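None of the resources above include code, but the idea of instrumental convergence can be illustrated with a toy planner. The sketch below is entirely invented for illustration (all action names, payoffs, and numbers are made up, and it is not taken from any of the linked talks): a simple agent searching over short plans ends up acquiring resources first, whatever the difficulty of its terminal goal, because extra resources raise the expected payoff of almost any final step.

```python
from itertools import product

# Toy illustration of instrumental convergence (all names and numbers
# here are invented for the sketch). Whatever terminal goal the agent
# is given, the best plan begins by acquiring resources, because
# resources raise the expected payoff of almost any final action.

ACTIONS = ["pursue_goal", "acquire_resources", "idle"]

def expected_payoff(plan, goal_difficulty):
    """Score a plan: the agent attempts its terminal goal at most once,
    and its expected payoff scales with the resources gathered first."""
    resources = 1
    for step in plan:
        if step == "acquire_resources":
            resources += 1                       # instrumental sub-goal
        elif step == "pursue_goal":
            return resources / goal_difficulty   # attempt goal; episode ends
    return 0.0                                   # never attempted the goal

# For several unrelated terminal goals (modelled only by their
# difficulty), exhaustively search all three-step plans.
for difficulty in (1, 3, 10):
    best = max(product(ACTIONS, repeat=3),
               key=lambda plan: expected_payoff(plan, difficulty))
    print(f"goal difficulty {difficulty}: best plan = {best}")
# The best plan is always
# ('acquire_resources', 'acquire_resources', 'pursue_goal').
```

The point of the toy is only that resource acquisition emerges as a sub-goal without ever being asked for; the videos above develop the argument properly.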
23.7 The Alignment Problem
“Aligning AI systems with human intent” by OpenAI (a minimal sketch of preference-based reward learning follows below)
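The OpenAI video above discusses training systems from human feedback. As a purely illustrative aside (this is not OpenAI's code, and all the feature vectors and "preferences" below are invented), the core of preference-based reward learning can be sketched in a few lines: fit a reward function so that behaviours humans prefer receive higher scores, using the Bradley–Terry logistic model that RLHF-style pipelines build on.

```python
import numpy as np

# Minimal sketch of reward learning from pairwise human preferences
# (the Bradley-Terry model). All data here is synthetic.

rng = np.random.default_rng(0)

# Each behaviour is summarised by a feature vector; the "humans" in this
# toy secretly prefer behaviours with a larger first feature.
behaviours = rng.normal(size=(100, 3))
true_w = np.array([1.0, 0.0, 0.0])

# Collect pairwise comparisons: (i, j) means behaviour i was preferred to j.
pairs = []
for _ in range(500):
    i, j = rng.integers(0, len(behaviours), size=2)
    if behaviours[i] @ true_w > behaviours[j] @ true_w:
        pairs.append((i, j))
    else:
        pairs.append((j, i))

# Fit reward weights w by gradient ascent on the Bradley-Terry
# log-likelihood: P(i preferred over j) = sigmoid(r(i) - r(j)).
w = np.zeros(3)
lr = 0.1
for _ in range(200):
    grad = np.zeros(3)
    for i, j in pairs:
        diff = behaviours[i] - behaviours[j]
        p = 1.0 / (1.0 + np.exp(-(w @ diff)))   # model's P(i beats j)
        grad += (1.0 - p) * diff                # push preferred side up
    w += lr * grad / len(pairs)

print("learned reward direction:", np.round(w / np.linalg.norm(w), 2))
# The normalised weights point (approximately) along the hidden
# preference direction [1, 0, 0].
```

The hard part of alignment, which the toy skips entirely, is that real human preferences are noisy, inconsistent, and hard to elicit, and a reward model fitted this way can be gamed by a sufficiently capable optimiser.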
23.8 Human-ASI merger
23.9 No Planet B
23.10 The singularity principles
23.11 AGI or not AGI: fundamental choices