“Hard takeoff (often referred to as an ‘AI FOOM’ or rapid intelligence explosion) is a hypothetical scenario where an Artificial General Intelligence (AGI) improves its own source code and architecture, leading to a rapid, exponential, and runaway increase in its intelligence.” – Hard takeoff
A hard takeoff, frequently called an ‘AI FOOM’ or rapid intelligence explosion, describes a scenario in which an Artificial General Intelligence (AGI) recursively self-improves by rewriting its own source code and architecture, producing an exponential surge in intelligence that outpaces human control within minutes, hours, days, or at most months.1,2,3 This contrasts sharply with a soft takeoff, in which intelligence grows gradually over years or decades, potentially allowing human oversight and intervention.1,2,3 The concept hinges on the premise that a software-based AGI can enhance its capabilities far more swiftly than biological humans, potentially yielding superintelligence without intermediate precursor systems and raising profound risks of unintended behaviours or an ‘unfriendly AI’.1,3,4
The dynamics of a hard takeoff resemble compound interest: if an AI’s rate of improvement is proportional to its current intelligence, its capabilities follow the differential equation dy/dt = m y, whose solution y(t) = y_0 e^{mt} grows exponentially and far outpaces linear progress.4 Factors influencing takeoff speed include the power of available hardware relative to the demands of the AGI architecture: abundant hardware enables swift self-improvement, while hardware constraints or dependence on slow real-world feedback favour a soft takeoff.2 Proponents argue that, given proper value alignment, a hard takeoff could actually be less disruptive, since a well-aligned AGI could execute the transition with superior precision.3
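The compound-interest dynamic can be made concrete with a short numerical sketch. This is a toy illustration, not a model from the cited sources: the coefficient m, the step size, and the starting capability below are arbitrary assumptions chosen only to contrast constant-increment (soft) growth with growth whose rate is proportional to current capability (dy/dt = m y).

```python
# Toy comparison of linear vs. self-reinforcing capability growth.
# Parameters (initial, m, steps, dt) are illustrative assumptions only.

def simulate(initial=1.0, m=0.5, steps=10, dt=1.0):
    """Forward-Euler integration of linear vs. self-reinforcing growth."""
    linear = recursive = initial
    rows = []
    for step in range(steps + 1):
        rows.append((step * dt, linear, recursive))
        linear += m * dt                 # constant increment: steady, "soft" progress
        recursive += m * recursive * dt  # increment scales with capability: runaway growth
    return rows

if __name__ == "__main__":
    for t, lin, rec in simulate():
        print(f"t={t:5.1f}  linear={lin:8.2f}  self-improving={rec:12.2f}")
```

With these default values the self-improving curve multiplies by 1.5 each step (roughly 58 after ten steps) while the linear curve merely adds 0.5 per step (6 after ten steps), mirroring the e^{mt} versus linear contrast described above.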
Critics such as J. Storrs Hall question ‘overnight’ scenarios, arguing that they quietly assume hyperhuman starting capabilities, while Ben Goertzel considers a ‘semihard’ takeoff over roughly five years plausible, with the AGI accumulating wealth and integrating into society before reaching superintelligence.1
Key Theorist: Eliezer Yudkowsky
**Eliezer Yudkowsky** is the preeminent theorist associated with the hard takeoff concept, having coined ‘FOOM’ to depict the abrupt, uncontrollable ascent of a single AGI through recursive self-improvement, outstripping global control mechanisms.4,5 Yudkowsky, born in 1979, is a pivotal figure in AI safety and rationalism; he founded the Machine Intelligence Research Institute (MIRI) in 2000 (initially the Singularity Institute for Artificial Intelligence) to mitigate existential risks from misaligned superintelligence.5 A self-taught prodigy who left formal schooling early, he authored influential essays on LessWrong, popularising the intelligence explosion hypothesis of I.J. Good and warning that an unaligned AGI could dominate humanity in a ‘hard takeoff’ scenario.4,5
Yudkowsky’s relationship to the term stems from his 2000s writings contrasting his ‘FOOM’ vision with Robin Hanson’s slower, economically distributed takeoff, emphasising the local dynamics of a single AGI rapidly bootstrapping itself to dominance.5 His biography reflects autodidactic intensity: diagnosed with Asperger’s, he immersed himself in AI, decision theory, and Bayesian reasoning, and wrote Harry Potter and the Methods of Rationality (2010-2015) to propagate rational thinking. Through MIRI he pioneered formal AI alignment research, influencing areas such as value learning and logical induction, driven by fears of a hard takeoff catastrophe.4,5
References
1. https://www.nextbigfuture.com/2015/01/quantifying-and-defining-hard-versus.html
2. http://multiverseaccordingtoben.blogspot.com/2011/01/hard-takeoff-hypothesis.html
3. https://ar5iv.labs.arxiv.org/html/1704.00783
4. https://www.lesswrong.com/posts/tjH8XPxAnr6JRbh7k/hard-takeoff
5. https://www.alignmentforum.org/posts/YgNYA6pj2hPSDQiTE/distinguishing-definitions-of-takeoff
6. https://embeddedai.buzzsprout.com/2429696/episodes/16549691-ai-s-hard-takeoff-agi-in-1-6-years
7. https://edoras.sdsu.edu/~vinge/misc/ac2005/

