
6 Nov 2025

“If you upgrade that assistant to see video at 1 FPS—think Meta’s glasses—the back-of-the-envelope says you'd need to roughly 10× the grid to accommodate that for everyone. If you upgrade the text assistant to reason at the level of models working on the ARC AGI benchmark (IQ-test-style problems), even just the text assistant would require around a 10× of today’s grid—around 5 terawatts—insanity.” - Trevor McCourt - Extropic CTO

The quoted remark by Trevor McCourt, CTO of Extropic, underscores a crucial bottleneck in scaling artificial intelligence: energy demand is outpacing progress in compute efficiency, threatening the viability of universal, always-on AI. The quote translates hard technical extrapolation into plain language. If every person had a vision-capable assistant running at just one video frame per second, or if text assistants reasoned at the level of models tackling the ARC-AGI benchmark, the power grid would need to grow several times over, into the terawatt range, figures that quickly become economically and physically absurd.
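To make the scale of that extrapolation concrete, the sketch below runs a generic back-of-envelope calculation of the kind McCourt describes. Every input (user count, queries per day, joules per inference) is an assumed round number chosen purely for illustration, not a figure from the talk.

```python
# Illustrative back-of-envelope for always-on assistant power demand.
# All inputs are assumed round numbers for the sake of the arithmetic,
# not figures from McCourt's keynote.

USERS = 8e9              # assume roughly everyone on Earth has an assistant
SECONDS_PER_DAY = 86_400

def average_power_tw(inferences_per_user_per_day: float,
                     joules_per_inference: float) -> float:
    """Average continuous power draw in terawatts for a given workload."""
    daily_energy_j = USERS * inferences_per_user_per_day * joules_per_inference
    return daily_energy_j / SECONDS_PER_DAY / 1e12

# Text assistant at today's scale: ~300 queries/day at an assumed ~1 kJ per query.
print(f"Basic text assistant:     {average_power_tw(300, 1_000):.2f} TW")

# Reasoning-grade text assistant: assume ~100x more compute per query.
print(f"Reasoning text assistant: {average_power_tw(300, 100_000):.2f} TW")

# Vision at 1 frame per second, all day: 86,400 frames at an assumed 100 J each.
print(f"1 FPS video assistant:    {average_power_tw(SECONDS_PER_DAY, 100):.2f} TW")
```

Under these assumed inputs the always-on workloads land in the low terawatts, which is the order of magnitude McCourt is pointing at; the exact figures depend entirely on the per-inference energy one plugs in.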

Backstory and Context of the Quote & Trevor McCourt

Trevor McCourt is the co-founder and Chief Technology Officer of Extropic, a pioneering company targeting the energy barrier that limits mass-market AI deployment. With multidisciplinary roots spanning mechanical engineering and quantum programming, honed at the University of Waterloo and the Massachusetts Institute of Technology, McCourt contributed to projects at Google before moving to the hardware-software frontier. His leadership at Extropic is defined by a willingness to challenge orthodoxy and champion a first-principles, physics-driven approach to AI compute architecture.

The quote arises from a keynote on how present-day large language models and diffusion AI models are fundamentally energy-bound. McCourt’s analysis is rooted in practical engineering, economic realism, and deep technical awareness: the computational demands of state-of-the-art assistants vastly outstrip what today’s grid can provide if deployed at population scale. This is not merely an engineering or machine learning problem, but a macroeconomic and geopolitical dilemma.

Extropic proposes to address this impasse with Thermodynamic Sampling Units (TSUs), a new silicon compute primitive designed to natively perform probabilistic inference while consuming orders of magnitude less power than GPU-based digital logic. Here, McCourt follows the direction set by energy-based probabilistic models and advances it in both hardware and algorithms.
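To ground what "natively performing probabilistic inference" means in practice, the snippet below runs Gibbs sampling over a tiny Boltzmann-machine-style energy model, the textbook workload that sampling-native hardware is meant to accelerate. It is a generic NumPy illustration under simple assumptions, not Extropic's algorithm or a model of a TSU.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny fully visible Boltzmann machine: binary units s in {0,1}^n with
# energy E(s) = -0.5 * s^T W s - b^T s  (W symmetric, zero diagonal).
n = 5
W = rng.normal(scale=0.5, size=(n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)
b = rng.normal(scale=0.5, size=n)

def gibbs_chain(steps: int = 10_000) -> np.ndarray:
    """Draw correlated samples from P(s) proportional to exp(-E(s))."""
    s = rng.integers(0, 2, size=n).astype(float)
    samples = np.empty((steps, n))
    for t in range(steps):
        i = t % n
        # The conditional of unit i given the rest is a logistic function of
        # its local field, so each update is just a biased coin flip.
        local_field = W[i] @ s + b[i]
        p_on = 1.0 / (1.0 + np.exp(-local_field))
        s[i] = float(rng.random() < p_on)
        samples[t] = s
    return samples

samples = gibbs_chain()
print("Mean activation per unit:", samples[1_000:].mean(axis=0).round(3))
```

The relevant contrast is that the core operation here is not a dense multiply-accumulate but a stream of biased coin flips driven by local fields, which is the style of primitive the TSU concept is aimed at.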

McCourt’s career has been defined by innovation at the technical edge: microservices in cloud environments, patented improvements to dynamic caching in distributed systems, and research into scalable backend infrastructure. This breadth, from academic research to commercial deployment, underpins both his holistic critique of the GPU-centred AI paradigm and his leadership of Extropic as a deep-technology startup.

Leading Theorists & Influencers in the Field

Several waves of theory and practice converge in McCourt’s and Extropic’s work:

1. Geoffrey Hinton (Energy-Based and Probabilistic Models):
Long before deep learning’s mainstream embrace, Hinton’s foundational work on Boltzmann machines and energy-based models framed learning and inference as sampling from complex probability distributions. These early probabilistic paradigms anticipated both the scaling difficulties and the algorithmic challenges that underlie today’s generative models. Hinton’s recognition, including the 2024 Nobel Prize in Physics for foundational work on neural networks such as the Boltzmann machine, cements his stature as a theorist whose footprints underpin Extropic’s approach.

2. Michael Frank (Reversible Computing):
Frank is a prominent figure in reversible and adiabatic computing, with work spanning MIT, Sandia National Laboratories, and other institutions. His research investigates how the physics of computation can reduce its fundamental energy cost, a question directly relevant to Extropic’s mission. Frank’s focus on low-energy information processing provides conceptual groundwork for approaches like TSUs.

3. Chris Bishop & Yoshua Bengio (Probabilistic Machine Learning):
Leaders like Bishop and Bengio have shaped the field’s probabilistic foundations, advocating both for deep generative models and for the practical co-design of hardware and algorithms. Their research has stressed the need to reconcile statistical efficiency with computational tractability—a tension at the core of Extropic’s narrative.

4. Alan Turing & John von Neumann (Foundations of Computing):
While not direct contributors to modern machine learning, the legacies of Turing and von Neumann persist in every conversation about alternative architectures and the physical limits of computation. The post-von Neumann and post-Turing trajectory, with a return to analogue, stochastic, or sampling-based circuitry, is directly echoed in Extropic’s work.

5. Recent Industry Visionaries (e.g., Sam Altman, Jensen Huang):
Contemporary leaders in the AI infrastructure space—such as Altman of OpenAI and Huang of Nvidia—have articulated the scale required for AGI and the daunting reality of terawatt-scale compute. Their business strategies rely on the assumption that improved digital hardware will be sufficient, a view McCourt contests with data and physical models.

Strategic & Scientific Context for the Field

  • Core problem: AI’s energy demand is scaling non-linearly. Mass-market AI, naively scaled on today’s architectures, could consume a significant fraction, or even multiples, of the entire global grid.
  • Physics bottlenecks: Improvements in digital logic are bounded by physics: gate capacitance, supply voltage, and the energy cost of irreversible switching. Energy per operation has largely plateaued since around the 10 nm node (a rough comparison against the thermodynamic floor is sketched after this list).
  • Algorithmic evolution: Traditional deep learning is rooted in deterministic matrix computations, but the true statistical nature of intelligence calls for sampling from complex distributions—as foregrounded in Hinton’s work and now implemented in Extropic’s TSUs.
  • Paradigm shift: McCourt and contemporaries argue for a transition to native hardware–software co-design where the core computational primitive is no longer the multiply–accumulate (MAC) operation, but energy-efficient probabilistic sampling.
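As a rough illustration of the physics-bottleneck point above, the snippet below compares the classic dynamic switching energy of CMOS logic (approximately C·V² per full charge/discharge of a gate) with the Landauer bound k·T·ln 2 for irreversibly erasing one bit. The capacitance and voltage values are assumed round figures for an advanced node, chosen only to show the order-of-magnitude gap.

```python
import math

# Assumed round figures for an advanced CMOS node (illustrative only).
C_GATE = 1e-16       # effective switched capacitance per gate, ~0.1 fF
V_DD = 0.7           # supply voltage, volts
K_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature, K

# Dynamic energy dissipated per full charge/discharge of the gate capacitance.
switching_energy = C_GATE * V_DD ** 2

# Landauer limit: minimum energy to irreversibly erase one bit at temperature T.
landauer_limit = K_B * T * math.log(2)

print(f"CMOS switching energy : {switching_energy:.2e} J")
print(f"Landauer limit        : {landauer_limit:.2e} J")
print(f"Ratio                 : {switching_energy / landauer_limit:,.0f}x")
```

The several-orders-of-magnitude gap between the two numbers is what reversible computing (Frank’s line of work) and sampling-native hardware try to close from different directions: conventional irreversible switching sits far above the thermodynamic floor, and shrinking C and V further has become increasingly difficult.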

Summary Insight

Trevor McCourt anchors his cautionary prognosis for AI’s future in rigorous cross-disciplinary insight, from physical hardware limits to probabilistic learning theory. Combining his own engineering experience with the legacy of foundational theorists and contemporary thinkers, he frames the situation not simply as a warning but as an opportunity: a new generation of probabilistic, thermodynamically inspired computers could rewrite the energy economics of artificial intelligence, making “AI for everyone” plausible without grid-scale insanity.
