“The countries that control compute will control AI. You cannot have compute without energy.” – Jonathan Ross, CEO of Groq
Jonathan Ross stands at the intersection of geopolitics, energy economics, and technological determinism. As founder and CEO of Groq, the Silicon Valley firm challenging Nvidia’s dominance in AI infrastructure, Ross articulated a proposition of stark clarity during his September 2025 appearance on Harry Stebbings’ 20VC podcast: “The countries that control compute will control AI. You cannot have compute without energy.”
This observation transcends technical architecture. Ross is describing the emergence of a new geopolitical currency—one where computational capacity, rather than traditional measures of industrial might, determines economic sovereignty and strategic advantage in the 21st century. His thesis rests on an uncomfortable reality: artificial intelligence, regardless of algorithmic sophistication or model architecture, cannot function without the physical substrate of compute. And compute, in turn, cannot exist without abundant, reliable energy.
The Architecture of Advantage
Ross’s perspective derives from direct experience building the infrastructure that powers modern AI. At Google, he initiated what became the Tensor Processing Unit (TPU) project—custom silicon that allowed the company to train and deploy machine learning models at scale. This wasn’t academic research; it was the foundation upon which Google’s AI capabilities were built. When Amazon and Microsoft attempted to recruit him in 2016 to develop similar capabilities, Ross recognised a pattern: the concentration of advanced AI compute in too few hands represented a strategic vulnerability.
His response was to establish Groq in 2016, developing Language Processing Units optimised for inference—the phase where trained models actually perform useful work. The company has since raised over $3 billion and achieved a valuation approaching $7 billion, positioning itself as one of Nvidia’s most credible challengers in the AI hardware market. But Ross’s ambitions extend beyond corporate competition. He views Groq’s mission as democratising access to compute—creating abundant supply where artificial scarcity might otherwise concentrate power.
The quote itself emerged during a discussion about global AI competitiveness. Ross had been explaining why European nations, despite possessing strong research talent and model development capabilities (Mistral being a prominent example), risk strategic irrelevance without corresponding investment in computational infrastructure and energy capacity. A brilliant model without compute to run it, he argued, will lose to a mediocre model backed by ten times the computational resources. This isn’t theoretical—it’s the lived reality of the current AI landscape, where rate limits and inference capacity constraints determine which services can scale and which markets can be served.
The Energy Calculus
The energy dimension of Ross’s statement carries particular weight. Modern AI training and inference require extraordinary amounts of electrical power. The hyperscalers—Google, Microsoft, Amazon, Meta—are each committing tens of billions of dollars annually to AI infrastructure, with significant portions dedicated to data centre construction and energy provision. Microsoft recently announced it wouldn’t make certain GPU clusters available through Azure because the company generated higher returns using that compute internally rather than renting it to customers. This decision, more than any strategic presentation, reveals the economic value density of AI compute.
Ross draws explicit parallels to the early petroleum industry: a period of chaotic exploration where a few “gushers” delivered extraordinary returns whilst most ventures yielded nothing. In this analogy, compute is the new oil—a fundamental input that determines economic output and strategic positioning. But unlike oil, compute demand doesn’t saturate. Ross describes AI demand as “insatiable”: if OpenAI or Anthropic received twice their current inference capacity, their revenue would nearly double within a month. The bottleneck isn’t customer appetite; it’s supply.
This creates a concerning dynamic for nations without indigenous energy abundance or the political will to develop it. Ross specifically highlighted Europe’s predicament: impressive AI research capabilities undermined by insufficient energy infrastructure and regulatory hesitance around nuclear power. He contrasted this with Norway’s renewable capacity (80% wind utilisation) or Japan’s pragmatic reactivation of nuclear facilities—examples of countries aligning energy policy with computational ambition. The message is uncomfortable but clear: technical sophistication in model development cannot compensate for material disadvantage in energy and compute capacity.
Strategic Implications
The geopolitical dimension becomes more acute when considering China’s position. Ross noted that whilst Chinese models like DeepSeek may be cheaper to train (through various optimisations and potential subsidies), they remain more expensive to run at inference—approximately ten times more costly per token generated. This matters because inference, not training, determines scalability and market viability. China can subsidise AI deployment domestically, but globally—what Ross terms the “away game”—cost structure determines competitiveness. Nor can that gap be closed quickly: countries cannot simply construct nuclear plants at will, and energy infrastructure takes decades to build.
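To see how that per-token cost asymmetry compounds, here is a minimal sketch in Python; the daily token volume and per-million-token prices are hypothetical assumptions chosen only to illustrate the roughly tenfold gap Ross describes, not reported figures.

```python
# Illustrative only: all figures below are hypothetical assumptions, not reported numbers.
# Compares the annual serving bill for two providers whose per-token inference costs
# differ by the ~10x factor described in the discussion.

TOKENS_PER_DAY = 50e9                     # assumed daily token volume for a large service
COST_PER_MTOK_A = 0.50                    # assumed cost (USD) per million tokens, provider A
COST_PER_MTOK_B = 10 * COST_PER_MTOK_A    # provider B pays roughly ten times more per token

def annual_cost(tokens_per_day: float, cost_per_mtok: float) -> float:
    """Annual inference bill in USD for a given cost per million tokens."""
    return tokens_per_day / 1e6 * cost_per_mtok * 365

if __name__ == "__main__":
    a = annual_cost(TOKENS_PER_DAY, COST_PER_MTOK_A)
    b = annual_cost(TOKENS_PER_DAY, COST_PER_MTOK_B)
    print(f"Provider A: ${a:,.0f}/yr   Provider B: ${b:,.0f}/yr   gap: ${b - a:,.0f}/yr")
```

At any realistic serving volume the per-token multiple dominates every other consideration, which is why inference economics, rather than training cost, decides who can serve global customers profitably.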
This asymmetry creates opportunity for nations with existing energy advantages. The United States, despite higher nominal costs, benefits from established infrastructure and diverse energy sources. However, Ross’s framework suggests this advantage is neither permanent nor guaranteed. Control over compute requires continuous investment in both silicon capability and energy generation. Nations that fail to maintain pace risk dependency—importing not just technology, but the capacity for economic and strategic autonomy.
The corporate analogy proves instructive. Ross predicts that every major AI company—OpenAI, Anthropic, Google, and others—will eventually develop proprietary chips, not necessarily to outperform Nvidia technically, but to ensure supply security and strategic control. Nvidia currently dominates not purely through superior GPU architecture, but through control of high-bandwidth memory (HBM) supply chains. Building custom silicon allows organisations to diversify supply and avoid allocation constraints that might limit their operational capacity. What applies to corporations applies equally to nations: vertical integration in compute infrastructure is increasingly a prerequisite for strategic autonomy.
The Theorists and Precedents
Ross’s thesis echoes several established frameworks in economic and technological thought, though he synthesises them into a distinctly contemporary proposition.
Harold Innis, the Canadian economic historian, developed the concept of “staples theory” in the 1930s and 1940s—the idea that economies organised around the extraction and export of key commodities (fur, fish, timber, oil) develop institutional structures, trade relationships, and power dynamics shaped by those materials. Innis later extended this thinking to communication technologies in works like Empire and Communications (1950) and The Bias of Communication (1951), arguing that the dominant medium of a society shapes its political and social organisation. Ross’s formulation applies Innisian logic to computational infrastructure: the nations that control the “staples” of the AI economy—energy and compute—will shape the institutional and economic order that emerges.
Carlota Perez, the Venezuelan-British economist, provided a framework for understanding technological revolutions in Technological Revolutions and Financial Capital (2002). Perez identified how major technological shifts (steam power, railways, electricity, mass production, information technology) follow predictable patterns: installation phases characterised by financial speculation and infrastructure building, followed by deployment phases where the technology becomes economically productive. Ross’s observation about current AI investment—massive capital expenditure by hyperscalers, uncertain returns, experimental deployment—maps cleanly onto Perez’s installation phase. The question, implicit in his quote, is which nations will control the infrastructure when the deployment phase arrives and returns become tangible.
W. Brian Arthur, economist and complexity theorist, articulated the concept of “increasing returns” in technology markets through works like Increasing Returns and Path Dependence in the Economy (1994). Arthur demonstrated how early advantages in technology sectors compound through network effects, learning curves, and complementary ecosystems—creating winner-take-most dynamics rather than the diminishing returns assumed in classical economics. Ross’s emphasis on compute abundance follows this logic: early investment in computational infrastructure creates compounding advantages in AI capability, which drives economic returns, which fund further compute investment. Nations entering this cycle late face escalating barriers to entry.
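A stylised simulation makes the winner-take-most dynamic concrete. The allocation rule below is my own illustration in the spirit of Arthur’s increasing-returns models rather than his exact formulation: each new AI workload is won with probability weighted superlinearly by installed compute, so the market tends to tip toward one side rather than settle at proportional shares.

```python
import random

# A stylised sketch of increasing returns (my own illustration, not Arthur's formal model):
# each new workload goes to the provider (or nation) whose installed compute, raised to
# an exponent alpha > 1, gives it the larger attraction weight.

def allocate(initial=(11.0, 10.0), alpha=1.5, steps=10_000, seed=7):
    """Return provider A's final share of workloads after sequential allocation."""
    rng = random.Random(seed)
    a, b = initial
    for _ in range(steps):
        wa, wb = a ** alpha, b ** alpha   # superlinear attraction: advantage compounds
        if rng.random() < wa / (wa + wb):
            a += 1
        else:
            b += 1
    return a / (a + b)

if __name__ == "__main__":
    print(f"Provider A's final share after 10,000 workloads: {allocate():.1%}")
```

With the exponent above 1 the outcome locks in near monopoly and the early leader is the more likely, though not guaranteed, winner; set it to 1 and the head start confers no systematic runaway advantage, which is the difference between ordinary proportional growth and Arthur’s increasing returns.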
Joseph Schumpeter, the Austrian-American economist, introduced the concept of “creative destruction” in Capitalism, Socialism and Democracy (1942)—the idea that economic development proceeds through radical innovation that renders existing capital obsolete. Ross explicitly invokes Schumpeterian dynamics when discussing the risk that next-generation AI chips might render current hardware unprofitable before it amortises. This uncertainty amplifies the strategic calculus: nations must invest in compute infrastructure knowing that technological obsolescence might arrive before economic returns materialise. Yet failing to invest guarantees strategic irrelevance.
William Stanley Jevons, the 19th-century English economist, observed what became known as Jevons Paradox in The Coal Question (1865): as technology makes resource use more efficient, total consumption typically increases rather than decreases because efficiency makes the resource more economically viable for new applications. Ross applies this directly to AI compute, noting that as inference becomes cheaper (through better chips or more efficient models), demand expands faster than costs decline. This means the total addressable market for compute grows continuously—making control over production capacity increasingly valuable.
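The rebound logic can be written down compactly. The constant-elasticity demand curve below is a standard textbook device used here for illustration; the symbols, and the assumption that price falls in step with energy per token, are mine rather than the podcast’s.

```latex
% Constant-elasticity demand for compute: Q(p) tokens demanded at effective price p per
% token, with price elasticity \varepsilon > 0 (illustrative notation).
\[
Q(p) = k\,p^{-\varepsilon}
\]
% Suppose an efficiency gain cuts both the energy per token and the price per token by a
% factor \lambda > 1. Demand and total energy use then change by
\[
\frac{Q(p/\lambda)}{Q(p)} = \lambda^{\varepsilon},
\qquad
\frac{E_{\mathrm{new}}}{E_{\mathrm{old}}}
  = \frac{1}{\lambda}\cdot\frac{Q(p/\lambda)}{Q(p)}
  = \lambda^{\varepsilon - 1}.
\]
% When \varepsilon > 1, cheaper and more efficient inference means more total energy
% consumed: the Jevons regime Ross describes for AI compute.
```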
Nicholas Georgescu-Roegen, the Romanian-American economist, pioneered bioeconomics and introduced entropy concepts to economic analysis in The Entropy Law and the Economic Process (1971). Georgescu-Roegen argued that economic activity is fundamentally constrained by thermodynamic laws—specifically, that all economic processes dissipate energy and cannot be sustained without continuous energy inputs. Ross’s insistence that “you cannot have compute without energy” is pure Georgescu-Roegen: AI systems, regardless of algorithmic elegance, are bound by physical laws. Compute is thermodynamically expensive—training large models requires megawatts; inference at scale requires sustained power generation. Nations without access to abundant energy cannot sustain AI economies, regardless of their talent or capital.
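A back-of-envelope figure makes the thermodynamic point concrete; the cluster size and run length below are assumptions chosen purely for illustration, not numbers from the podcast.

```latex
% Assumed for illustration: a 30 MW training cluster drawing full power for a 90-day run.
\[
E = P \times t = 30\,\mathrm{MW} \times (90 \times 24)\,\mathrm{h}
  = 64{,}800\,\mathrm{MWh} \approx 65\,\mathrm{GWh}.
\]
% At roughly 10.5 MWh of electricity per US household per year, that single run window
% consumes about as much electricity as 6,000 homes use in a year, before a single
% inference request is served.
```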
Mancur Olson, the American economist and political scientist, explored collective action problems and the relationship between institutional quality and economic outcomes in works like The Rise and Decline of Nations (1982). Olson demonstrated how established interest groups can create institutional sclerosis that prevents necessary adaptation. Ross’s observations about European regulatory hesitance and infrastructure underinvestment reflect Olsonian dynamics: incumbent energy interests, environmental lobbies, and risk-averse political structures prevent the aggressive nuclear or renewable expansion required for AI competitiveness. Meanwhile, nations with different institutional arrangements (or greater perceived strategic urgency) act more decisively.
Paul Romer, the American economist and Nobel laureate, developed endogenous growth theory, arguing in works like “Endogenous Technological Change” (1990) that economic growth derives from deliberate investment in knowledge and technology rather than external factors. Romer’s framework emphasises the non-rivalry of ideas (knowledge can be used by multiple actors simultaneously) but the rivalry of physical inputs required to implement them. Ross’s thesis fits perfectly: AI algorithms can be copied and disseminated, but the computational infrastructure to deploy them at scale cannot. This creates a fundamental asymmetry that determines economic power.
The Historical Pattern
History provides sobering precedents for resource-driven geopolitical competition. Britain’s dominance in the 19th century rested substantially on coal abundance that powered industrial machinery and naval supremacy. The United States’ 20th-century ascendance correlated with petroleum access and the industrial capacity to refine and deploy it. Oil-dependent economies in the Middle East gained geopolitical leverage disproportionate to their population or industrial capacity purely through energy reserves.
Ross suggests we are witnessing the emergence of a similar dynamic, but with a critical difference: AI compute is both resource-intensive (requiring enormous energy) and productivity-amplifying (making other economic activity more efficient). This creates a multiplicative effect where compute advantages compound through both direct application (better AI services) and indirect effects (more efficient production of goods and services across the economy). A nation with abundant compute doesn’t just have better chatbots—it has more efficient logistics, agricultural systems, manufacturing processes, and financial services.
The “away game” concept Ross introduced during the podcast discussion adds a critical dimension. China, despite substantial domestic AI investment and capabilities, faces structural disadvantages in global competition because the subsidised energy and infrastructure that lower its costs at home do not extend to deployments abroad. This creates opportunities for nations with more favourable cost structures or energy profiles, but only if they invest in both compute capacity and energy generation.
The Future Ross Envisions
Throughout the podcast, Ross painted a vision of AI-driven abundance that challenges conventional fears of technological unemployment. He predicts labour shortages, not mass unemployment, driven by three mechanisms: deflationary pressure (AI makes goods and services cheaper), workforce opt-out (people work less as living costs decline), and new industry creation (entirely new job categories emerge, like “vibe coding”—programming through natural language rather than formal syntax).
This optimistic scenario depends entirely on computational abundance. If compute remains scarce and concentrated, AI benefits accrue primarily to those controlling the infrastructure. Ross’s mission with Groq—creating faster deployment cycles (six months versus two years for GPUs), operating globally distributed data centres, optimising for cost efficiency rather than margin maximisation—aims to prevent that concentration. But the same logic applies at the national level. Countries without indigenous compute capacity will import AI services, capturing some productivity benefits but remaining dependent on external providers for the infrastructure that increasingly underpins economic activity.
The comparison Ross offers—LLMs as “telescopes of the mind”—is deliberately chosen. Galileo’s telescope revolutionised human understanding but required specific material capabilities to construct and use. Nations without optical manufacturing capacity could not participate in astronomical discovery. Similarly, nations without computational and energy infrastructure cannot participate fully in the AI economy, regardless of their algorithmic sophistication or research talent.
Conclusion
Ross’s statement—“The countries that control compute will control AI. You cannot have compute without energy”—distils a complex geopolitical and economic reality into stark clarity. It combines Innisian materialism (infrastructure determines power), Schumpeterian dynamism (innovation renders existing capital obsolete), Jevonsian counterintuition (efficiency increases total consumption), and Georgescu-Roegen’s thermodynamic constraints (economic activity requires energy dissipation).
The implications are uncomfortable for nations unprepared to make the necessary investments. Technical prowess in model development provides no strategic moat if the computational infrastructure to deploy those models remains controlled elsewhere. Energy abundance, or the political will to develop it, becomes a prerequisite for AI sovereignty. And AI sovereignty increasingly determines economic competitiveness across sectors.
Ross occupies a unique vantage point—neither pure academic nor disinterested observer, but an operator building the infrastructure that will determine whether his prediction proves correct. Groq’s valuation and customer demand suggest the market validates his thesis. Whether nations respond with corresponding urgency remains an open question. But the framework Ross articulates will likely define strategic competition for the remainder of the decade: compute as currency, energy as prerequisite, and algorithmic sophistication as necessary but insufficient for competitive advantage.

