
News and Tools

Breaking Business News

 

Our selection of the top business news sources on the web.

Quote: Naval Ravikant – Venture Capitalist

“UI is pre-AI.” – Naval Ravikant – Venture Capitalist

Naval Ravikant stands as one of Silicon Valley’s most influential yet unconventional thinkers—a figure who bridges the gap between pragmatic entrepreneurship and philosophical inquiry. His observation that “UI is pre-AI” reflects a distinctive perspective on technological evolution that warrants careful examination, particularly given his track record as an early-stage investor in transformative technologies.

The Architect of Modern Startup Infrastructure

Ravikant’s influence on the technology landscape extends far beyond individual company investments. As co-founder, chairman, and former CEO of AngelList, he fundamentally altered how early-stage capital flows through the startup ecosystem. AngelList democratised access to venture funding, creating infrastructure that connected aspiring entrepreneurs with angel investors and venture capital firms on an unprecedented scale. This wasn’t merely a business achievement; it represented a structural shift in how innovation gets financed globally.

His investment portfolio reflects prescient timing and discerning judgement. Ravikant invested early in companies including Twitter, Uber, Foursquare, Postmates, Yammer, and Stack Overflow—investments that collectively generated over 70 exits and more than 10 unicorn companies. This track record positions him not as a lucky investor, but as someone with genuine pattern recognition capability regarding which technologies would matter most.

Beyond the Venture Capital Thesis

What distinguishes Ravikant from conventional venture capitalists is his deliberate rejection of the traditional founder mythology. He explicitly advocates against the “hustle mentality” that dominates startup culture, instead promoting a more holistic conception of wealth that encompasses time, freedom, and peace of mind alongside financial returns. This philosophy shapes how he evaluates opportunities and mentors founders—he considers not merely whether a business will scale, but whether it will scale without scaling stress.

His broader intellectual contributions extend through multiple channels. With more than 2.4 million followers on Twitter (X), Ravikant regularly shares aphoristic insights blending practical wisdom with Eastern philosophical traditions. His appearances on influential podcasts, particularly the Tim Ferriss Show and Joe Rogan Experience, have introduced his thinking to audiences far beyond Silicon Valley. Most notably, his “How to Get Rich (without getting lucky)” thread has become foundational reading across technology and business communities, articulating principles around leverage through code, capital, and content.

Understanding “UI is Pre-AI”

The quote “UI is pre-AI” requires interpretation within Ravikant’s broader intellectual framework and the contemporary technological landscape. The statement operates on multiple levels simultaneously.

The Literal Interpretation: User interface design and development necessarily precedes artificial intelligence implementation in most technology products. This reflects a practical observation about product development sequencing—one must typically establish how users interact with systems before embedding intelligent automation into those interactions. In this sense, the UI is the foundational layer upon which AI capabilities are subsequently layered.
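
Read literally, the sequencing can be sketched in code (all names here are hypothetical and not drawn from any product Ravikant has backed): the interaction contract is designed and shipped against a non-AI provider first, and an AI-backed provider can later be slotted behind the same interface without the interface changing.

```python
from typing import Protocol


class SuggestionProvider(Protocol):
    """Anything that can turn the user's partial input into suggestions."""

    def suggest(self, partial_input: str) -> list[str]: ...


class RuleBasedProvider:
    """Pre-AI baseline: simple prefix matching over a fixed vocabulary."""

    def __init__(self, vocabulary: list[str]) -> None:
        self.vocabulary = vocabulary

    def suggest(self, partial_input: str) -> list[str]:
        return [w for w in self.vocabulary if w.startswith(partial_input)][:5]


class SearchBox:
    """The UI layer, defined before (and independently of) any AI backend."""

    def __init__(self, provider: SuggestionProvider) -> None:
        self.provider = provider

    def on_keystroke(self, partial_input: str) -> list[str]:
        # The interaction contract stays fixed; only the provider behind it changes.
        return self.provider.suggest(partial_input)


# Ship the interface first with a rule-based provider...
box = SearchBox(RuleBasedProvider(["invoice", "investor", "inventory"]))
print(box.on_keystroke("inv"))

# ...then, later, swap in a model-backed provider that satisfies the same
# Protocol, without touching SearchBox at all.
```

The point of the sketch is only the ordering: the SearchBox exists, and is usable, before anything intelligent sits behind it.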

The Philosophical Dimension: More provocatively, the statement suggests that how we structure human-computer interaction through interface design fundamentally shapes the possibilities for what artificial intelligence can accomplish. The interface isn’t merely a presentation layer; it represents the primary contact point between human intent and computational capability. Before AI can be genuinely useful, the interface must make that utility legible and accessible to end users.

The Investment Perspective: For Ravikant specifically, this observation carries investment implications. It suggests that companies solving user experience problems will likely remain valuable even as AI capabilities evolve, whereas companies that focus purely on algorithmic sophistication without considering user interaction may find their innovations trapped in laboratory conditions rather than deployed in markets.

The Theoretical Lineage

Ravikant’s observation sits within a longer intellectual tradition examining the relationship between interface, interaction, and technological capability.

Don Norman and Human-Centered Design: The foundational modern work on this subject emerged from Don Norman’s research at the University of California, San Diego, particularly his seminal book The Design of Everyday Things. Norman argued that excellent product design requires intimate understanding of human cognition, perception, and behaviour. Before any technological system—intelligent or otherwise—can create value, it must accommodate human limitations and leverage human strengths through thoughtful interface design.

Douglas Engelbart and Augmentation Philosophy: Douglas Engelbart’s mid-twentieth-century work on human-computer augmentation established that technology’s primary purpose should be extending human capability rather than replacing human judgment. His thinking implied that interfaces represent the crucial bridge between human cognition and computational power. Without well-designed interfaces, the most powerful computational systems remain inert.

Alan Kay and Dynabook Vision: Alan Kay’s vision of personal computing—articulated through concepts like the Dynabook—emphasised that technology’s democratising potential depends entirely on interface accessibility. Kay recognised that computational power matters far less than whether ordinary people can productively engage with that power through intuitive interaction models.

Contemporary HCI Research: Modern human-computer interaction research builds on these foundations, examining how interface design shapes which problems users attempt to solve and how they conceptualise solutions. Researchers like Shneiderman and Plaisant have demonstrated empirically that interface design decisions have second-order effects on what users believe is possible with technology.

The Contemporary Context

Ravikant’s statement carries particular resonance in the current artificial intelligence moment. As organisations rush to integrate large language models and other AI systems into products, many commit what might be called “technology-first” errors—embedding sophisticated algorithms into user experiences that haven’t been thoughtfully designed to accommodate them.

Meaningful user interface design for AI-powered systems requires addressing distinct challenges: How do users understand what an AI system can and cannot do? How is uncertainty communicated? How are edge cases handled? What happens when the AI makes errors? These questions cannot be answered through better algorithms alone; they require interface innovation.
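
One way to see why these are interface questions rather than algorithm questions is a small sketch (hypothetical names; the confidence thresholds are arbitrary): the same model output is presented differently depending on how certain the system is, including an explicit refusal path for likely errors.

```python
from dataclasses import dataclass


@dataclass
class ModelAnswer:
    text: str
    confidence: float  # assumed to be a calibrated score in [0, 1]


def render(answer: ModelAnswer) -> str:
    """Decide how the interface presents an AI answer, not just what it says."""
    if answer.confidence < 0.3:
        # Edge case / likely error: the interface declines rather than guessing.
        return "I'm not confident enough to answer this. Try rephrasing, or ask a person."
    if answer.confidence < 0.7:
        # Communicate uncertainty instead of hiding it.
        return f"Possibly: {answer.text} (low confidence — please verify)"
    return answer.text


print(render(ModelAnswer("The invoice total is 1,240.", 0.55)))
```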

Ravikant’s observation thus functions as a corrective to the current technological moment. It suggests that the companies genuinely transforming industries through artificial intelligence will likely be those that simultaneously innovate in both algorithmic capability and user interface design. The interface becomes pre-AI not merely chronologically but causally—shaping what artificial intelligence can accomplish in practice rather than merely in principle.

Investment Philosophy Integration

This observation aligns with Ravikant’s broader investment thesis emphasising leverage and scalability. An excellent user interface represents exactly this kind of leverage—it scales human attention and human decision-making without requiring proportional increases in effort or resources. Similarly, artificial intelligence scaled through well-designed interfaces amplifies this effect, allowing individual users or organisations to accomplish work that previously required teams.

Ravikant’s focus on investments at seed and Series A stages across media, content, cloud infrastructure, and AI reflects implicit confidence that the foundational layer of how humans interact with technology remains unsettled terrain. Rather than assuming interface design has been solved, he appears to recognise that each new technological capability—whether cloud infrastructure or artificial intelligence—creates new design challenges and opportunities.

The quote ultimately encapsulates a distinctive investment perspective: that attention to human interaction, to aesthetics, to usability, represents not secondary ornamentation but primary technological strategy. In an era of intense focus on algorithmic sophistication, Ravikant reminds us that the interface through which those algorithms engage with human needs and human judgment represents the true frontier of technological value creation.

Quote: Ilya Sutskever – Safe Superintelligence

“The robustness of people is really staggering.” – Ilya Sutskever – Safe Superintelligence

This statement, made in his November 2025 conversation with Dwarkesh Patel, comes from someone uniquely positioned to make such judgments: co-founder and Chief Scientist of Safe Superintelligence Inc., former Chief Scientist at OpenAI, and co-author of AlexNet—the 2012 paper that launched the modern deep learning era.

Sutskever’s claim about robustness points to something deeper than mere durability or fault tolerance. He is identifying a distinctive quality of human learning: the ability to function effectively across radically diverse contexts, to self-correct without explicit external signals, to maintain coherent purpose and judgment despite incomplete information and environmental volatility, and to do all this with sparse data and limited feedback. These capacities are not incidental features of human intelligence. They are central to what makes human learning fundamentally different from—and vastly superior to—current AI systems.

Understanding what Sutskever means by robustness requires examining not just human capabilities but the specific ways in which AI systems are fragile by comparison. It requires recognising what humans possess that machines do not. And it requires understanding why this gap matters profoundly for the future of artificial intelligence.

What Robustness Actually Means: Beyond Mere Reliability

In engineering and systems design, robustness typically refers to a system’s ability to continue functioning when exposed to perturbations, noise, or unexpected conditions. A robust bridge continues standing despite wind, earthquakes, or traffic loads beyond its design specifications. A robust algorithm produces correct outputs despite noisy inputs or computational errors.

But human robustness operates on an entirely different plane. It encompasses far more than mere persistence through adversity. Human robustness includes:

  1. Flexible adaptation across domains: A teenager learns to drive after ten hours of practice and then applies principles of vehicle control, spatial reasoning, and risk assessment to entirely new contexts—motorcycles, trucks, parking in unfamiliar cities. The principles transfer because they have been learned at a level of abstraction and generality that allows principled application to novel situations.
  2. Self-correction without external reward: A learner recognises when they have made an error not through explicit feedback but through an internal sense of rightness or wrongness—what Sutskever terms a “value function” and what we experience as intuition, confidence, or unease. A pianist knows immediately when they have struck a wrong note; they do not need external evaluation. This internal evaluative system enables rapid, efficient learning.
  3. Judgment under uncertainty: Humans routinely make decisions with incomplete information, tolerating ambiguity whilst maintaining coherent action. A teenager drives defensively not because they can compute precise risk probabilities but because they possess an internalised model of danger, derived from limited experience but somehow applicable to novel situations.
  4. Stability across time scales: Human goals, values, and learning integrate across vastly different temporal horizons. A person may pursue long-term education goals whilst adapting to immediate challenges, and these different time scales cohere into a unified, purposeful trajectory. This temporal integration is largely absent from current AI systems, which optimise for immediate reward signals or fixed objectives.
  5. Learning from sparse feedback: Humans learn from remarkably little data. A child sees a dog once or twice and thereafter recognises dogs in novel contexts, even in stylised drawings or unfamiliar breeds. This learning from sparse examples contrasts sharply with AI systems requiring thousands or millions of examples to achieve equivalent recognition.

This multifaceted robustness is what Sutskever identifies as “staggering”—not because it is strong but because it operates across so many dimensions simultaneously whilst remaining stable, efficient, and purposeful.

The Fragility of Current AI: Why Models Break

The contrast becomes clear when examining where current AI systems are fragile. Sutskever frequently illustrates this through the “jagged behaviour” problem: models that perform at superhuman levels on benchmarks yet fail in elementary ways during real-world deployment.

A language model can score in the 88th percentile on the bar examination yet, when asked to debug code, introduce new errors whilst fixing previous ones. It cycles between mistakes even when provided clear feedback. It lacks the internal evaluative sense that tells a human programmer, “This approach is leading nowhere; I should try something different.” The model lacks robust value functions—internal signals that guide learning and action.

This fragility manifests across multiple dimensions:

  1. Distribution shift fragility: Models trained on one distribution of data often fail dramatically when confronted with data that differs from training distribution, even slightly. A vision system trained on images with certain lighting conditions fails on images with different lighting. A language model trained primarily on Western internet text struggles with cultural contexts it has not heavily encountered. Humans, by contrast, maintain competence across remarkable variation—different languages, accents, cultural contexts, lighting conditions, perspectives.
  2. Benchmark overfitting: Contemporary AI systems achieve extraordinary performance on carefully constructed evaluation tasks yet fail at the underlying capability the benchmark purports to measure. This occurs because models have been optimised (through reinforcement learning) specifically to perform well on benchmarks rather than to develop robust understanding. Sutskever has noted that this reward hacking is often unintentional—companies genuinely seeking to improve models inadvertently create RL environments that optimise for benchmark performance rather than genuine capability.
  3. Lack of principled abstraction: Models often memorise patterns rather than developing principled understanding. This manifests as inability to apply learned knowledge to genuinely novel contexts. A model may solve thousands of addition problems yet fail on a slightly different formulation it has not encountered. A human, having understood addition as a principle, applies it to any context where addition is relevant.
  4. Absence of internal feedback mechanisms: Current reinforcement learning typically provides feedback only at the end of long trajectories. A model can pursue 1,000 steps of reasoning down an unpromising path, only to receive a training signal after the trajectory completes. Humans, by contrast, possess continuous internal feedback—emotions, intuition, confidence levels—that signal whether reasoning is productive or should be redirected. This enables far more efficient learning.

The Value Function Hypothesis: Emotions as Robust Learning Machinery

Sutskever’s analysis points toward a crucial hypothesis: human robustness depends fundamentally on value functions—internal mechanisms that provide continuous, robust evaluation of states and actions.

In machine learning, a value function is a learned estimate of expected future reward or utility from a given state. In human neurobiology, value functions are implemented, Sutskever argues, through emotions and affective states. Fear signals danger. Confidence signals competence. Boredom signals that current activity is unproductive. Satisfaction signals that effort has succeeded. These emotional states, which evolution has refined over millions of years, serve as robust evaluative signals that guide learning and behaviour.
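
In reinforcement-learning notation this is the function V(s), an estimate of expected future reward from state s, typically learned with temporal-difference updates. A toy sketch, with made-up states and rewards:

```python
# Toy state space; value estimates start at zero.
states = ["start", "middle", "goal"]
V = {s: 0.0 for s in states}

alpha, gamma = 0.1, 0.9  # learning rate and discount factor


def step(state: str) -> tuple[str, float]:
    """Hypothetical environment: move right, reward arrives only at the goal."""
    if state == "start":
        return "middle", 0.0
    return "goal", 1.0


for _ in range(500):
    s = "start"
    while s != "goal":
        s_next, r = step(s)
        # TD(0) update: nudge V(s) toward the observed reward plus
        # the discounted estimate of the next state's value.
        V[s] += alpha * (r + gamma * V[s_next] - V[s])
        s = s_next

print(V)  # V("middle") approaches 1.0, V("start") approaches 0.9
```

The update rule nudges each state's value toward the reward actually observed plus the discounted value of where the agent ended up, a mechanical analogue of the continuous internal evaluation described above.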

Sutskever illustrates this with a striking neurological case: a person who suffered brain damage affecting emotional processing. Despite retaining normal IQ, puzzle-solving ability, and articulate cognition, this person became radically incapable of making even trivial decisions. Choosing which socks to wear would take hours. Financial decisions became catastrophically poor. This person could think but could not effectively decide or act—suggesting that emotions (and the value functions they implement) are not peripheral to human cognition but absolutely central to effective agency.

What makes human value functions particularly robust is their simplicity and stability. They are not learned during a person’s lifetime through explicit training. They are evolved, hard-coded by billions of years of biological evolution into neural structures that remain remarkably consistent across human populations and contexts. A person experiences hunger, fear, social connection, and achievement similarly whether in ancient hunter-gatherer societies or modern industrial ones—because these value functions were shaped by evolutionary pressures that remained relatively stable.

This evolutionary hardcoding of value functions may be crucial to human learning robustness. Imagine trying to teach a child through explicit reward signals alone: “Do this task and receive points; optimise for points.” This would be inefficient and brittle. Instead, humans learn through value functions that are deeply embedded, emotionally weighted, and robust across contexts. A child learns to speak not through external reward optimisation but through intrinsic motivation—social connection, curiosity, the inherent satisfaction of communication. These motivations persist across contexts and enable robust learning.

Current AI systems largely lack this. They optimise for explicitly defined reward signals or benchmark metrics. These are fragile by comparison—vulnerable to reward hacking, overfitting, distribution shift, and the brittle transfer failures Sutskever observes.

Why This Matters Now: The Transition Point

Sutskever’s observation about human robustness arrives at a precise historical moment. As of November 2025, the AI industry is transitioning from what he terms the “age of scaling” (2020–2025) to what will be the “age of research” (2026 onward). This transition is driven by recognition that scaling alone is reaching diminishing returns. The next advances will require fundamental breakthroughs in understanding how to build systems that learn and adapt robustly—like humans do.

This creates an urgent research agenda: How do you build AI systems that possess human-like robustness? This is not a question that scales with compute or data. It is a research question—requiring new architectures, learning algorithms, training procedures, and conceptual frameworks.

Sutskever’s identification of robustness as the key distinguishing feature of human learning sets the research direction for the next phase of AI development. The question is not “how do we make bigger models” but “how do we build systems with value functions that enable efficient, self-correcting, context-robust learning?”

The Research Frontier: Leading Theorists Addressing Robustness

Antonio Damasio: The Somatic Marker Hypothesis

Antonio Damasio, neuroscientist at USC and authority on emotion and decision-making, has developed the somatic marker hypothesis—a framework explaining how emotions serve as rapid evaluative signals that guide decisions and learning. Damasio’s work provides neuroscientific grounding for Sutskever’s hypothesis that value functions (implemented as emotions) are central to effective agency. Damasio’s case studies of patients with emotional processing deficits closely parallel Sutskever’s neurological example—demonstrating that emotional value functions are prerequisites for robust, adaptive decision-making.

Judea Pearl: Causal Models and Robust Reasoning

Judea Pearl, pioneer in causal inference and probabilistic reasoning, has argued that correlation-based learning has fundamental limits and that robust generalisation requires learning causal structure—the underlying relationships between variables that remain stable across contexts. Pearl’s work suggests that human robustness derives partly from learning causal models rather than mere patterns. When a human understands how something works (causally), that understanding transfers to novel contexts. Current AI systems, lacking robust causal models, fail at transfer—a key component of robustness.

Karl Friston: The Free Energy Principle

Karl Friston, neuroscientist at University College London, has developed the free energy principle—a unified framework explaining how biological systems, including humans, maintain robustness by minimising prediction error and maintaining models of their environment and themselves. The principle suggests that what makes humans robust is not fixed programming but a general learning mechanism that continuously refines internal models to reduce surprise. This framework has profound implications for building robust AI: rather than optimising for external rewards, systems should optimise for maintaining accurate models of reality, enabling principled generalisation.

Stuart Russell: Learning Under Uncertainty and Value Alignment

Stuart Russell, UC Berkeley’s leading AI safety researcher, has emphasised that robust AI systems must remain genuinely uncertain about objectives and learn from interaction rather than operating under fixed goal specifications. Russell’s work suggests that rigidity about objectives makes systems fragile—vulnerable to reward hacking and context-specific failure. Robustness requires systems that maintain epistemic humility and adapt their understanding of what matters based on continued learning. This directly parallels how human value systems are robust: they are not brittle doctrines but evolving frameworks that integrate experience.

Demis Hassabis and DeepMind’s Continual Learning Research

Demis Hassabis, CEO of DeepMind, has invested substantial effort into systems that learn continuously from environmental interaction rather than through discrete offline training phases. DeepMind’s research on continual reinforcement learning, meta-learning, and adaptive systems reflects the insight that robustness emerges not from static pre-training but from ongoing interaction with environments—enabling systems to refine their models and value functions continuously. This parallels human learning, which is fundamentally continual rather than episodic.

Yann LeCun: Self-Supervised Learning and World Models

Yann LeCun, Meta’s Chief AI Scientist, has advocated for learning approaches that enable systems to build internal models of how the world works—what he terms world models—through self-supervised learning. LeCun argues that robust generalisation requires systems that understand causal structure and dynamics, not merely correlations. His work on self-supervised learning suggests that systems trained to predict and model their environments develop more robust representations than systems optimised for specific external tasks.

The Evolutionary Basis: Why Humans Have Robust Value Functions

Understanding human robustness requires appreciating why evolution equipped humans with sophisticated, stable value function systems.

For millions of years, humans and our ancestors faced fundamentally uncertain environments. The reward signals available—immediate sensory feedback, social acceptance, achievement, safety—needed to guide learning and behaviour across vast diversity of contexts. Evolution could not hard-code specific solutions for every possible situation. Instead, it encoded general-purpose value functions—emotions and motivational states—that would guide adaptive behaviour across contexts.

Consider fear. Fear is a robust value function signal that something is dangerous. This signal evolved in environments full of predators and hazards. Yet the same fear response that protected ancestral humans from predators also keeps modern humans safe from traffic, heights, and social rejection. The value function is robust because it operates on a general principle—danger—rather than specific memorised hazards.

Similarly, social connection, curiosity, achievement, and other human motivations evolved as general-purpose signals that, across millions of years, correlated with survival and reproduction. They remain remarkably stable across radically different modern contexts—different cultures, technologies, and social structures—because they operate at a level of abstraction robust to context change.

Current AI systems, by contrast, lack this evolutionary heritage. They are trained from scratch, often on specific tasks, with reward signals explicitly engineered for those tasks. These reward signals are fragile by comparison—vulnerable to distribution shift, overfitting, and context-specificity.

Implications for Safe AI Development

Sutskever’s emphasis on human robustness carries profound implications for safe AI development. Robust systems are safer systems. A system with genuine value functions—robust internal signals about what matters—is less vulnerable to reward hacking, specification gaming, or deployment failures. A system that learns continuously and maintains epistemic humility is more likely to remain aligned as its capabilities increase.

Conversely, current AI systems’ lack of robustness is dangerous. Systems optimised for narrow metrics can fail catastrophically when deployed in novel contexts. Systems lacking robust value functions cannot self-correct or maintain appropriate caution. Systems that cannot learn from deployment feedback remain brittle.

Building AI systems with human-like robustness is therefore not merely an efficiency question—though efficiency matters greatly. It is fundamentally a safety question. The development of robust value functions, continual learning capabilities, and general-purpose evaluative mechanisms is central to ensuring that advanced AI systems remain beneficial as they become more powerful.

The Research Direction: From Scaling to Robustness

Sutskever’s observation that “the robustness of people is really staggering” reorients the entire research agenda. The question is no longer primarily “how do we scale?” but “how do we build systems with robust value functions, efficient learning, and genuine adaptability across contexts?”

This requires:

  • Architectural innovation: New neural network structures that embed or can learn robust evaluative mechanisms—value functions analogous to human emotions.
  • Training methodology: Learning procedures that enable systems to develop genuine self-correction capabilities, learn from sparse feedback, and maintain robustness across distribution shift.
  • Theoretical understanding: Deeper mathematical and conceptual frameworks explaining what makes value functions robust and how to implement them in artificial systems.
  • Integration of findings from neuroscience, evolutionary biology, and decision theory: Drawing on multiple fields to understand the principles underlying human robustness and translating them into machine learning.

Conclusion: Robustness as the Frontier

When Sutskever identifies human robustness as “staggering,” he is not offering admiration but diagnosis. He is pointing out that current AI systems fundamentally lack what makes humans effective learners: robust value functions, efficient learning from sparse feedback, genuine self-correction, and adaptive generalisation across contexts.

The next era of AI research—the age of research beginning in 2026—will be defined largely by attempts to solve this problem. The organisation or research group that successfully builds AI systems with human-like robustness will not merely have achieved technical progress. They will have moved substantially closer to systems that learn efficiently, generalise reliably, and remain aligned to human values even as they become more capable.

Human robustness is not incidental. It is fundamental—the quality that makes human learning efficient, adaptive, and safe. Replicating it in artificial systems represents the frontier of AI research and development.

Quote: Ilya Sutskever – Safe Superintelligence

“These models somehow just generalize dramatically worse than people. It’s super obvious. That seems like a very fundamental thing.” – Ilya Sutskever – Safe Superintelligence

Sutskever, as co-founder and Chief Scientist of Safe Superintelligence Inc. (SSI), has emerged as one of the most influential voices in AI strategy and research direction. His trajectory illustrates the depth of his authority: co-author of AlexNet (2012), the paper that ignited the deep learning revolution; Chief Scientist at OpenAI during the development of GPT-2 and GPT-3; and now directing a $3 billion research organisation explicitly committed to solving the generalisation problem rather than pursuing incremental scaling.

His assertion about generalisation deficiency is not rhetorical flourish. It represents a fundamental diagnostic claim about why current AI systems, despite superhuman performance on benchmarks, remain brittle, unreliable, and poorly suited to robust real-world deployment. Understanding this claim requires examining what generalisation actually means, why it matters, and what the gap between human and AI learning reveals about the future of artificial intelligence.

What Generalisation Means: Beyond Benchmark Performance

Generalisation, in machine learning, refers to the ability of a system to apply knowledge learned in one context to novel, unfamiliar contexts it has not explicitly encountered during training. A model that generalises well can transfer principles, patterns, and capabilities across domains. A model that generalises poorly becomes a brittle specialist—effective within narrow training distributions but fragile when confronted with variation, novelty, or real-world complexity.
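
Operationally, the gap shows up when the same model is scored on data drawn from its training distribution and on data drawn from a shifted one. A minimal illustration with scikit-learn and synthetic data (none of this comes from Sutskever’s own examples):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# In-distribution training data: two Gaussian clusters.
X_train = np.vstack([rng.normal(0, 1, (500, 2)), rng.normal(3, 1, (500, 2))])
y_train = np.array([0] * 500 + [1] * 500)

model = LogisticRegression().fit(X_train, y_train)

# In-distribution test set: same generating process as training.
X_iid = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(3, 1, (200, 2))])
y_iid = np.array([0] * 200 + [1] * 200)

# Shifted test set: the same classes under a covariate shift the model
# never saw during training (every point translated by a constant offset).
X_shift = X_iid + np.array([3.0, 3.0])

print("in-distribution accuracy:", model.score(X_iid, y_iid))
print("shifted accuracy:        ", model.score(X_shift, y_iid))
```

A learner with the kind of transferable understanding described above would barely notice the shift; in this toy setup, the classifier, which has only memorised where the training clusters sit, loses roughly half its accuracy.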

The crisis Sutskever identifies is this: contemporary large language models and frontier AI systems achieve extraordinary performance on carefully curated evaluation tasks and benchmarks. GPT-4 scores in the 88th percentile of the bar exam. OpenAI’s o1 solves competition mathematics problems at elite levels. Yet these same systems, when deployed into unconstrained real-world workflows, exhibit what Sutskever terms “jagged” behaviour—they repeat errors, introduce new bugs whilst fixing previous ones, cycle between mistakes even with clear corrective feedback, and fail in ways that suggest fundamentally incomplete understanding rather than mere data scarcity.

This paradox reveals a hidden truth: benchmark performance and deployment robustness are not tightly coupled. An AI system can memorise, pattern-match, and perform well on evaluation metrics whilst failing to develop the kind of flexible, transferable understanding that enables genuine competence.

The Sample Efficiency Question: Orders of Magnitude of Difference

Underlying the generalisation crisis is a more specific puzzle: sample efficiency. Why does it require vastly more training data for AI systems to achieve competence in a domain than it takes humans?

A human child learns to recognise objects through a few thousand exposures. Contemporary vision models require millions. A teenager learns to drive in approximately ten hours of practice; AI systems struggle to achieve equivalent robustness with orders of magnitude more training. A university student learns to code, write mathematically, and reason about abstract concepts—domains that did not exist during human evolutionary history—with remarkably few examples and little explicit feedback.

This disparity points to something fundamental: humans possess not merely better priors or more specialised knowledge, but better general-purpose learning machinery. The principle underlying human learning efficiency remains largely unexpressed in mathematical or computational terms. Current AI systems lack it.

Sutskever’s diagnostic claim is that this gap reflects not engineering immaturity or the need for more compute, but the absence of a conceptual breakthrough—a missing principle of how to build systems that learn as efficiently as humans do. The implication is stark: you cannot scale your way out of this problem. More data and more compute, applied to existing methodologies, will not solve it. The bottleneck is epistemic, not computational.

Why Current Models Fail at Generalisation: The Competitive Programming Analogy

Sutskever illustrates the generalisation problem through an instructive analogy. Imagine two competitive programmers:

Student A dedicates 10,000 hours to competitive programming. They memorise every algorithm, every proof technique, every problem pattern. They become exceptionally skilled within competitive programming itself—one of the very best.

Student B spends only 100 hours on competitive programming but develops deeper, more flexible understanding. They grasp underlying principles rather than memorising solutions.

When both pursue careers in software engineering, Student B typically outperforms Student A. Why? Because Student A has optimised for a narrow domain and lacks the flexible transfer of understanding that Student B developed through lighter but more principled engagement.

Current frontier AI models, in Sutskever’s assessment, resemble Student A. They are trained on enormous quantities of narrowly curated data—competitive programming problems, benchmark evaluation tasks, reinforcement learning environments explicitly designed to optimise for measurable performance. They have been “over-trained” on carefully optimised domains but lack the flexible, generalised understanding that enables robust performance in novel contexts.

This over-optimisation problem is compounded by a subtle but crucial factor: reinforcement learning optimisation targets. Companies designing RL training environments face substantial degrees of freedom in how to construct reward signals. Sutskever observes that there is often a systematic bias: RL environments are subtly shaped to ensure models perform well on public benchmarks at release time, creating a form of unintentional reward hacking where the system becomes highly tuned to evaluation metrics rather than genuinely robust to real-world variation.

The Deeper Problem: Pre-Training’s Limits and RL’s Inefficiency

The generalisation crisis reflects deeper structural issues within contemporary AI training paradigms.

Pre-training’s opacity: Large-scale language model pre-training on internet text provides models with an enormous foundation of patterns. Yet the way models rely on this pre-training data is poorly understood. When a model fails, it is unclear whether the failure reflects insufficient statistical support in the training distribution or whether something more fundamental is missing. Pre-training provides scale, but at the cost of making it difficult to reason about what has actually been learned.

RL’s inefficiency: Current reinforcement learning approaches provide training signals only at the end of long trajectories. If a model spends thousands of steps reasoning about a problem and arrives at a dead end, it receives no signal until the trajectory completes. This is computationally wasteful. A more efficient learning system would provide intermediate evaluative feedback—signals that say, “this direction of reasoning is unpromising; abandon it now rather than after 1,000 more steps.” Sutskever hypothesises that this intermediate feedback mechanism—what he terms a “value function” and what evolutionary biology has encoded as emotions—is crucial to sample-efficient learning.
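
The difference can be caricatured in a few lines of Python (a toy random-walk “task” and a hand-written value estimate, standing in for the learned machinery Sutskever describes): with only a terminal reward the rollout burns its whole step budget before producing any signal, whereas an intermediate value estimate lets it abandon an unpromising trajectory early.

```python
import random


class ToyEnv:
    """Hypothetical search problem: reach state 10; most random walks wander."""

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        self.state += action            # action is +1 or -1
        return self.state, self.state == 10

    def final_reward(self):
        return 1.0 if self.state == 10 else 0.0


def policy(state):
    return random.choice([1, -1])


def value_estimate(state):
    # Stand-in for a learned value function: optimism fades as the walk drifts backwards.
    return max(0.0, (state + 5) / 15)


def rollout(env, use_value_fn, max_steps=1000):
    state = env.reset()
    for t in range(max_steps):
        if use_value_fn and value_estimate(state) < 0.05:
            # Intermediate signal: this trajectory is unpromising — stop now,
            # instead of burning the remaining step budget before any feedback.
            return t, env.final_reward()
        state, done = env.step(policy(state))
        if done:
            return t, env.final_reward()
    return max_steps, env.final_reward()


random.seed(0)
print("terminal-reward only:", rollout(ToyEnv(), use_value_fn=False))
print("with value function: ", rollout(ToyEnv(), use_value_fn=True))
```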

The gap between how humans and current AI systems learn suggests that human learning operates on fundamentally different principles: continuous, intermediate evaluation; robust internal models of progress and performance; the ability to self-correct and redirect effort based on internal signals rather than external reward.

Generalisation as Proof of Concept: What Human Learning Reveals

A critical move in Sutskever’s argument is this: the fact that humans generalise vastly better than current AI systems is not merely an interesting curiosity—it is proof that better generalisation is achievable. The existence of human learners demonstrates, in principle, that a learning system can operate with orders of magnitude less data whilst maintaining superior robustness and transfer capability.

This reframes the research challenge. The question is no longer whether better generalisation is possible (humans prove it is) but rather what principle or mechanism underlies it. This principle could arise from:

  • Architectural innovations: new ways of structuring neural networks that embody better inductive biases for generalisation
  • Learning algorithms: different training procedures that more efficiently extract principles from limited data
  • Value function mechanisms: intermediate feedback systems that enable more efficient learning trajectories
  • Continual learning frameworks: systems that learn continuously from interaction rather than through discrete offline training phases

What matters is that Sutskever’s claim shifts the research agenda from “get more compute” to “discover the missing principle.”

The Strategic Implications: Why This Matters Now

Sutskever’s diagnosis, articulated in November 2025, arrives at a crucial moment. The AI industry has operated under the “age of scaling” paradigm since approximately 2020. During this period, the scaling laws discovered by OpenAI and others suggested a remarkably reliable relationship: larger models trained on more data with more compute reliably produced better performance.
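
Those laws have a commonly cited empirical form, the power-law fits reported by Kaplan and colleagues at OpenAI in 2020 (the exact exponents vary by setup and should be read as indicative):

```latex
% Approximate pre-training scaling laws: loss falls as a power law in
% model parameters N, dataset size D, and compute C.
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad
L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}, \qquad
L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C}
```

The functional form already carries the strategic point: with small exponents, multiplying compute by 100 buys a constant-factor reduction in loss, not a qualitative transformation.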

This created a powerful strategic imperative: invest capital in compute, acquire data, build larger systems. The approach was low-risk from a research perspective because the outcome was relatively predictable. Companies could deploy enormous resources confident they would yield measurable returns.

By 2025, however, this model shows clear strain. Data is approaching finite limits. Computational resources, whilst vast, are not unlimited, and marginal returns diminish. Most importantly, the question has shifted: would 100 times more compute actually produce a qualitative transformation or merely incremental improvement? Sutskever’s answer is clear: the latter. This fundamentally reorients strategic thinking. If 100x scaling yields only incremental gains, the bottleneck is not compute but ideas. The competitive advantage belongs not to whoever can purchase the most GPUs but to whoever discovers the missing principle of generalisation.

Leading Theorists and Related Research Programs

Yann LeCun: World Models and Causal Learning

Yann LeCun, Meta’s Chief AI Scientist and a pioneer of deep learning, has long emphasised that current supervised learning approaches are fundamentally limited. His work on “world models”—internal representations that capture causal structure rather than mere correlation—points toward learning mechanisms that could enable better generalisation. LeCun’s argument is that humans learn causal models of how the world works, enabling robust generalisation because causal understanding is stable across contexts in a way that statistical correlation is not.

Geoffrey Hinton: Neuroscience-Inspired Learning

Geoffrey Hinton, recipient of the 2024 Nobel Prize in Physics for foundational deep learning work, has increasingly emphasised that neuroscience holds crucial clues for improving AI learning efficiency. His recent work on biological plausibility and learning mechanisms reflects a conviction that important principles of how neural systems efficiently extract generalised understanding remain undiscovered. Hinton has expressed support for Sutskever’s research agenda, recognising that the next frontier requires fundamental conceptual breakthroughs rather than incremental scaling.

Stuart Russell: Learning Under Uncertainty

Stuart Russell, UC Berkeley’s leading AI safety researcher, has articulated that robust AI alignment requires systems that remain genuinely uncertain about objectives and learn from interaction. This aligns with Sutskever’s emphasis on continual learning. Russell’s work highlights that systems designed to optimise fixed objectives without capacity for ongoing learning and adjustment tend to produce brittle, misaligned outcomes—a dynamic that improves when systems maintain epistemic humility and learn continuously.

Demis Hassabis and DeepMind’s Continual Learning Research

Demis Hassabis, CEO of DeepMind, has invested substantial research effort into systems that learn continually from environmental interaction rather than through discrete offline training phases. DeepMind’s work on continual reinforcement learning, meta-learning, and systems that adapt to new tasks reflects recognition that learning efficiency depends on how feedback is structured and integrated over time—not merely on total data quantity.

Judea Pearl: Causality and Abstraction

Judea Pearl, pioneering researcher in causal inference and probabilistic reasoning, has long argued that correlation-based learning has fundamental limits and that causal reasoning is necessary for genuine understanding and generalisation. His work on causal models and graphical representation of dependencies provides theoretical foundations for why systems that learn causal structure (rather than mere patterns) achieve better generalisation across domains.

The Research Agenda Going Forward

Sutskever’s claim that generalisation is the “very fundamental thing” reorients the entire research agenda. This shift has profound implications:

From scaling to methodology: Research emphasis moves from “how do we get more compute” to “what training procedures, architectural innovations, or learning algorithms enable human-like generalisation?”

From benchmarks to robustness: Evaluation shifts from benchmark performance to deployment reliability—how systems perform on novel, unconstrained tasks rather than carefully curated evaluations.

From monolithic pre-training to continual learning: The training paradigm shifts from discrete offline phases (pre-train, then RL, then deploy) toward systems that learn continuously from real-world interaction.

From scale as differentiator to ideas as differentiator: Competitive advantage in AI development becomes less about resource concentration and more about research insight—the organisation that discovers better generalisation principles gains asymmetric advantage.

The Deeper Question: What Humans Know That AI Doesn’t

Beneath Sutskever’s diagnostic claim lies a profound question: What do humans actually know about learning that AI systems don’t yet embody?

Humans learn efficiently because they:

  • Develop internal models of their own performance and progress (value functions)
  • Self-correct through continuous feedback rather than awaiting end-of-trajectory rewards
  • Transfer principles flexibly across domains rather than memorising domain-specific patterns
  • Learn from remarkably few examples through principled understanding rather than statistical averaging
  • Integrate feedback across time scales and contexts in ways that build robust, generalised knowledge

These capabilities do not require superhuman intelligence or extraordinary cognitive resources. A fifteen-year-old possesses them. Yet current AI systems, despite vastly larger parameter counts and more data, lack equivalent ability.

This gap is not accidental. It reflects that current AI development has optimised for the wrong targets—benchmark performance rather than genuine generalisation, scale rather than efficiency, memorisation rather than principled understanding. The next breakthrough requires not more of the same but fundamentally different approaches.

Conclusion: The Shift from Scaling to Discovery

Sutskever’s assertion that “these models somehow just generalize dramatically worse than people” is, at first glance, an observation of inadequacy. But reframed, it is actually a statement of profound optimism about what remains to be discovered. The fact that humans achieve vastly better generalisation proves that better generalisation is possible. The task ahead is not to accept poor generalisation as inevitable but to discover the principle that enables human-like learning efficiency.

This diagnostic shift—from “we need more compute” to “we need better understanding of generalisation”—represents the intellectual reorientation of AI research in 2025 and beyond. The age of scaling is ending not because scaling is impossible but because it has approached its productive limits. The age of research into fundamental learning principles is beginning. What emerges from this research agenda may prove far more consequential than any previous scaling increment.

Quote: Ilya Sutskever – Safe Superintelligence

“Is the belief really, ‘Oh, it’s so big, but if you had 100x more, everything would be so different?’ It would be different, for sure. But is the belief that if you just 100x the scale, everything would be transformed? I don’t think that’s true. So it’s back to the age of research again, just with big computers.” – Ilya Sutskever – Safe Superintelligence

Ilya Sutskever stands as one of the most influential figures in modern artificial intelligence—a scientist whose work has fundamentally shaped the trajectory of deep learning over the past decade. As co-author of the seminal 2012 AlexNet paper, he helped catalyse the deep learning revolution that transformed machine vision and launched the contemporary AI era. His influence extends through his role as Chief Scientist at OpenAI, where he played a pivotal part in developing GPT-2 and GPT-3, the models that established large-scale language model pre-training as the dominant paradigm in AI research.

In 2024, Sutskever departed OpenAI and co-founded Safe Superintelligence Inc. (SSI) alongside Daniel Gross and Daniel Levy, positioning the company as the world’s “first straight-shot SSI lab”—an organisation with a single focus: developing safe superintelligence without distraction from product development or revenue generation. The company has since raised $3 billion and reached a $32 billion valuation, reflecting investor confidence in Sutskever’s strategic vision and reputation.

The Context: The Exhaustion of Scaling

Sutskever’s quoted observation emerges from a moment of genuine inflection in AI development. For roughly five years—from 2020 to 2025—the AI industry operated under what he terms the “age of scaling.” This era was defined by a simple, powerful insight: that scaling pre-training data, computational resources, and model parameters yielded predictable improvements in model performance. Organisations could invest capital with low perceived risk, knowing that more compute plus more data plus larger models would reliably produce measurable gains.

This scaling paradigm was extraordinarily productive. It yielded GPT-3, GPT-4, and an entire generation of frontier models that demonstrated capabilities that astonished both researchers and the public. The logic was elegant: if you wanted better AI, you simply scaled the recipe. Sutskever himself was instrumental in validating this approach. The word “scaling” became conceptually magnetic, drawing resources, attention, and organisational focus toward a single axis of improvement.

Yet by 2024–2025, that era began showing clear signs of exhaustion. Data is finite—the amount of high-quality training material available on the internet is not infinite, and organisations are rapidly approaching meaningful constraints on pre-training data supply. Computational resources, whilst vast, are not unlimited, and the economic marginal returns on compute investment have become less obvious. Most critically, the empirical question has shifted: if current frontier labs have access to extraordinary computational resources, would 100 times more compute actually produce a qualitative transformation in capabilities, or merely incremental improvement?

Sutskever’s answer is direct: incremental, not transformative. This reframing is consequential because it redefines where the bottleneck actually lies. The constraint is no longer the ability to purchase more GPUs or accumulate more data. The constraint is ideas—novel technical approaches, new training methodologies, fundamentally different recipes for building AI systems.

The Jaggedness Problem: Theory Meeting Reality

One critical observation animates Sutskever’s thinking: a profound disconnect between benchmark performance and real-world robustness. Current models achieve superhuman performance on carefully constructed evaluation tasks—yet in deployment, they exhibit what Sutskever calls “jagged” behaviour. They repeat errors, introduce new bugs whilst fixing old ones, and cycle between mistakes even when given clear corrective feedback.

This apparent paradox suggests something deeper than mere data or compute insufficiency. It points to inadequate generalisation—the inability to transfer learning from narrow, benchmark-optimised domains into the messy complexity of real-world application. Sutskever frames this through an analogy: a competitive programmer who practises 10,000 hours on competition problems will be highly skilled within that narrow domain but often fails to transfer that knowledge flexibly to broader engineering challenges. Current models, in his assessment, resemble that hyper-specialised competitor rather than the flexible, adaptive learner.

The Core Insight: Generalisation Over Scale

The central thesis animating Sutskever’s work at SSI—and implicit in his quote—is that human-like generalisation and learning efficiency represent a fundamentally different ML principle than scaling, one that has not yet been discovered or operationalised within contemporary AI systems.

Humans learn with orders of magnitude less data than large models yet generalise far more robustly to novel contexts. A teenager learns to drive in roughly ten hours of practice; current AI systems struggle to acquire equivalent robustness with vastly more training data. This is not because humans possess specialised evolutionary priors for driving (a recent activity that evolution could not have optimised for); rather, it suggests humans employ a more general-purpose learning principle that contemporary AI has not yet captured.

Sutskever hypothesises that this principle is connected to what he terms “value functions”—internal mechanisms akin to emotions that provide continuous, intermediate feedback on actions and states, enabling more efficient learning than end-of-trajectory reward signals alone. Evolution appears to have hard-coded robust value functions—emotional and evaluative systems—that make humans viable, adaptive agents across radically different environments. Whether an equivalent principle can be extracted purely from pre-training data, rather than built into learning architecture, remains uncertain.

The Leading Theorists and Related Work

Yann LeCun and Data Efficiency

Yann LeCun, Meta’s Chief AI Scientist and a pioneer of deep learning, has long emphasised the importance of learning efficiency and the role of what he terms “world models” in understanding how agents learn causal structure from limited data. His work highlights that human vision achieves remarkable robustness despite the scarcity of developmental data—children recognise cars after seeing far fewer exemplars than AI systems require—suggesting that the brain employs inductive biases or learning principles that current architectures lack.

Geoffrey Hinton and Neuroscience-Inspired AI

Geoffrey Hinton, winner of the 2024 Nobel Prize in Physics for his work on deep learning, has articulated concerns about AI safety and expressed support for Sutskever’s emphasis on fundamentally rethinking how AI systems learn and align. Hinton’s career-long emphasis on biologically plausible learning mechanisms—from Boltzmann machines to capsule networks—reflects a conviction that important principles for efficient learning remain undiscovered and that neuroscience offers crucial guidance.

Stuart Russell and Alignment Through Uncertainty

Stuart Russell, UC Berkeley’s leading AI safety researcher, has emphasised that robust AI alignment requires systems that remain genuinely uncertain about human values and continue learning from interaction, rather than attempting to encode fixed objectives. This aligns with Sutskever’s thesis that safe superintelligence requires continual learning in deployment rather than monolithic pre-training followed by fixed RL optimisation.

Demis Hassabis and Continual Learning

Demis Hassabis, CEO of DeepMind and a co-developer of AlphaGo, has invested significant research effort into systems that learn continually rather than through discrete training phases. This work recognises that biological intelligence fundamentally involves interaction with environments over time, generating diverse signals that guide learning—a principle SSI appears to be operationalising.

The Paradigm Shift: From Offline to Online Learning

Sutskever’s thinking reflects a broader intellectual shift visible across multiple frontiers of AI research. The dominant pre-training + RL framework assumes a clean separation: a model is trained offline on fixed data, then post-trained with reinforcement learning, then deployed. Increasingly, frontier researchers are questioning whether this separation reflects how learning should actually work.
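
The two framings can be written as two loops around a deliberately trivial stand-in model (nothing below describes SSI’s or any lab’s actual stack): in the first, deployment teaches the model nothing; in the second, every interaction is also a training signal.

```python
class StubModel:
    """A stand-in 'model' whose only parameter is a single bias term."""

    def __init__(self):
        self.bias = 0.0

    def pretrain(self, corpus):
        self.bias = sum(corpus) / len(corpus)   # phase 1: offline, fixed data

    def respond(self, request):
        return request + self.bias

    def update(self, error):
        self.bias -= 0.1 * error                # learning continues from feedback


def offline_paradigm(model, corpus, requests):
    """Pre-train, then deploy with frozen weights: deployment teaches nothing."""
    model.pretrain(corpus)
    return [model.respond(r) for r in requests]


def continual_paradigm(model, corpus, requests, targets):
    """Deployment and learning interleave: every interaction is also a signal."""
    model.pretrain(corpus)
    answers = []
    for r, target in zip(requests, targets):
        answer = model.respond(r)
        model.update(answer - target)           # feedback from real-world use
        answers.append(answer)
    return answers


requests, targets = [1.0, 2.0, 3.0], [1.5, 2.5, 3.5]
print(offline_paradigm(StubModel(), corpus=[0.0, 0.2], requests=requests))
print(continual_paradigm(StubModel(), corpus=[0.0, 0.2], requests=requests, targets=targets))
```

In the continual version the errors shrink with each interaction; in the offline version they stay fixed, which is the separation being questioned.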

His articulation of the “age of research” signals a return to intellectual plurality and heterodox experimentation—the opposite of the monoculture that the scaling paradigm created. When everyone is racing to scale the same recipe, innovation becomes incremental. When new recipes are required, diversity of approach becomes an asset rather than a liability.

The Stakes and Implications

This reframing carries significant strategic implications. If the bottleneck is truly ideas rather than compute, then smaller, more cognitively coherent organisations with clear intellectual direction may outpace larger organisations constrained by product commitments, legacy systems, and organisational inertia. If the key innovation is a new training methodology—one that achieves human-like generalisation through different mechanisms—then the first organisation to discover and validate it may enjoy substantial competitive advantage, not through superior resources but through superior understanding.

Equally, this framing challenges the common assumption that AI capability is primarily a function of computational spend. If methodological innovation matters more than scale, the future of AI leadership becomes less a question of capital concentration and more a question of research insight—less about who can purchase the most GPUs, more about who can understand how learning actually works.

Sutskever’s quote thus represents not merely a rhetorical flourish but a fundamental reorientation of strategic thinking about AI development. The age of confident scaling is ending. The age of rigorous research into the principles of generalisation, sample efficiency, and robust learning has begun.

Quote: Warren Buffett – Investor

“Never invest in a company without understanding its finances. The biggest losses in stocks come from companies with poor balance sheets.” – Warren Buffett – Investor

This statement encapsulates Warren Buffett’s foundational conviction that a thorough understanding of a company’s financial health is essential before any investment is made. Buffett, revered as one of the world’s most successful and influential investors, has built his career—and the fortunes of Berkshire Hathaway shareholders—by analysing company financials with forensic precision and prioritising robust balance sheets. A poor balance sheet typically signals overleveraging, weak cash flows, and vulnerability to adverse market cycles, all of which heighten the risk of capital loss.

Buffett’s approach can be traced directly to the principles of value investing: only purchase businesses trading below their intrinsic value, and rigorously avoid companies whose finances reveal underlying weakness. This discipline shields investors from the pitfalls of speculation and market fads. Paramount to this method is the margin of safety, a concept Buffett adopted from Benjamin Graham: a buffer between a company’s market price and its estimated intrinsic worth, designed to mitigate downside risks, especially those stemming from fragile balance sheets. His preference for quality over quantity similarly reflects a bias towards investing larger sums in a select number of financially sound companies rather than spreading capital across numerous questionable prospects.
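
As a purely illustrative piece of arithmetic, with invented company figures and an assumed intrinsic-value estimate, the margin of safety and a basic balance-sheet leverage check can be written in a few lines of Python:

```python
def margin_of_safety(intrinsic_value: float, market_price: float) -> float:
    """Fraction by which the price sits below the estimated intrinsic value."""
    return (intrinsic_value - market_price) / intrinsic_value

def debt_to_equity(total_debt: float, shareholder_equity: float) -> float:
    """Simple leverage gauge; high values signal a fragile balance sheet."""
    return total_debt / shareholder_equity

# Invented figures: estimated worth $80 per share, trading at $50,
# with $200m of debt against $500m of shareholder equity.
buffer = margin_of_safety(intrinsic_value=80.0, market_price=50.0)      # 0.375
leverage = debt_to_equity(total_debt=200e6, shareholder_equity=500e6)   # 0.40

print(f"Margin of safety: {buffer:.1%}")
print(f"Debt-to-equity:   {leverage:.2f}")
```

The hard part, of course, is the intrinsic-value estimate itself, which is precisely why Buffett insists on understanding the finances before running any such arithmetic.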

Throughout his career, Buffett has consistently advocated for investing only in businesses that one fully understands. He famously avoids complexity and “fashionable trends,” stating that clarity and financial strength supersede cleverness or hype. His guiding mantra, “never lose money,” and its companion reminder, “never forget the first rule,” further reinforce his risk-averse methodology.

Background on Warren Buffett

Born in 1930 in Omaha, Nebraska, Warren Buffett demonstrated an early fascination with business and investing. He operated as a stockbroker, bought and sold pinball machines, and eventually took over Berkshire Hathaway, transforming it from a struggling textile manufacturer into a global conglomerate. His stewardship is defined not only by outsized returns, but by a consistent, rational framework for capital allocation; he eschews speculation and prizes businesses with predictable earnings, capable leadership, and resilient competitive advantages. Buffett’s investment tenets, traced back to Benjamin Graham and refined with Charlie Munger, remain the benchmark for disciplined, risk-conscious investing.

Leading Theorists on Financial Analysis and Value Investing

The intellectual foundation of Buffett’s philosophy rests predominantly on the work of Benjamin Graham and, subsequently, David Dodd:

  • Benjamin Graham
    Often characterised as the “father of value investing,” Graham developed a rigorous framework for asset selection based on demonstrable financial solidity. His landmark work, The Intelligent Investor (1949), formalised the notion of intrinsic value, margin of safety, and the critical analysis of financial statements. Graham’s empirical, rules-based approach sought to remove emotion from investment decision-making, placing systematic, intensive financial review at the forefront.
  • David Dodd
    Co-author of Security Analysis with Graham, Dodd expanded and codified approaches for in-depth business valuation, championing comprehensive audit of balance sheets, income statements, and cash flow reports. The Graham-Dodd method remains the global standard for security analysis.
  • Charlie Munger
    Buffett’s long-time business partner, Charlie Munger, is credited with shaping the evolution from mere statistical bargains (“cigar butt” investing) towards businesses with enduring competitive advantage. Munger advocates a broadened mental toolkit (“worldly wisdom”) integrating qualitative insights—on management, culture, and durability—with rigorous financial vetting.
  • Peter Lynch
    Known for managing the Magellan Fund at Fidelity, Lynch famously encouraged investors to “know what you own,” reinforcing the necessity of understanding a business’s financial fibre before participation. He also stressed that the gravest investing errors stem from neglecting financial fundamentals, echoing Buffett’s caution on poor balance sheets.
  • John Bogle
    As the founder of Vanguard and creator of the first retail index mutual fund, Bogle is best known for advocating broad diversification, but he also warned sharply against investing in companies without sound financial disclosure, since the failure of individual companies compounds the risks investors already bear from the broader market.

Conclusion of Context

Buffett’s quote is not merely a rule-of-thumb—it expresses one of the most empirically validated truths in investment history: deep analysis of company finances is indispensable to avoiding catastrophic losses. The theorists who shaped this doctrine did so by instituting rigorous standards and repeatable frameworks that continue to underpin modern investment strategy. Buffett’s risk-averse, fundamentals-rooted vision stands as a beacon of prudence in an industry rife with speculation. His enduring message—understand the finances; invest only in quality—remains the starting point for both novice and veteran investors seeking resilience and sustainable wealth.

read more
Quote: Sam Walton – American retail pioneer

Quote: Sam Walton – American retail pioneer

“Great ideas come from everywhere if you just listen and look for them. You never know who’s going to have a great idea.” – Sam Walton – American retail pioneer

This quote epitomises Sam Walton’s core leadership principle—openness to ideas from all levels of an organisation. Walton, the founder of Walmart and Sam’s Club, was known for his relentless focus on operational efficiency, cost leadership, and, crucially, a culture that actively valued contributions from employees at every tier.

Walton’s approach stemmed from his own lived experience. Born in 1918 in rural Oklahoma, he grew up during the Great Depression—a time that instilled a profound respect for hard work and creative problem-solving. After service in the US Army, he managed a series of Ben Franklin variety stores. Denied the opportunity to pilot a new discount retail model by his franchisor, Walton struck out on his own, opening the first Walmart in Rogers, Arkansas in 1962, funded chiefly through personal risk and relentless ambition.

From the outset, Walton positioned himself as a learner—famously travelling across the United States to observe competitors and often spending time on the shop floor listening to the insights of front-line staff and customers. He believed valuable ideas could emerge from any source—cashiers, cleaners, managers, or suppliers—and his instinct was to capitalise on this collective intelligence.

His management style, shaped by humility and a drive to democratise innovation, helped Walmart scale from a single store to the world’s largest retailer by the early 1990s. The company’s relentless growth and robust internal culture were frequently attributed to Walton’s ability to source improvements and innovations bottom-up rather than solely relying on top-down direction.

About Sam Walton

Sam Walton (1918–1992) was an American retail pioneer who, from modest beginnings, changed global retailing. His vision for Walmart was centred on three guiding principles:

  • Offering low prices for everyday goods.
  • Maintaining empathetic customer service.
  • Cultivating a culture of shared ownership and continual improvement through employee engagement.

Despite his immense success and wealth, Walton was celebrated for his modesty—driving a used pickup, wearing simple clothes, and living in the same town where his first store opened. He ultimately built a business empire that, by 1992, encompassed over 2,000 stores and employed more than 380,000 people.

Leading Theorists Related to the Subject Matter

Walton’s quote and philosophy connect to three key schools of thought in innovation and management theory:

1. Peter Drucker
Peter Drucker, often called the father of modern management, urged leaders to stay closely connected to their organisations and to draw on the intelligence of the workforce when making decisions. Drucker taught that innovation is an organisational discipline, not the exclusive preserve of senior leadership or R&D specialists.

2. Henry Chesbrough
Chesbrough developed the concept of open innovation, which posits that breakthrough ideas often originate outside a company’s traditional boundaries. He argued that organisations should purposefully encourage inflow and outflow of knowledge to accelerate innovation and create value, echoing Walton’s insistence that great ideas can (and should) come from anywhere.

3. Simon Sinek
In his influential work Start with Why, Sinek explores the notion that transformational leaders elicit deep engagement and innovative thinking by grounding teams in purpose (“Why”). Sinek argues that companies embed innovation into their DNA when leaders empower all employees to contribute to improvement and strategic direction.

In summary, the theorists, their core ideas, and their relevance to Walton’s approach:

  • Peter Drucker: close, broad-based engagement with the workforce; reflected in Walton’s direct engagement with staff.
  • Henry Chesbrough: open innovation, with ideas flowing in and out of the organisation; reflected in Walton’s receptivity to ideas beyond the hierarchy.
  • Simon Sinek: purpose-based leadership for innovation and loyalty; reflected in Walton’s mission-driven, inclusive ethos.

Additional Relevant Thinkers and Concepts

  • Clayton Christensen: In The Innovator’s Dilemma, he highlights how disruptive innovation is frequently initiated by those closest to the customer or the front line, not at the corporate pinnacle.
  • Eric Ries: In The Lean Startup, Ries argues it is the fast feedback and agile learning from the ground up that enables organisations to innovate ahead of competitors—a direct parallel to Walton’s method of sourcing and testing ideas rapidly in store environments.

Sam Walton’s lasting impact is not just Walmart’s size, but the conviction that listening widely—to employees, customers, and the broader community—unlocks the innovations that fuel lasting competitive advantage. This belief is increasingly echoed in modern leadership thinking and remains foundational for organisations hoping to thrive in a fast-changing world.

read more
Quote: Dr Eric Schmidt – Ex-Google CEO

Quote: Dr Eric Schmidt – Ex-Google CEO

“The win will be teaming between a human and their judgment and a supercomputer and what it can think.” – Dr Eric Schmidt – Former Google CEO

Dr Eric Schmidt is recognised globally as a principal architect of the modern digital era. He served as CEO of Google from 2001 to 2011, guiding its evolution from a fast-growing startup into a cornerstone of the tech industry. His leadership was instrumental in scaling Google’s infrastructure, accelerating product innovation, and instilling a model of data-driven culture that underpins contemporary algorithms and search technologies. After stepping down as CEO, Schmidt remained pivotal as Executive Chairman and later as Technical Advisor, shepherding Google’s transition to Alphabet and advocating for long-term strategic initiatives in AI and global connectivity.

Schmidt’s influence extends well beyond corporate leadership. He has played policy-shaping roles at the highest levels, including chairing the US National Security Commission on Artificial Intelligence and advising multiple governments on technology strategy. His career is marked by a commitment to both technical progress and the responsible governance of innovation, positioning him at the centre of debates on AI’s promises, perils, and the necessity of human agency in the face of accelerating machine intelligence.

Context of the Quotation: Human–AI Teaming

Schmidt’s statement emerged during high-level discussions about the trajectory of AI, particularly in the context of autonomous systems, advanced agents, and the potential arrival of superintelligent machines. Rather than portraying AI as a force destined to replace humans, Schmidt advocates a model wherein the greatest advantage arises from joint endeavour: humans bring creativity, ethical discernment, and contextual understanding, while supercomputers offer vast capacity for analysis, pattern recognition, and iterative reasoning.

This principle is visible in contemporary AI deployments. For example:

  • In drug discovery, AI systems can screen millions of molecular variants in a day, but strategic insights and hypothesis generation depend on human researchers.
  • In clinical decision-making, AI augments the observational scope of physicians—offering rapid, precise diagnoses—but human judgement is essential for nuanced cases and values-driven choices.
  • Schmidt points to future scenarios where “AI agents” conduct scientific research, write code by natural-language command, and collaborate across domains, yet require human partnership to set objectives, interpret outcomes, and provide oversight.
  • He underscores that autonomous AI agents, while powerful, must remain under human supervision, especially as they begin to develop their own procedures and potentially opaque modes of communication.

Underlying this vision is a recognition: AI is a multiplier, not a replacement, and the best outcomes will couple human judgement with machine cognition.

Relevant Leading Theorists and Critical Backstory

This philosophy of human–AI teaming aligns with and is actively debated by several leading theorists:

  • Stuart Russell
    Professor at UC Berkeley, Russell is renowned for his work on human-compatible AI. He contends that the long-term viability of artificial intelligence requires that systems are designed to understand and comply with human preferences and values. Russell has championed the view that human oversight and interpretability are non-negotiable as intelligence systems become more capable and autonomous.
  • Fei-Fei Li
    Stanford Professor and co-founder of AI4ALL, Fei-Fei Li is a major advocate for “human-centred AI.” Her research highlights that AI should augment human potential, not supplant it, and she stresses the critical importance of interdisciplinary collaboration. She is a proponent of AI systems that foster creativity, support decision-making, and preserve agency and dignity.
  • Demis Hassabis
    Founder and CEO of DeepMind, Hassabis’s group famously developed AlphaGo and AlphaFold. DeepMind’s work demonstrates the principle of human–machine teaming: AI systems solve previously intractable problems, such as protein folding, that can only be understood and validated with strong human scientific context.
  • Gary Marcus
    A prominent AI critic and academic, Marcus warns against overestimating current AI’s capacity for judgment and abstraction. He pursues hybrid models where symbolic reasoning and statistical learning are paired with human input to overcome the limitations of “black-box” models.
  • Eric Schmidt’s own contributions reflect active engagement with these paradigms, from his advocacy for AI regulatory frameworks to public warnings about the risks of unsupervised AI, including “unplugging” AI systems that operate beyond human understanding or control.

Structural Forces and Implications

Schmidt’s perspective is informed by several notable trends:

  • Expansion towards effectively infinite context windows: Models can now process millions of words and reason through intricate problems, with humans guiding multi-step solutions; this is a paradigm shift for fields like climate research, pharmaceuticals, and engineering.
  • Proliferation of autonomous agents: AI agents capable of learning, experimenting, and collaborating independently across complex domains are rapidly becoming central; their effectiveness is maximised when humans set goals and interpret results.
  • Democratisation paired with concentration of power: As AI accelerates innovation, the risk of centralised control emerges; Schmidt calls for international cooperation and proactive governance to keep objectives aligned with human interests.
  • Chain-of-thought reasoning and explainability: Advanced models can simulate extended problem-solving, but meaningful solutions depend on human guidance, interpretation, and critical thinking.

Summary

Eric Schmidt’s quote sits at the intersection of optimistic technological vision and pragmatic governance. It reflects decades of strategic engagement with digital transformation, and echoes leading theorists’ consensus: the future of AI is collaborative, and its greatest promise lies in amplifying human judgment with unprecedented computational support. Realising this future will depend on clear policies, interdisciplinary partnership, and an unwavering commitment to ensuring technology remains a tool for human advancement—and not an unfettered automaton beyond our reach.

read more
Quote: Dr. Fei-Fei Li – Stanford Professor – world-renowned authority in artificial intelligence

Quote: Dr. Fei-Fei Li – Stanford Professor – world-renowned authority in artificial intelligence

“I do think countries all should invest in their own human capital, invest in partnerships and invest in their own technological stack as well as the business ecosystem… I think not investing in AI would be macroscopically the wrong thing to do.” – Dr. Fei-Fei Li – Stanford Professor – world-renowned authority in artificial intelligence

The statement was delivered during a high-stakes panel discussion on artificial superintelligence, convened at the Future Investment Initiative in Riyadh, where nation-state leaders, technologists, and investors gathered to assess their strategic positioning in the emerging AI era. Her words strike at the heart of a dilemma facing governments worldwide: how to build national AI capabilities whilst avoiding the trap of isolationism, and why inaction would be economically and strategically untenable.

Context: The Geopolitical Stakes of AI Investment

The Historical Moment

Dr. Li’s statement comes at a critical juncture. By late 2024 and into 2025, artificial intelligence had transitioned from speculative technology to demonstrable economic driver. Estimates suggested AI could generate between $15 trillion and $20 trillion in economic value globally by 2030, a figure larger than the annual GDP of every country except the United States and China. This windfall is not distributed evenly; rather, it concentrates among early movers with capital, infrastructure, and talent. The race is on, and the stakes are existential for national competitiveness, employment, and geopolitical influence.

In this landscape, a nation that fails to invest in AI capabilities risks profound economic displacement. Yet Dr. Li is equally clear: isolation is counterproductive. The most realistic path forward combines three pillars:

  • Human Capital: The talent to conceive, build, and deploy AI systems
  • Partnerships: Strategic alliances, particularly with leading technological ecosystems (the US hyperscalers, for instance)
  • Domestic Technological Infrastructure: The local research bases, venture capital, regulatory frameworks, and business ecosystems that enable sustained innovation

This is not a counsel of surrender to Silicon Valley hegemony, but rather a sophisticated argument about comparative advantage and integration within global technological networks.

Dr. Fei-Fei Li: The Person and Her Arc

Early Life and Foundational Values

Dr. Fei-Fei Li’s perspective is shaped by her personal trajectory. Born in Beijing and raised in Chengdu, China, she emigrated to the United States at age fifteen, settling in New Jersey where her parents ran a small business. This background infuses her thinking: she understands both the promise of technological mobility and the structural barriers that constrain developing economies. She obtained her undergraduate degree in physics from Princeton University in 1999, with high honours, before pursuing doctoral studies at the California Institute of Technology, where she worked across computer science, electrical engineering, and cognitive neuroscience, earning her PhD in 2005.

The ImageNet Revolution

In 2007, whilst at Princeton, Dr. Li embarked on a project that would reshape artificial intelligence. Observing that cognitive psychologist Irving Biederman estimated humans recognise approximately 30,000 object categories, Li conceived ImageNet: a massive, hierarchically organised visual database. Colleagues dismissed the scale as impractical. Undeterred, she led a team, including Princeton professor Kai Li and graduate students Jia Deng and Wei Dong, that leveraged Amazon Mechanical Turk to label over 14 million images across 22,000 categories.

By 2009, ImageNet was published. More critically, the team created the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), an annual competition that invited researchers worldwide to develop algorithms for image classification. This contest became the crucible in which modern deep learning was forged. When Geoffrey Hinton’s group won the 2012 competition by a decisive margin with AlexNet, a convolutional neural network, the deep learning revolution was catalysed. ImageNet is now widely recognised, alongside advances in computing hardware and neural-network algorithms, as one of the three foundational forces behind modern AI.

What is instructive here is that Dr. Li’s contribution was not merely technical but infrastructural: she created a shared resource that democratised AI research globally. Academic groups from universities across continents—not just Silicon Valley—could compete on equal footing. This sensibility—that progress depends on enabling distributed talent—runs through her subsequent work.

Career Architecture and Strategic Leadership

Following her Princeton years, Dr. Li joined Stanford University in 2009, eventually becoming the Sequoia Capital Professor of Computer Science—a title of singular prestige. From 2013 to 2018, she directed Stanford’s Artificial Intelligence Lab (SAIL), one of the world’s premier research institutes. Her publications exceed 400 papers in top-tier venues, and she remains one of the most cited computer scientists of her generation.

During a sabbatical from Stanford (January 2017 to September 2018), Dr. Li served as Vice President and Chief Scientist of AI/ML at Google Cloud. Her mandate was to democratise AI technology, lowering barriers for businesses and developers—work that included advancing products like AutoML, which enabled organisations without deep AI expertise to deploy machine learning systems.

Upon returning to Stanford in 2019, she became the founding co-director of the Stanford Institute for Human-Centered Artificial Intelligence (HAI), an explicitly multidisciplinary initiative spanning computer science, social sciences, humanities, law, and medicine—all united by the conviction that AI must serve human flourishing, not vice versa.

Current Work and World Labs

Most recently, Dr. Li co-founded and serves as chief executive officer of World Labs, an AI company focused on spatial intelligence and generative world models. This venture extends her intellectual agenda: if large language models learn patterns over text, world models learn patterns over 3D environments, enabling machines to understand, simulate, and reason about physical and virtual spaces. For robotics, healthcare simulation, autonomous systems, and countless other domains, this represents the next frontier.

Recognition and Influence

Her standing is reflected in numerous accolades: election to the National Academy of Engineering, the National Academy of Medicine (2020), and the American Academy of Arts and Sciences (2021); the Intel Lifetime Achievement Innovation Award in 2023; and inclusion in Time magazine’s 100 Most Influential People in AI. She is colloquially known as the “Godmother of AI.” In 2023, she published a memoir, The Worlds I See: Curiosity, Exploration and Discovery at the Dawn of AI, which chronicles her personal journey and intellectual evolution.

Leading Theorists and Strategic Thinkers: The Landscape of AI and National Strategy

The backdrop to Dr. Li’s statement includes several strands of thought about technology, development, and national strategy:

Economic and Technological Diffusion

  • Erik Brynjolfsson and Andrew McAfee (The Second Machine Age; Machine, Platform, Crowd): These MIT researchers have articulated how technological revolutions create winners and losers, and that policy choices—not technology alone—determine whether gains are broadly shared. They underscore that without intentional intervention, automation and AI tend to concentrate wealth and opportunity.
  • Dani Rodrik (Harvard economist): Rodrik’s work on “premature deindustrialisation” and structural transformation highlights the risks faced by developing economies when technological progress accelerates faster than institutions can adapt. His analysis supports Dr. Li’s argument: countries must actively build capacity or risk being left behind.
  • Mariana Mazzucato (University College London): Mazzucato’s research on the entrepreneurial state emphasises that breakthrough innovations—including AI—depend on public investment in foundational research, education, and infrastructure. Her work buttresses the case for public and private sector partnership.

Artificial Intelligence and Cognition

  • Geoffrey Hinton, Yann LeCun, and Yoshua Bengio: These three deep learning pioneers recognised that neural networks could scale to superhuman performance in perception and pattern recognition, yet they have increasingly stressed that current approaches may be insufficient for general intelligence. Their candour about limitations supports a measured, long-term investment view.
  • Stuart Russell (UC Berkeley): Russell has been a prominent voice calling for AI safety and governance frameworks to accompany capability development. His framing aligns with Dr. Li’s insistence that human-centred values must guide AI research and deployment.

Geopolitics and Technology Competition

  • Michael Mazarr (RAND Corporation): Mazarr and colleagues have analysed great-power competition in emerging technologies, emphasising that diffusion of capability is inevitable but the pace and terms of diffusion are contestable. Nations that invest in talent pipelines and partnerships will sustain influence; those that isolate will atrophy.
  • Kai-Fu Lee: The Taiwanese-American venture capitalist and author (AI Superpowers) has articulated how the US and China are in a competitive race, but also how smaller nations and regions can position themselves through strategic partnerships and focus on applied AI problems relevant to their economies.
  • Eric Schmidt (former Google CEO): Schmidt, who participated in the same FII panel as Dr. Li, has emphasised that geopolitical advantage flows to nations with capital markets, advanced chip fabrication (such as Taiwan’s TSMC), and deep talent pools. Yet he has also highlighted pathways for other nations to benefit through partnerships and focused investment in particular domains.

Human-Centred Technology and Inclusive Growth

  • Timnit Gebru and Joy Buolamwini: These AI ethics researchers have exposed how AI systems can perpetuate bias and harm marginalised communities. Their work reinforces Dr. Li’s emphasis on human-centred design and inclusive governance. For developing nations, this implies that AI investment must account for local contexts, values, and risks of exclusion.
  • Turing Award recipients and foundational figures (such as Barbara Liskov on systems reliability, and Leslie Valiant on learning theory): Their sustained emphasis on rigour, safety, and verifiability underpins the argument that sustainable AI development requires not just speed but also deep technical foundations—something that human capital investment cultivates.

Development Economics and Technology Transfer

  • Paul Romer (Nobel laureate): Romer’s work on endogenous growth emphasises that ideas and innovation are the drivers of long-term prosperity. For developing nations, this implies that investment in research capacity, education, and institutional learning—not merely adopting foreign technologies—is essential.
  • Ha-Joon Chang: The heterodox development economist has critiqued narratives of “leapfrogging” technology. His argument suggests that nations building indigenous technological ecosystems—through domestic investment in research, venture capital, and entrepreneurship—are more resilient and capable of adapting innovations to local needs.

The Three Pillars: An Unpacking

Dr. Li’s framework is sophisticated precisely because it avoids two traps: technological nationalism (the fantasy that any nation can independently build world-leading AI from scratch) and technological fatalism (the resignation that small and medium-sized economies cannot compete).

Human Capital

The most portable, scalable asset a nation can develop is talent. This encompasses:

  • Education pipelines: From primary through tertiary education, with emphasis on mathematics, computer science, and critical thinking
  • Research institutions: Universities, national laboratories, and research councils capable of contributing to fundamental and applied AI knowledge
  • Retention and diaspora engagement: Policies to keep talented individuals from emigrating, and mechanisms to attract expatriate expertise
  • Diversity and inclusion: As Dr. Li has emphasised through her co-founding of AI4ALL (a nonprofit working to increase diversity in AI), innovation benefits from diverse perspectives and draws from broader talent pools

Partnerships

Rather than isolating, Dr. Li advocates for strategic alignment:

  • North-South partnerships: Developed nations’ hyperscalers and technology firms partnering with developing economies to establish data centres, training programmes, and applied research initiatives. Saudi Arabia and the UAE have pursued this model successfully
  • South-South cooperation: Peer learning and knowledge exchange among developing nations facing similar challenges
  • Academic and research collaborations: Open-source tools, shared benchmarks (as exemplified by ImageNet), and collaborative research that diffuse capability globally
  • Technology licensing and transfer agreements: Mechanisms by which developing nations can access cutting-edge tools and methods at affordable terms

Technological Stack and Ecosystem

A nation cannot simply purchase AI capability; it must develop home-grown institutional and commercial ecosystems:

  • Open-source communities: Participation in and contribution to open-source AI frameworks (PyTorch, TensorFlow, Hugging Face) builds local expertise and reduces dependency on proprietary systems
  • Venture capital and startup ecosystems: Policies fostering entrepreneurship in AI applications suited to local economies (agriculture, healthcare, manufacturing)
  • Regulatory frameworks: Balanced approaches to data governance, privacy, and AI safety that neither stifle innovation nor endanger citizens
  • Domain-specific applied AI: Rather than competing globally in large language models, nations can focus on AI applications addressing pressing local challenges: medical diagnostics, precision agriculture, supply-chain optimisation, or financial inclusion

Why Inaction Is “Macroscopically the Wrong Thing”

Dr. Li’s assertion that not investing in AI would be fundamentally mistaken rests on several converging arguments:

Economic Imperatives

AI is reshaping productivity across sectors. Nations that fail to develop internal expertise will find themselves dependent on foreign technology, unable to adapt solutions to local contexts, and vulnerable to supply disruptions or geopolitical pressure. The competitive advantage flows to early movers and sustained investors.

Employment and Social Cohesion

While AI will displace some jobs, it will create others—particularly for workers skilled in AI-adjacent fields. Nations that invest in reskilling and education can harness these transitions productively. Those that do not risk deepening inequality and social fracture.

Sovereignty and Resilience

Over-reliance on foreign AI systems limits national agency. Whether in healthcare, defence, finance, or public administration, critical systems should rest partly on domestic expertise and infrastructure to ensure resilience and alignment with national values.

Participation in Global Governance

As AI governance frameworks emerge—whether through the UN, regional bodies, or multilateral forums—nations with substantive technical expertise and domestic stakes will shape the rules. Those without will have rules imposed upon them.

The Tension and Its Resolution

Implicit in Dr. Li’s statement is a tension worth articulating: the world cannot support 200 competing AI superpowers, each building independent foundational models. Capital and talent are finite. Yet neither is the world a binary of a few AI leaders and many followers. The resolution lies in specialisation and integration:

  • A nation may not lead in large language models but excel in robotics for agriculture
  • It may not build chips but pioneer AI applications in healthcare or education
  • It may not host hyperscaler data centres but contribute essential research in AI safety or fairness
  • It will necessarily depend on global partnerships whilst developing sovereign capacity in domains critical to its citizens

This is neither capitulation nor isolation, but rather a mature acceptance of global interdependence coupled with strategic autonomy in domains of national importance.

Conclusion: The Compass for National Strategy

Dr. Li’s counsel, grounded in decades of research leadership, industrial experience, and global perspective, offers a compass for policymakers navigating the AI era. Investment in human capital, strategic partnerships, and home-grown technological ecosystems is not a luxury or academic exercise—it is fundamental to national competitiveness, prosperity, and agency. The alternative—treating AI as an external force to be passively absorbed—is indeed “macroscopically” mistaken, foreclosing decades of economic opportunity and surrendering the right to shape how this powerful technology serves human flourishing.

read more
Quote: Dr. Fei-Fei Li – Stanford Professor – world-renowned authority in artificial intelligence

Quote: Dr. Fei-Fei Li – Stanford Professor – world-renowned authority in artificial intelligence

“I think robotics has a long way to go… I think the ability, the dexterity of human-level manipulation is something we have to wait a lot longer to get.” – Dr. Fei-Fei Li – Stanford Professor – world-renowned authority in artificial intelligence

While AI has made dramatic progress in perception and reasoning, machines remain far from matching the physical manipulation and dexterity of human hands.

Context of the Quote: The State and Limitations of Robotics

Dr. Li’s comment was made against the backdrop of accelerating investment and hype in artificial intelligence and robotics. While AI systems now master complex games, interpret medical scans, and facilitate large-scale automation, the field of robotics—especially with respect to dexterous manipulation and embodied interaction in the real world—remains restricted by hardware limitations, incomplete world models, and a lack of general adaptability.

  • Human dexterity involves fine motor control, real-time feedback, and a deep understanding of spatial and causal relationships. As Dr. Li emphasises, current robots struggle with tasks that are mundane for humans: folding laundry, pouring liquids, assembling diverse objects, or improvising repairs in unpredictable environments.
  • Even state-of-the-art robot arms and hands, controlled by advanced machine learning, manage select tasks in highly structured settings. Scaling to unconstrained, everyday environments has proven exceedingly difficult.
  • The launch of benchmarks such as the BEHAVIOR Challenge by Stanford, led by Dr. Li’s group, is a direct response to these limitations. The challenge simulates 1,000 everyday tasks across varied household environments, aiming to catalyse progress by publicly measuring how far the field is from truly general-purpose, dexterous robots.

Dr. Fei-Fei Li: Biography and Impact

Dr. Fei-Fei Li is a world-renowned authority in artificial intelligence, best known for foundational contributions to computer vision and the promotion of “human-centred AI”. Her career spans:

  • Academic Leadership: Professor of Computer Science at Stanford University; founding co-director of the Stanford Institute for Human-Centered AI (HAI).
  • ImageNet: Li created the ImageNet dataset, which transformed machine perception by enabling deep neural networks to outperform previous benchmarks and catalysed the modern AI revolution. This advance shaped progress in visual recognition, autonomous systems, and accessibility technologies.
  • Human-Centred Focus: Dr. Li is recognised for steering the field towards responsible, inclusive, and ethical AI, ensuring research aligns with societal needs and multidisciplinary perspectives.
  • Spatial Intelligence and Embodied AI: A core strand of her current work is in spatial intelligence—teaching machines to understand, reason about, and interact with the physical world with flexibility and safety. Her venture World Labs is pioneering this next frontier, aiming to bridge the gap from words to worlds.
  • Recognition: She was awarded the Queen Elizabeth Prize for Engineering in 2025—alongside fellow AI visionaries—honouring transformative contributions to computing, perception, and human-centred innovation.
  • Advocacy: Her advocacy spans diversity, education, and AI governance. She actively pushes for multidisciplinary, transparent approaches to technology that are supportive of human flourishing.

Theoretical Foundations and Leading Figures in Robotic Dexterity

The quest for human-level dexterity in machines draws on several fields—robotics, neuroscience, machine learning—and builds on the insights of leading theorists:

  • Rodney Brooks (subsumption architecture for mobile robots; founder of iRobot and Rethink Robotics): emphasised embodied intelligence, arguing that physical interaction is central and that autonomous robots must learn in the real world and adapt to uncertainty.
  • Yoshua Bengio, Geoffrey Hinton, Yann LeCun (deep learning pioneers who applied neural networks to perception): led the transformation in visual perception and sensorimotor learning; their current work extends to robotic learning but recognises that perception alone is insufficient for dexterity.
  • Pieter Abbeel (reinforcement learning and robotics, UC Berkeley): advanced algorithms for robotic manipulation, learning from demonstration, and real-world transfer; candid about the gulf between lab demonstrations and robust household robots.
  • Jean Ponce, Dieter Fox, Ken Goldberg (computer vision and robot manipulation): developed grasping algorithms and models for manipulation, while acknowledging that even “solved” tasks in simulation often fail in the unpredictable real world.
  • Dr. Fei-Fei Li (computer vision, spatial intelligence, embodied AI): argues that spatial understanding and physical intelligence are critical, and that world models must integrate perception, action, and context to approach human-level dexterity.
  • Demis Hassabis (DeepMind CEO; breakthroughs in deep reinforcement learning): AlphaZero and related systems have shown narrow superhuman performance, but the physical control and manipulation necessary for robotics remain unsolved.
  • Chris Atkeson (pioneer of humanoid and soft robotics): developed advanced dexterous hands and whole-body motion, while highlighting the vast gap between the best machines and human adaptability.

The Challenge: Why Robotics Remains “a Long Way to Go”

  • Embodiment: Unlike pure software, robots operate under real-world physical constraints. Variability in object geometry, materials, lighting, and external force must be mastered for consistent human-like manipulation.
  • Generalisation: A robot that succeeds at one task often fails catastrophically at another, even if superficially similar. Human hands, with sensory feedback and innate flexibility, effortlessly adapt.
  • World Modelling: Spatial intelligence—anticipating the consequences of actions, integrating visual, tactile, and proprioceptive data—is still largely unsolved. As Dr. Li notes, machines must “understand, navigate, and interact” with complex, dynamic environments.
  • Benchmarks and Community Efforts: The BEHAVIOR Challenge and open-source simulators aim to provide transparent, rigorous measurement and accelerate community progress, but there is consensus that true general dexterity is likely years—if not decades—away.

Conclusion: Where Theory Meets Practice

While AI and robotics have delivered astonishing advances in perception, narrowly focused automation, and simulation, the dexterity, adaptability, and common-sense reasoning required for robust, human-level robotic manipulation remain an unsolved grand challenge. Dr. Fei-Fei Li’s work and leadership define the state of the art—and set the aspirational vision for the next wave: embodied, spatially conscious AI, built with a profound respect for the complexity of human life and capability. Those who follow in her footsteps, across academia and industry, measure their progress not against hype or isolated demonstrations, but against the demanding reality of everyday human tasks.

read more
Quote: Dr. Fei-Fei Li – Stanford Professor – world-renowned authority in artificial intelligence

Quote: Dr. Fei-Fei Li – Stanford Professor – world-renowned authority in artificial intelligence

“That ability that humans have, it’s the combination of creativity and abstraction. I do not see today’s AI or tomorrow’s AI being able to do that yet.” – Dr. Fei-Fei Li – Stanford Professor – world-renowned authority in artificial intelligence

Dr. Li’s statement came amid wide speculation about the near-term prospects for artificial general intelligence (AGI) and superintelligence. While current AI already exceeds human capacity in specific domains (such as language translation, memory recall, and vast-scale data analysis), Dr. Li draws a line at creative abstraction—the human ability to form new concepts and theories that radically change our understanding of the world. She underscores that, despite immense data and computational resources, AI does not demonstrate the generative leap that allowed Newton to discover classical mechanics or Einstein to reshape physics with relativity. Dr. Li insists that, absent fundamental conceptual breakthroughs, neither today’s nor tomorrow’s AI can replicate this synthesis of creativity and abstract reasoning.

About Dr. Fei-Fei Li

Dr. Fei-Fei Li holds the title of Sequoia Capital Professor of Computer Science at Stanford University and is a world-renowned authority in artificial intelligence, particularly in computer vision and human-centric AI. She is best known for creating ImageNet, the dataset that triggered the deep learning revolution in computer vision—a cornerstone of modern AI systems. As the founding co-director of Stanford’s Institute for Human-Centered Artificial Intelligence (HAI), Dr. Li has consistently championed the need for AI that advances, rather than diminishes, human dignity and agency. Her research, with over 400 scientific publications, has pioneered new frontiers in machine learning, neuroscience, and their intersection.

Her leadership extends beyond academia: she served as chief scientist of AI/ML at Google Cloud, sits on international boards, and is deeply engaged in policy, notably as a special adviser to the UN. Dr. Li is acclaimed for her advocacy in AI ethics and diversity, notably co-founding AI4ALL, a non-profit enabling broader participation in the AI field. Often described as the “godmother of AI,” she is an elected member of the US National Academy of Engineering and the National Academy of Medicine. Her personal journey—from emigrating from Chengdu, China, to supporting her parents’ small business in New Jersey, to her trailblazing career—is detailed in her acclaimed 2023 memoir, The Worlds I See.

Remarks on Creativity, Abstraction, and AI: Theoretical Roots

The distinction Li draws—between algorithmic pattern-matching and genuine creative abstraction—addresses a foundational question in AI: What constitutes intelligence, and is it replicable in machines? This theme resonates through the works of several canonical theorists:

  • Alan Turing (1912–1954): Regarded as the father of computer science, Turing posed the question of machine intelligence in his pivotal 1950 paper, “Computing Machinery and Intelligence”. He proposed what we call the Turing Test: if a machine could converse indistinguishably from a human, could it be deemed intelligent? Turing hinted at the limits but also the theoretical possibility of machine abstraction.
  • Herbert Simon and Allen Newell: Pioneers of early “symbolic AI”, Simon and Newell framed intelligence as symbol manipulation; their experiments (the Logic Theorist and General Problem Solver) made some progress in abstract reasoning but found creative leaps elusive.
  • Marvin Minsky (1927–2016): Co-founder of the MIT AI Lab, Minsky believed creativity could in principle be mechanised, but anticipated it would require complex architectures that integrate many types of knowledge. His work, especially The Society of Mind, remained vital but speculative.
  • John McCarthy (1927–2011): While he named the field “artificial intelligence” and developed the LISP programming language, McCarthy was cautious about claims of broad machine creativity, viewing abstraction as an open challenge.
  • Geoffrey Hinton, Yann LeCun, Yoshua Bengio: Fathers of deep learning, these researchers demonstrated that neural networks can match or surpass humans in perception and narrow problem-solving but have themselves highlighted the gap between statistical learning and the ingenuity seen in human discovery.
  • Nick Bostrom: In Superintelligence (2014), Bostrom analysed risks and trajectories for machine intelligence exceeding humans, but acknowledged that qualitative leaps in creativity—paradigm shifts, theory building—remain a core uncertainty.
  • Gary Marcus: An outspoken critic of current AI, Marcus argues that without genuine causal reasoning and abstract knowledge, current models (including the most advanced deep learning systems) are far from truly creative intelligence.

Synthesis and Current Debates

Across these traditions, a consistent theme emerges: while AI has achieved superhuman accuracy, speed, and recall in structured domains, genuine creativity—the ability to abstract from prior knowledge to new paradigms—is still uniquely human. Dr. Fei-Fei Li, by foregrounding this distinction, not only situates herself within this lineage but also aligns her ongoing research on “large world models” with an explicit goal: to design AI tools that augment—but do not seek to supplant—human creative reasoning and abstract thought.

Her caution, rooted in both technical expertise and a broader philosophical perspective, stands as a rare check on techno-optimism. It articulates the stakes: as machine intelligence accelerates, the need to centre human capabilities, dignity, and judgement—especially in creativity and abstraction—becomes not just prudent but essential for responsibly shaping our shared future.

read more
Quote: Dr Eric Schmidt – Ex-Google CEO

Quote: Dr Eric Schmidt – Ex-Google CEO

“I worry a lot about … Africa. And the reason is: how does Africa benefit from [AI]? There’s obviously some benefit of globalisation, better crop yields, and so forth. But without stable governments, strong universities, major industrial structures – which Africa, with some exceptions, lacks – it’s going to lag.” – Dr Eric Schmidt – Former Google CEO

Dr Eric Schmidt’s observation stems from his experience at the highest levels of the global technology sector and his acute awareness of both the promise and the precariousness of the coming AI age. His warning about Africa’s risk of lagging in AI adoption and benefit is rooted in today’s uneven technological landscape and long-standing structural challenges facing the continent.

About Dr Eric Schmidt

Dr Eric Schmidt is one of the most influential technology executives of the 21st century. As CEO of Google from 2001 to 2011, he oversaw Google’s transformation from a Silicon Valley start-up into a global technology leader. Schmidt provided the managerial and strategic backbone that enabled Google’s explosive growth, product diversification, and a culture of robust innovation. After Google, he continued as Executive Chairman and Technical Advisor through Google’s restructuring into Alphabet, before transitioning to philanthropic and strategic advisory work. Notably, Schmidt has played significant roles in US national technology strategy, chairing the US National Security Commission on Artificial Intelligence and founding the bipartisan Special Competitive Studies Project, which advises on the intersections of AI, security, and economic competitiveness.

With a background encompassing leading roles at Sun Microsystems and Novell, and earlier research positions at Bell Labs and Xerox PARC, Schmidt’s career reflects deep immersion in technology and innovation. He is widely regarded as a strategic thinker on the global opportunities and risks of technology, regularly offering perspective on how AI, digital infrastructure, and national competitiveness are shaping the future economic order.

Context of the Quotation

Schmidt’s remark appeared during a high-level panel at the Future Investment Initiative (FII9), in conversation with Dr Fei-Fei Li of Stanford and Peter Diamandis. The discussion centred on “What Happens When Digital Superintelligence Arrives?” and explored the likely economic, social, and geopolitical consequences of rapid AI advancement.

In this context, Schmidt identified a core risk: that AI’s benefits will accrue unevenly across borders, amplifying existing inequalities. He emphasised that while powerful AI tools may drive exceptional economic value and efficiencies—potentially in the trillions of dollars—these gains are concentrated by network effects, investment, and infrastructure. Schmidt singled out Africa as particularly vulnerable: absent stable governance, strong research universities, or robust industrial platforms—critical prerequisites for technology absorption—Africa faces the prospect of deepening relative underdevelopment as the AI era accelerates. The comment reflects a broader worry in technology and policy circles: global digitisation is likely to amplify rather than repair structural divides unless deliberate action is taken.

Leading Theorists and Thinking on the Subject

The dynamics Schmidt describes are at the heart of an emerging literature on the “AI divide,” digital colonialism, and the geopolitics of AI. Prominent thinkers in these debates include:

  • Professor Fei-Fei Li
    A leading AI scientist, Dr Li has consistently framed AI’s potential as contingent on human-centred design and equitable access. She highlights the distinction between the democratisation of access (e.g., cheaper healthcare or education via AI) and actual shared prosperity—which hinges on local capacity, policy, and governance. Her work underlines that technical progress does not automatically result in inclusive benefit, validating Schmidt’s concerns.
  • Kate Crawford and Timnit Gebru
    Both have written extensively on the risks of algorithmic exclusion, surveillance, and the concentration of AI expertise within a handful of countries and firms. In particular, Crawford’s Atlas of AI and Gebru’s leadership in AI ethics foreground how global AI development mirrors deeper resource and power imbalances.
  • Nick Bostrom and Stuart Russell
    Their theoretical contributions address the broader existential and ethical challenges of artificial superintelligence, but they also underscore risks of centralised AI power—technically and economically.
  • Ndubuisi Ekekwe, Bitange Ndemo, and Nanjira Sambuli
    These African thought leaders and scholars examine how Africa can leapfrog in digital adoption but caution that profound barriers—structural, institutional, and educational—must be addressed for the continent to benefit from AI at scale.
  • Eric Schmidt himself has become a touchstone in policy/tech strategy circles, having co-chaired the US National Security Commission on Artificial Intelligence. The Commission’s reports warned of a bifurcated world where AI capabilities—and thus economic and security advantages—are ever more concentrated.

Structural Elements Behind the Quote

Schmidt’s remark draws attention to a convergence of factors:

  • Institutional robustness
    Long-term AI prosperity requires stable governments, responsive regulatory environments, and a track record of supporting investment and innovation. This is lacking in many, though not all, of Africa’s economies.
  • Strong universities and research ecosystems
    AI innovation is talent- and research-intensive. Weak university networks limit both the creation and absorption of advanced technologies.
  • Industrial and technological infrastructure
    A mature industrial base enables countries and companies to adapt AI for local benefit. The absence of such infrastructure often results in passive consumption of foreign technology, forgoing participation in value creation.
  • Network effects and tech realpolitik
    Advanced AI tools, data centres, and large-scale compute power are disproportionately located in a few advanced economies. The ability to partner with these “hyperscalers”—primarily in the US—shapes national advantage. Schmidt argues that regions which fail to make strategic investments or partnerships risk being left further behind.

Summary

Schmidt’s statement is not simply a technical observation but an acute geopolitical and developmental warning. It reflects current global realities where AI’s arrival promises vast rewards, but only for those with the foundational economic, political, and intellectual capital in place. For policy makers, investors, and researchers, the implication is clear: bridging the digital-structural gap requires not only technology transfer but also building resilient, adaptive institutions and talent pipelines that are locally grounded.

read more
Quote: Trevor McCourt – Extropic CTO

Quote: Trevor McCourt – Extropic CTO

“We need something like 10 terawatts in the next 20 years to make LLM systems truly useful to everyone… Nvidia would need to 100× output… You basically need to fill Nevada with solar panels to provide 10 terawatts of power, at a cost around the world’s GDP. Totally crazy.” – Trevor McCourt – Extropic CTO

Trevor McCourt, Chief Technology Officer and co-founder of Extropic, has emerged as a leading voice articulating a paradox at the heart of artificial intelligence advancement: the technology that promises to democratise intelligence across the planet may, in fact, be fundamentally unscalable using conventional infrastructure. His observation about the terawatt imperative captures this tension with stark clarity—a reality increasingly difficult to dismiss as speculative.

Who Trevor McCourt Is

McCourt brings a rare convergence of disciplinary expertise to his role. Trained in mechanical engineering at the University of Waterloo (graduating 2015) and holding advanced credentials from the Massachusetts Institute of Technology (2020), he combines rigorous physical intuition with deep software systems architecture. Prior to co-founding Extropic, McCourt worked as a Principal Software Engineer, establishing a track record of delivering infrastructure at scale: he designed microservices-based cloud platforms that improved deployment speed by 40% whilst reducing operational costs by 30%, co-invented a patented dynamic caching algorithm for distributed systems, and led open-source initiatives that garnered over 500 GitHub contributors.

This background—spanning mechanical systems, quantum computation, backend infrastructure, and data engineering—positions McCourt uniquely to diagnose what others in the AI space have overlooked: that energy is not merely a cost line item but a binding physical constraint on AI’s future deployment model.

Extropic, which McCourt co-founded alongside Guillaume Verdon (formerly a quantum technology lead at Alphabet’s X division), closed a $14.1 million Series Seed funding round in 2023, led by Kindred Ventures and backed by institutional investors including Buckley Ventures, HOF Capital, and OSS Capital. The company now stands at approximately 15 people distributed across integrated circuit design, statistical physics research, and machine learning—a lean team assembled to pursue what McCourt characterises as a paradigm shift in compute architecture.

The Quote in Strategic Context

McCourt’s assertion that “10 terawatts in the next 20 years” is required for universal LLM deployment, coupled with his observation that this would demand filling Nevada with solar panels at a cost approaching global GDP, represents far more than rhetorical flourish. It is the product of methodical back-of-the-envelope engineering calculation.

His reasoning unfolds as follows:

From Today’s Baseline to Mass Deployment:
A text-based assistant operating at today’s reasoning capability (approximating GPT-5-Pro performance) deployed to every person globally would consume roughly 20% of the current US electrical grid—approximately 100 gigawatts. This is not theoretical; McCourt derives this from first principles: transformer models consume roughly 2 × (parameters × tokens) floating-point operations; modern accelerators like Nvidia’s H100 operate at approximately 0.7 picojoules per FLOP; population-scale deployment implies continuous, always-on inference at scale.
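
To make the order of magnitude concrete, here is a minimal back-of-the-envelope sketch in Python. The 2 × (parameters × tokens) FLOP count and the ~0.7 picojoule-per-FLOP figure are taken from the passage above; the model size and per-person token rate are illustrative assumptions of ours rather than McCourt’s published inputs.

# Rough reproduction of the ~100 GW baseline under stated assumptions.
PJ_PER_FLOP = 0.7e-12          # ~0.7 picojoules per FLOP on an H100-class accelerator (from the text)
PARAMS = 2e12                  # assumed model size: 2 trillion parameters (illustrative)
TOKENS_PER_SEC_PER_USER = 5    # assumed always-on generation rate per person (illustrative)
POPULATION = 8e9               # roughly everyone on Earth

flops_per_token = 2 * PARAMS                       # ~2 FLOPs per parameter per generated token
joules_per_token = flops_per_token * PJ_PER_FLOP   # ~2.8 J per token with these numbers
power_gw = joules_per_token * TOKENS_PER_SEC_PER_USER * POPULATION / 1e9
print(f"Continuous power for population-scale inference: ~{power_gw:.0f} GW")
# With these assumptions the result lands near 100 GW, i.e. roughly 20% of a ~500 GW US grid.

Varying the assumed model size or usage rate moves the figure, but it is difficult to push it far below the 100-gigawatt order without assuming very light usage.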

Adding Modalities and Reasoning:
Upgrade that assistant to include video capability at just 1 frame per second (envisioning Meta-style augmented-reality glasses worn by billions), and the requirement grows to roughly 10× today’s grid. Enhance the reasoning capability to match models working on the ARC AGI benchmark—problems of human-level reasoning difficulty—and even the text assistant alone requires roughly 10× today’s entire US grid: around 5 terawatts. Push further to expert-level systems capable of solving International Mathematical Olympiad problems, and the requirement reaches 100× the current grid.
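
Expressed as multiples of today’s grid, the ladder of scenarios above can be tabulated in a few lines. The multipliers are the ones quoted; the ~500-gigawatt grid figure is inferred from the “100 GW ≈ 20%” baseline rather than stated directly by McCourt.

# Deployment scenarios as multiples of an assumed ~0.5 TW US grid.
US_GRID_TW = 0.5
scenarios = {
    "Text assistant for everyone (today's reasoning)": 0.2,  # ~100 GW baseline
    "Add video at 1 frame/sec (AR-glasses style)": 10,       # "roughly 10x the grid"
    "Text assistant at ARC-AGI-level reasoning": 10,          # ~5 TW
    "Expert systems at IMO level": 100,                       # ~100x the current grid
}
for name, grid_multiple in scenarios.items():
    print(f"{name:50s} ~{grid_multiple * US_GRID_TW:6.1f} TW")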

Economic Impossibility:
A single gigawatt data centre costs approximately $10 billion to construct. The infrastructure required for mass-market AI deployment rapidly enters the hundreds of trillions of dollars—approaching or exceeding global GDP. Nvidia’s current manufacturing capacity would itself require a 100-fold increase to support even McCourt’s more modest scenarios.
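
Taking the $10 billion-per-gigawatt construction figure above at face value, the capital requirement can be checked directly; the ~$110 trillion world-GDP figure used for comparison is a rounded assumption of ours.

COST_PER_GW_USD = 10e9         # ~$10bn per gigawatt of data-centre capacity (from the text)
WORLD_GDP_USD = 110e12         # rough world GDP, ~$110 trillion (assumed round figure)

for terawatts in (1, 5, 10, 50):
    cost = terawatts * 1_000 * COST_PER_GW_USD
    print(f"{terawatts:3d} TW of data centres: ~${cost / 1e12:,.0f} trillion "
          f"(~{cost / WORLD_GDP_USD:.1f}x world GDP)")

At 10 terawatts the bill is on the order of $100 trillion, which is the “cost around the world’s GDP” in McCourt’s original quote.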

Physical Reality Check:
Over the past 75 years, US grid capacity has grown remarkably consistently—a nearly linear expansion. Sam Altman’s public commitment to building one gigawatt of data centre capacity per week alone would require 3–5× the historical rate of grid growth. Credible plans for mass-market AI acceleration push this requirement into the terawatt range over two decades—a rate of infrastructure expansion that is not merely economically daunting but potentially physically impossible given resource constraints, construction timelines, and raw materials availability.
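
The buildout-rate comparison can be sketched the same way. The one-gigawatt-per-week pledge is from the passage above; the ~15 gigawatts-per-year historical average for US grid additions is an illustrative assumption consistent with the “nearly linear expansion” description, not a figure McCourt cites.

PLEDGED_GW_PER_YEAR = 52        # one gigawatt of data-centre capacity per week
HISTORICAL_GW_PER_YEAR = 15     # assumed long-run average US grid addition (illustrative)

print(f"Pledged buildout: ~{PLEDGED_GW_PER_YEAR / HISTORICAL_GW_PER_YEAR:.1f}x the assumed historical rate")

gw_per_year_for_10_tw = 10_000 / 20   # reaching 10 TW of new capacity within 20 years
print(f"10 TW over 20 years: ~{gw_per_year_for_10_tw:.0f} GW/year, "
      f"~{gw_per_year_for_10_tw / HISTORICAL_GW_PER_YEAR:.0f}x the assumed historical rate")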

McCourt’s conclusion: the energy path is not simply expensive; it is economically and physically untenable. The paradigm must change.

Intellectual Foundations: Leading Theorists in Energy-Efficient Computing and Probabilistic AI

Understanding McCourt’s position requires engagement with the broader intellectual landscape that has shaped thinking about computing’s physical limits and probabilistic approaches to machine learning.

Geoffrey Hinton—Pioneering Energy-Based Models and Probabilistic Foundations:
Few figures loom larger in the theoretical background to Extropic’s work than Geoffrey Hinton. Decades before the deep learning boom, Hinton developed foundational theory around Boltzmann machines and energy-based models (EBMs)—the conceptual framework that treats learning as the discovery and inference of complex probability distributions. His work posits that machine learning, at its essence, is about fitting a probability distribution to observed data and then sampling from it to generate new instances consistent with that distribution. Hinton’s recognition with the 2024 Nobel Prize in Physics for “foundational discoveries and inventions that enable machine learning with artificial neural networks” reflects the deep prescience of this probabilistic worldview. More than theoretical elegance, this framework points toward an alternative computational paradigm: rather than spending vast resources on deterministic matrix operations (the GPU model), a system optimised for efficient sampling from complex distributions would align computation with the statistical nature of intelligence itself.
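
As a minimal illustration of what “sampling from a complex distribution” means computationally, the toy below runs Gibbs sampling over a three-unit Boltzmann machine with hand-picked weights. It is a sketch of the general idea only; the weights are arbitrary and the code is neither Hinton’s formulation nor Extropic’s implementation.

import numpy as np

# Toy Boltzmann machine: E(s) = -0.5 * s.W.s - b.s over binary states s in {0,1}^3.
# Gibbs sampling updates one unit at a time with probability sigmoid(local field).
rng = np.random.default_rng(0)
W = np.array([[0.0, 2.0, -1.0],
              [2.0, 0.0,  1.0],
              [-1.0, 1.0, 0.0]])   # arbitrary symmetric weights, zero diagonal
b = np.array([0.1, -0.2, 0.3])     # arbitrary biases

s = rng.integers(0, 2, size=3).astype(float)
samples = []
for _ in range(10_000):
    for i in range(3):
        local_field = W[i] @ s + b[i]
        p_on = 1.0 / (1.0 + np.exp(-local_field))
        s[i] = 1.0 if rng.random() < p_on else 0.0
    samples.append(s.copy())

print("Empirical mean activation per unit:", np.mean(samples, axis=0))

The point is that learning and inference reduce to drawing correlated random states whose statistics match the model, which is the operation Extropic wants hardware to perform natively.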

Michael Frank—Physics of Reversible and Adiabatic Computing:
Michael Frank, a senior scientist now at Vaire (a near-zero-energy chip company), has spent decades at the intersection of physics and computing. His research programme, initiated at MIT in the 1990s and continued at the University of Florida, Florida State, and Sandia National Laboratories, focuses on reversible computing and adiabatic CMOS—techniques aimed at reducing the fundamental energy cost of information processing. Frank’s work addresses a deep truth: in conventional digital logic, information erasure is thermodynamically irreversible and expensive, dissipating energy as heat. By contrast, reversible computing minimises such erasure, thereby approaching theoretical energy limits set by physics rather than by engineering convention. Whilst Frank’s trajectory and Extropic’s diverge in architectural detail, both share the conviction that energy efficiency must be rooted in physical first principles, not merely in engineering optimisation of existing paradigms.

Yoshua Bengio and Chris Bishop—Probabilistic Learning Theory:
Leading researchers in deep generative modelling—including Bengio, Bishop, and others—have consistently advocated for probabilistic frameworks as foundational to machine learning. Their work on diffusion models, variational inference, and sampling-based approaches has legitimised the view that efficient inference is not about raw compute speed but about statistical appropriateness. This theoretical lineage underpins the algorithmic choices at Extropic: energy-based models and denoising thermodynamic models are not novel inventions but rather a return to first principles, informed by decades of probabilistic ML research.

Richard Feynman—Foundational Physics of Computing:
Though less directly cited in contemporary AI discourse, Feynman’s 1982 lectures on the physics of computation remain conceptually foundational. Feynman observed that computation’s energy cost is ultimately governed by physical law, not engineering ingenuity alone. His observations on reversibility and the thermodynamic cost of irreversible operations informed the entire reversible-computing movement and, by extension, contemporary efforts to align computation with physics rather than against it.

Contemporary Systems Thinkers (Sam Altman, Jensen Huang):
Counterintuitively, McCourt’s critique is sharpened by engagement with the visionary statements of industry leaders who have perhaps underestimated energy constraints. Altman’s commitment to building one gigawatt of data centre capacity per week, and Huang’s roadmaps for continued GPU scaling, have inadvertently validated McCourt’s concern: even the most optimistic industrial plans require infrastructure expansion at rates that collide with physical reality. McCourt uses their own projections as evidence for the necessity of paradigm change.

The Broader Strategic Narrative

McCourt’s remarks must be understood within a convergence of intellectual and practical pressures:

The Efficiency Plateau:
Digital logic efficiency, measured as energy per operation, has stalled. Transistor capacitance plateaued around the 10-nanometre node; operating voltage is thermodynamically bounded near 300 millivolts. Architectural optimisations (quantisation, sparsity, tensor cores) improve throughput but do not overcome these physical barriers. The era of “free lunch” efficiency gains from Moore’s Law miniaturisation has ended.
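
The plateau follows from the dynamic switching energy of CMOS, roughly E = ½·C·V² per transition. In the sketch below the capacitance value is an illustrative placeholder of ours; the point is simply that once C stops shrinking and V is pinned near ~0.3 volts, energy per operation stops falling.

C_NODE_FARADS = 1e-16    # assumed ~0.1 fF effective switched capacitance (illustrative)

for volts in (0.9, 0.7, 0.5, 0.3):
    energy_attojoules = 0.5 * C_NODE_FARADS * volts**2 * 1e18
    print(f"V = {volts:.1f} V  ->  ~{energy_attojoules:.1f} aJ per switching event")
# Dropping from 0.9 V to the ~0.3 V floor buys less than a 10x gain, and nothing beyond it.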

Model Complexity Trajectory:
Whilst small models have improved at fixed benchmarks, frontier AI systems—those solving novel, difficult problems—continue to demand exponentially more compute. AlphaGo required ~1 exaFLOP per game; AlphaCode required ~100 exaFLOPs per coding problem; the system solving International Mathematical Olympiad problems required ~100,000 exaFLOPs. Model miniaturisation is not offsetting capability ambitions.

Market Economics:
The AI market has attracted trillions in capital precisely because the economic potential is genuine and vast. Yet this same vastness creates the energy paradox: truly universal AI deployment would consume resources incompatible with global infrastructure and economics. The contradiction is not marginal; it is structural.

Extropic’s Alternative:
Extropic proposes to escape this local minimum through radical architectural redesign. Thermodynamic Sampling Units (TSUs)—circuits architected as arrays of probabilistic sampling cells rather than multiply-accumulate units—would natively perform the statistical operations that diffusion and generative AI models require. Early simulations suggest energy efficiency improvements of 10,000× on simple benchmarks compared to GPU-based approaches. Hybrid algorithms combining TSUs with compact neural networks on conventional hardware could deliver intermediate gains whilst establishing a pathway toward a fundamentally different compute paradigm.
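
For a flavour of what an array of probabilistic sampling cells does, in contrast to multiply-accumulate units, the toy below simulates cells that each emit a noisy bit biased by the state of their neighbours. The couplings are random toy values and the scheme is our illustration, not Extropic’s TSU design.

import numpy as np

rng = np.random.default_rng(7)

def sampling_cell(local_field: float) -> int:
    """One probabilistic cell: emits a noisy bit whose bias is set by its input.
    In hardware the randomness would come from thermal noise; here it is simulated."""
    p_one = 1.0 / (1.0 + np.exp(-local_field))
    return int(rng.random() < p_one)

n = 8
couplings = rng.normal(scale=0.8, size=(n, n))   # arbitrary toy couplings
couplings = (couplings + couplings.T) / 2
np.fill_diagonal(couplings, 0.0)

state = rng.integers(0, 2, size=n)
for _ in range(500):                             # repeated sweeps let the array settle
    for i in range(n):
        state[i] = sampling_cell(couplings[i] @ (2 * state - 1))

print("One sample drawn by the cell array:", state)

The array’s output is a sample rather than a deterministic result, which is the primitive that diffusion-style generative models currently spend most of their compute emulating on GPUs.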

Why This Matters Now

The quote’s urgency reflects a dawning recognition across technical and policy circles that energy is not a peripheral constraint but the central bottleneck determining AI’s future trajectory. The choice, as McCourt frames it, is stark: either invest in a radically new architecture, or accept that mass-market AI remains perpetually out of reach—a luxury good confined to the wealthy and powerful rather than a technology accessible to humanity.

This is not mere speculation or provocation. It is engineering analysis grounded in physics, economics, and historical precedent, articulated by someone with the technical depth to understand both the problem and the extraordinary difficulty of solving it.

read more
Quote: Trevor McCourt – Extropic CTO

Quote: Trevor McCourt – Extropic CTO

“If you upgrade that assistant to see video at 1 FPS – think Meta’s glasses… you’d need to roughly 10× the grid to accommodate that for everyone. If you upgrade the text assistant to reason at the level of models working on the ARC AGI benchmark… even just the text assistant would require around a 10× of today’s grid.” – Trevor McCourt – Extropic CTO

The quoted remark by Trevor McCourt, CTO of Extropic, underscores a crucial bottleneck in artificial intelligence scaling: energy demand is growing faster than compute efficiency, threatening the viability of universal, always-on AI. The quote translates hard technical extrapolation into plain language. If every person had a vision-capable assistant running at just 1 video frame per second, or if text models reasoned at the level of the ARC AGI benchmark, global energy infrastructure would need to multiply several times over, reaching many terawatts of demand: figures that quickly become economically and physically absurd.

Backstory and Context of the Quote & Trevor McCourt

Trevor McCourt is the co-founder and Chief Technology Officer of Extropic, a pioneering company targeting the energy barrier that limits mass-market AI deployment. With multidisciplinary roots—a blend of mechanical engineering and quantum computation honed at the University of Waterloo and the Massachusetts Institute of Technology—McCourt contributed to projects at Google before moving to the hardware–software frontier. His leadership at Extropic is defined by a willingness to challenge orthodoxy and to champion a first-principles, physics-driven approach to AI compute architecture.

The quote arises from a keynote on how present-day large language models and diffusion AI models are fundamentally energy-bound. McCourt’s analysis is rooted in practical engineering, economic realism, and deep technical awareness: the computational demands of state-of-the-art assistants vastly outstrip what today’s grid can provide if deployed at population scale. This is not merely an engineering or machine learning problem, but a macroeconomic and geopolitical dilemma.

Extropic proposes to address this impasse with Thermodynamic Sampling Units (TSUs)—a new silicon compute primitive designed to natively perform probabilistic inference, consuming orders of magnitude less power than GPU-based digital logic. Here, McCourt follows the direction set by energy-based probabilistic models and advances it both in hardware and algorithm.

McCourt’s career has been defined by innovation at the technical edge: microservices in cloud environments, patented improvements to dynamic caching in distributed systems, and research in scalable backend infrastructure. This breadth, from academic research to commercial deployment, enables his holistic critique of the GPU-centred AI paradigm, as well as his leadership at Extropic’s deep technology startup.

Leading Theorists & Influencers in the Subject

Several waves of theory and practice converge in McCourt’s and Extropic’s work:

1. Geoffrey Hinton (Energy-Based and Probabilistic Models):
Long before deep learning’s mainstream embrace, Hinton’s foundational work on Boltzmann machines and energy-based models explored the idea of learning and inference as sampling from complex probability distributions. These early probabilistic paradigms anticipated both the difficulties of scaling and the algorithmic challenges that underlie today’s generative models. Hinton’s recognition—including the 2024 Nobel Prize in Physics for foundational work on neural networks and Boltzmann machines—cements his stature as a theorist whose ideas underpin Extropic’s approach.

2. Michael Frank (Reversible Computing):
Frank is a prominent physicist in reversible and adiabatic computing, having led major advances at MIT, Sandia National Laboratories, and elsewhere. His research investigates how the physics of computation can reduce the fundamental energy cost of computation—work directly relevant to Extropic’s mission. Frank’s focus on low-energy information processing provides the conceptual ground in which approaches like TSUs can take root.

3. Chris Bishop & Yoshua Bengio (Probabilistic Machine Learning):
Leaders like Bishop and Bengio have shaped the field’s probabilistic foundations, advocating both for deep generative models and for the practical co-design of hardware and algorithms. Their research has stressed the need to reconcile statistical efficiency with computational tractability—a tension at the core of Extropic’s narrative.

4. Alan Turing & John von Neumann (Foundations of Computing):
While not direct contributors to modern machine learning, the legacies of Turing and von Neumann persist in every conversation about alternative architectures and the physical limits of computation. The post-von Neumann and post-Turing trajectory, with a return to analogue, stochastic, or sampling-based circuitry, is directly echoed in Extropic’s work.

5. Recent Industry Visionaries (e.g., Sam Altman, Jensen Huang):
Contemporary leaders in the AI infrastructure space—such as Altman of OpenAI and Huang of Nvidia—have articulated the scale required for AGI and the daunting reality of terawatt-scale compute. Their business strategies rely on the assumption that improved digital hardware will be sufficient, a view McCourt contests with data and physical models.

Strategic & Scientific Context for the Field

  • Core problem: AI’s energy demand is scaling non-linearly—mass-market AI could consume a significant fraction, or even multiples, of the entire global grid if naively scaled with today’s architectures.
  • Physics bottlenecks: Improvements in digital logic are limited by physical constants: capacitance, voltage, and the energy required for irreversible computation. Digital logic has plateaued at the 10nm node.
  • Algorithmic evolution: Traditional deep learning is rooted in deterministic matrix computations, but the true statistical nature of intelligence calls for sampling from complex distributions—as foregrounded in Hinton’s work and now implemented in Extropic’s TSUs.
  • Paradigm shift: McCourt and contemporaries argue for a transition to native hardware–software co-design where the core computational primitive is no longer the multiply–accumulate (MAC) operation, but energy-efficient probabilistic sampling.

Summary Insight

Trevor McCourt anchors his cautionary prognosis for AI’s future in rigorous cross-disciplinary insight—from physical hardware limits to probabilistic learning theory. Because he combines his own engineering prowess with the legacy of foundational theorists and contemporary thinkers, his perspective is not simply one of warning but also one of opportunity: a new generation of probabilistic, thermodynamically inspired computers could rewrite the energy economics of artificial intelligence, making “AI for everyone” plausible—without grid-scale insanity.

read more
Quote: Alex Karp – Palantir CEO

Quote: Alex Karp – Palantir CEO

“The idea that chips and ontology is what you want to short is batsh*t crazy.” – Alex Karp – Palantir CEO

Alex Karp, co-founder and CEO of Palantir Technologies, delivered the now widely-circulated statement, “The idea that chips and ontology is what you want to short is batsh*t crazy,” in response to famed investor Michael Burry’s high-profile short positions against both Palantir and Nvidia. This sharp retort came at a time when Palantir, an enterprise software and artificial intelligence (AI) powerhouse, had just reported record earnings and was under intense media scrutiny for its meteoric stock rise and valuation.

Context of the Quote

The remark was made in early November 2025 during a CNBC interview, following public disclosures that Michael Burry—of “The Big Short” fame—had taken massive short positions in Palantir and Nvidia, two companies at the heart of the AI revolution. Burry’s move, reminiscent of his contrarian bets during the 2008 financial crisis, was interpreted by the market as both a challenge to the soaring “AI trade” and a critique of the underlying economics fueling the sector’s explosive growth.

Karp’s frustration was palpable: not only was Palantir producing what he described as “anomalous” financial results—outpacing virtually all competitors in growth, cash flow, and customer retention—but it was also emerging as the backbone of data-driven operations across government and industry. For Karp, Burry’s short bet went beyond traditional market scepticism; it targeted firms, products (“chips” and “ontology”—the foundational hardware for AI and the architecture for structuring knowledge), and business models proven to be both technically indispensable and commercially robust. Karp’s rejection of the “short chips and ontology” thesis underscores his belief in the enduring centrality of the technologies underpinning the modern AI stack.

Backstory and Profile: Alex Karp

Alex Karp stands out as one of Silicon Valley’s true iconoclasts:

  • Background and Education: Born in New York City in 1967, Karp holds a philosophy degree from Haverford College, a JD from Stanford, and a PhD in social theory from Goethe University Frankfurt, where he studied under and wrote about the influential philosopher Jürgen Habermas. This rare academic pedigree—blending law, philosophy, and critical theory—deeply informs both his contrarian mindset and his focus on the societal impact of technology.
  • Professional Arc: Before founding Palantir in 2004 with Peter Thiel and others, Karp had forged a career in finance, running the London-based Caedmon Group. At Palantir, he crafted a unique culture and business model, combining a wellness-oriented, sometimes spiritual corporate environment with the hard-nosed delivery of mission-critical systems for Western security, defence, and industry.
  • Leadership and Philosophy: Karp is known for his outspoken, unconventional leadership. Unafraid to challenge both Silicon Valley’s libertarian ethos and what he views as the groupthink of academic and financial “expert” classes, he publicly identifies as progressive—yet separates himself from establishment politics, remaining both a supporter of the US military and a critic of mainstream left and right ideologies. His style is at once brash and philosophical, combining deep scepticism of market orthodoxy with a strong belief in the capacity of technology to deliver real-world, not just notional, value.
  • Palantir’s Rise: Under Karp, Palantir grew from a niche contractor to one of the world’s most important data analytics and AI companies. Palantir’s products are deeply embedded in national security, commercial analytics, and industrial operations, making the company essential infrastructure in the rapidly evolving AI economy.

Theoretical Background: ‘Chips’ and ‘Ontology’

Karp’s phrase pairs two of the foundational concepts in modern AI and data-driven enterprise:

  • Chips: Here, “chips” refers specifically to advanced semiconductors (such as Nvidia’s GPUs) that provide the computational horsepower essential for training and deploying cutting-edge machine learning models. The AI revolution is inseparable from advances in chip design, leading to historic demand for high-performance hardware.
  • Ontology: In computer and information science, “ontology” describes the formal structuring and categorising of knowledge—making data comprehensible, searchable, and actionable by algorithms. Robust ontologies enable organisations to unify disparate data sources, automate analytical reasoning, and achieve the “second order” efficiencies of AI at scale; a minimal software sketch of the idea follows this list.
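
For readers less familiar with the term, here is a minimal sketch of an ontology as software sees it: typed entities plus named relations that can be queried. The entities and relations below are invented for illustration and have no connection to Palantir’s actual ontology tooling.

# Toy ontology: typed entities and relations ("triples"), queryable by simple rules.
entities = {
    "acme_corp":  {"type": "Organisation"},
    "jane_doe":   {"type": "Person"},
    "invoice_17": {"type": "Document"},
}

triples = [
    ("jane_doe",   "works_for",   "acme_corp"),
    ("invoice_17", "issued_by",   "acme_corp"),
    ("invoice_17", "approved_by", "jane_doe"),
]

def related(subject, predicate):
    """Return every object linked to `subject` by `predicate`."""
    return [o for s, p, o in triples if s == subject and p == predicate]

print(related("invoice_17", "issued_by"))                        # ['acme_corp']
print(entities[related("invoice_17", "approved_by")[0]]["type"])  # 'Person'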

Leading theorists in the domain of ontology and AI include:

  • John McCarthy: A founder of artificial intelligence, McCarthy’s foundational work on formal logic and semantics laid groundwork for modern ontological structures in AI.
  • Tim Berners-Lee: Creator of the World Wide Web, Berners-Lee championed the Semantic Web and the structuring of knowledge via ontologies—making data machine-readable, a capability widely regarded as indispensable for AI’s next leap.
  • Thomas Gruber: Known for his widely cited definition of ontology in AI as “a specification of a conceptualisation,” Gruber’s research shaped the field’s approach to standardising knowledge representations for complex applications.

In the chip space, pioneering figures include:

  • Jensen Huang: As CEO and co-founder of Nvidia, Huang drove the company’s transformation from graphics to AI acceleration, cementing the centrality of chips as the hardware substrate for everything from generative AI to advanced analytics.
  • Gordon Moore and Robert Noyce: Their early explorations in semiconductor fabrication set the stage for the exponential hardware progress that enabled the modern AI era.

Insightful Context for the Modern Market Debate

The “chips and ontology” remark reflects a deep divide in contemporary technology investing:

  • On one side, sceptics like Burry see signs of speculative excess, reminiscent of prior bubbles, and bet against companies with high valuations—even when those companies dominate core technologies fundamental to AI.
  • On the other, leaders like Karp argue that while the broad “AI trade” risks pockets of overvaluation, the engine—the computational hardware (chips) and data-structuring logic (ontology)—are not just durable, but irreplaceable in the digital economy.

With Palantir and Nvidia at the centre of the current AI-driven transformation, Karp’s comment captures not just a rebuttal to market short-termism, but a broader endorsement of the foundational technologies that define the coming decade. The value of “chips and ontology” is, in Karp’s eyes, anchored not in market narrative but in empirical results and business necessity—a perspective rooted in a unique synthesis of philosophy, technology, and radical pragmatism.

read more
Quote: Fyodor Dostoevsky – Russian novelist, essayist and journalist

Quote: Fyodor Dostoevsky – Russian novelist, essayist and journalist

“A man who lies to himself, and believes his own lies becomes unable to recognize truth, either in himself or in anyone else, and he ends up losing respect for himself and for others. When he has no respect for anyone, he can no longer love, and, in order to divert himself, having no love in him, he yields to his impulses, indulges in the lowest forms of pleasure, and behaves in the end like an animal. And it all comes from lying – lying to others and to yourself.” – Fyodor Dostoevsky – Russian novelist, essayist and journalist

Fyodor Mikhailovich Dostoevsky (November 11, 1821 – February 9, 1881) was a Russian novelist, essayist, and journalist who explored the depths of the human psyche with unflinching honesty. Born in Moscow to a family of modest means, Dostoevsky’s early life was marked by the emotional distance of his parents and an eventual tragedy when his father was murdered. He trained as a military engineer but pursued literature with relentless ambition, achieving early success with novels such as Poor Folk and The Double.

Dostoevsky’s life took a dramatic turn in 1849 when he was arrested for participating in a radical intellectual group. Sentenced to death, he faced a mock execution before his sentence was commuted to four years of hard labor in Siberia followed by military service. This harrowing experience, combined with his life among Russia’s poor, profoundly shaped his worldview and writing. His later years were marked by personal loss—the deaths of his first wife and his brother—and financial hardship, yet he produced some of literature’s greatest works during this time, including Crime and Punishment, The Idiot, Devils, and The Brothers Karamazov.

Dostoevsky’s writings are celebrated for their psychological insight and existential depth. He scrutinized themes of morality, free will, faith, and the consequences of self-deception—topics that continue to resonate in philosophy, theology, and modern psychology. His funeral drew thousands, reflecting his status as a national hero and one of Russia’s most influential thinkers.

Context of the Quote

The quoted passage is widely attributed to Dostoevsky, most notably appearing in The Brothers Karamazov, his final and perhaps most philosophically ambitious novel. The novel, published in serial form shortly before his death, wrestles with questions of faith, doubt, and the consequences of living a lie.

The quote is spoken by the Elder Zosima, a wise and compassionate monk in the novel. Zosima’s teachings in The Brothers Karamazov frequently address the dangers of self-deception and the importance of spiritual and moral honesty. In this passage, Dostoevsky is warning that lying to oneself is not merely a moral failing, but a fundamental corruption of perception and being. The progression—from dishonesty to self-deception, to the loss of respect for oneself and others, and ultimately to the decay of love and humanity—paints a stark picture of spiritual decline.

This theme is central to Dostoevsky’s work: characters who deceive themselves often spiral into psychological and moral crises. Dostoevsky saw truth—even when painful—as a prerequisite for authentic living. His novels repeatedly show how lies, whether to oneself or others, lead to alienation, suffering, and a loss of authentic connection.

Leading Theorists on Self-Deception

While Dostoevsky is renowned in literature for his treatment of self-deception, the theme has also been explored by philosophers, psychologists, and sociologists. Below is a brief overview of leading theorists and their contributions:

Philosophers

  • Søren Kierkegaard (1813–1855): The Danish philosopher explored the idea of existential self-deception, particularly in The Sickness Unto Death, where he describes how humans avoid the despair of being true to themselves by living inauthentic lives, what he calls “despair in weakness.”
  • Jean-Paul Sartre (1905–1980): In Being and Nothingness, Sartre popularized the concept of “bad faith” (mauvaise foi), the act of deceiving oneself to avoid the anxiety of freedom and responsibility. Sartre’s ideas are often seen as a philosophical counterpart to Dostoevsky’s literary explorations.
  • Friedrich Nietzsche (1844–1900): Nietzsche’s concepts of ressentiment and the “will to power” also touch on self-deception, particularly how individuals and societies construct false narratives to justify their weaknesses or desires.

Psychologists

  • Sigmund Freud (1856–1939): Freud introduced the idea of defence mechanisms, such as denial and rationalization, as ways the psyche protects itself from uncomfortable truths—essentially systematizing the process of self-deception.
  • Donald Winnicott (1896–1971): The psychoanalyst discussed the “false self,” a persona developed to comply with external demands, often leading to inner conflict and emotional distress.
  • Erich Fromm (1900–1980): Fromm, like Dostoevsky, examined how modern society encourages escape from freedom and the development of “automaton conformity,” where individuals conform to avoid anxiety and uncertainty.

Modern Thinkers

  • Dan Ariely (b. 1967): The behavioural economist has shown experimentally how dishonesty often begins with small, self-serving lies that gradually erode ethical boundaries.
  • Robert Trivers (b. 1943): The evolutionary biologist proposed that self-deception evolved as a strategy to better deceive others, which ironically can make personal delusions more convincing.

Legacy and Insight

Dostoevsky’s insights into the dangers of self-deception remain remarkably relevant today. His work, together with that of philosophers and psychologists, invites reflection on the necessity of honesty—not just to others, but to oneself—for psychological health and authentic living. The consequences of failing this honesty, as Dostoevsky depicts, are not merely moral, but existential: they impact our ability to respect, love, and ultimately, to live fully human lives.

By placing this quote in context, we see not only the literary brilliance of Dostoevsky but also the enduring wisdom of his diagnosis of the human condition—a call to self-awareness that echoes through generations and disciplines.

read more
Quote: Dee Hock

Quote: Dee Hock

“An organisation, no matter how well designed, is only as good as the people who live and work in it.” – Dee Hock

read more
Quote: James Cash Penney

Quote: James Cash Penney

“The keystone of successful business is cooperation. Friction retards progress.” – James Cash Penney

read more
Quote: Paul J Meyer

Quote: Paul J Meyer

“Communication – the human connection – is the key to personal and career success.” – Paul J. Meyer

read more
Quote: Beverly Sills

Quote: Beverly Sills

“You may be disappointed if you fail, but you are doomed if you don’t try.” – Beverly Sills

read more
Quote: Marc Benioff

Quote: Marc Benioff

“Innovation is not a destination; it’s a journey.” – Marc Benioff

read more
