News and Tools

Breaking Business News

Our selection of the top business news sources on the web.

Quote: Sam Walton – American retail pioneer

“Great ideas come from everywhere if you just listen and look for them. You never know who’s going to have a great idea.” – Sam Walton – American retail pioneer

This quote epitomises Sam Walton’s core leadership principle—openness to ideas from all levels of an organisation. Walton, the founder of Walmart and Sam’s Club, was known for his relentless focus on operational efficiency, cost leadership, and, crucially, a culture that actively valued contributions from employees at every tier.

Walton’s approach stemmed from his own lived experience. Born in 1918 in rural Oklahoma, he grew up during the Great Depression—a time that instilled a profound respect for hard work and creative problem-solving. After service in the US Army, he managed a series of Ben Franklin variety stores. Denied the opportunity to pilot a new discount retail model by his franchisor, Walton struck out on his own, opening the first Walmart in Rogers, Arkansas in 1962, funded largely with personally borrowed money and driven by relentless ambition.

From the outset, Walton positioned himself as a learner—famously travelling across the United States to observe competitors and often spending time on the shop floor listening to the insights of front-line staff and customers. He believed valuable ideas could emerge from any source—cashiers, cleaners, managers, or suppliers—and his instinct was to capitalise on this collective intelligence.

His management style, shaped by humility and a drive to democratise innovation, helped Walmart scale from a single store to the world’s largest retailer by the early 1990s. The company’s relentless growth and robust internal culture were frequently attributed to Walton’s ability to source improvements and innovations bottom-up rather than solely relying on top-down direction.

About Sam Walton

Sam Walton (1918–1992) was an American retail pioneer who, from modest beginnings, changed global retailing. His vision for Walmart was centred on three guiding principles:

  • Offering low prices for everyday goods.
  • Maintaining empathetic customer service.
  • Cultivating a culture of shared ownership and continual improvement through employee engagement.

Despite his immense success and wealth, Walton was celebrated for his modesty—driving a used pickup, wearing simple clothes, and living in the same town where his first store opened. He ultimately built a business empire that, by 1992, encompassed over 2,000 stores and employed more than 380,000 people.

Leading Theorists Related to the Subject Matter

Walton’s quote and philosophy connect to three key schools of thought in innovation and management theory:

1. Peter Drucker
Peter Drucker, often called the father of modern management, held that leaders should remain closely connected to their organisations and draw on the intelligence of the whole workforce to inform decision-making. Drucker taught that innovation is an organisational discipline, not the exclusive preserve of senior leadership or R&D specialists.

2. Henry Chesbrough
Chesbrough developed the concept of open innovation, which posits that breakthrough ideas often originate outside a company’s traditional boundaries. He argued that organisations should purposefully encourage inflow and outflow of knowledge to accelerate innovation and create value, echoing Walton’s insistence that great ideas can (and should) come from anywhere.

3. Simon Sinek
In his influential work Start with Why, Sinek explores the notion that transformational leaders elicit deep engagement and innovative thinking by grounding teams in purpose (“Why”). Sinek argues that companies embed innovation in their DNA when leaders empower all employees to contribute to improvement and strategic direction.

In summary:

  • Peter Drucker: Core idea: broad-based engagement with the intelligence of the whole workforce. Relevance to Walton’s approach: his direct engagement with staff.
  • Henry Chesbrough: Core idea: open innovation, with ideas flowing in and out of the organisation. Relevance to Walton’s approach: his receptivity to ideas beyond the hierarchy.
  • Simon Sinek: Core idea: purpose-based leadership for innovation and loyalty. Relevance to Walton’s approach: his mission-driven, inclusive ethos.

Additional Relevant Thinkers and Concepts

  • Clayton Christensen: In The Innovator’s Dilemma, he highlights the role of disruptive innovation, which is frequently initiated by those closest to the customer or the front line, not at the corporate pinnacle.
  • Eric Ries: In The Lean Startup, Ries argues that fast feedback and agile learning from the ground up enable organisations to innovate ahead of competitors—a direct parallel to Walton’s method of sourcing and testing ideas rapidly in store environments.

Sam Walton’s lasting impact is not just Walmart’s size, but the conviction that listening widely—to employees, customers, and the broader community—unlocks the innovations that fuel lasting competitive advantage. This belief is increasingly echoed in modern leadership thinking and remains foundational for organisations hoping to thrive in a fast-changing world.

Quote: Dr Eric Schmidt – Ex-Google CEO

“The win will be teaming between a human and their judgment and a supercomputer and what it can think.” – Dr Eric Schmidt – Former Google CEO

Dr Eric Schmidt is recognised globally as a principal architect of the modern digital era. He served as CEO of Google from 2001 to 2011, guiding its evolution from a fast-growing startup into a cornerstone of the tech industry. His leadership was instrumental in scaling Google’s infrastructure, accelerating product innovation, and instilling the data-driven culture that underpins its algorithms and search technologies. After stepping down as CEO, Schmidt remained pivotal as Executive Chairman and later as Technical Advisor, shepherding Google’s transition to Alphabet and advocating for long-term strategic initiatives in AI and global connectivity.

Schmidt’s influence extends well beyond corporate leadership. He has played policy-shaping roles at the highest levels, including chairing the US National Security Commission on Artificial Intelligence and advising multiple governments on technology strategy. His career is marked by a commitment to both technical progress and the responsible governance of innovation, positioning him at the centre of debates on AI’s promises, perils, and the necessity of human agency in the face of accelerating machine intelligence.

Context of the Quotation: Human–AI Teaming

Schmidt’s statement emerged during high-level discussions about the trajectory of AI, particularly in the context of autonomous systems, advanced agents, and the potential arrival of superintelligent machines. Rather than portraying AI as a force destined to replace humans, Schmidt advocates a model wherein the greatest advantage arises from joint endeavour: humans bring creativity, ethical discernment, and contextual understanding, while supercomputers offer vast capacity for analysis, pattern recognition, and iterative reasoning.

This principle is visible in contemporary AI deployments. For example:

  • In drug discovery, AI systems can screen millions of molecular variants in a day, but strategic insights and hypothesis generation depend on human researchers.
  • In clinical decision-making, AI augments the observational scope of physicians—offering rapid, precise diagnoses—but human judgement is essential for nuanced cases and values-driven choices.
  • Schmidt points to future scenarios where “AI agents” conduct scientific research, write code by natural-language command, and collaborate across domains, yet require human partnership to set objectives, interpret outcomes, and provide oversight.
  • He underscores that autonomous AI agents, while powerful, must remain under human supervision, especially as they begin to develop their own procedures and potentially opaque modes of communication.

Underlying this vision is a recognition: AI is a multiplier, not a replacement, and the best outcomes will couple human judgement with machine cognition.

Relevant Leading Theorists and Critical Backstory

This philosophy of human–AI teaming aligns with and is actively debated by several leading theorists:

  • Stuart Russell
    Professor at UC Berkeley, Russell is renowned for his work on human-compatible AI. He contends that the long-term viability of artificial intelligence requires that systems be designed to understand and comply with human preferences and values. Russell has championed the view that human oversight and interpretability are non-negotiable as AI systems become more capable and autonomous.
  • Fei-Fei Li
    Stanford Professor and co-founder of AI4ALL, Fei-Fei Li is a major advocate for “human-centred AI.” Her research highlights that AI should augment human potential, not supplant it, and she stresses the critical importance of interdisciplinary collaboration. She is a proponent of AI systems that foster creativity, support decision-making, and preserve agency and dignity.
  • Demis Hassabis
    Founder and CEO of DeepMind, Hassabis’s group famously developed AlphaGo and AlphaFold. DeepMind’s work demonstrates the principle of human–machine teaming: AI systems solve previously intractable problems, such as protein folding, that can only be understood and validated with strong human scientific context.
  • Gary Marcus
    A prominent AI critic and academic, Marcus warns against overestimating current AI’s capacity for judgment and abstraction. He pursues hybrid models where symbolic reasoning and statistical learning are paired with human input to overcome the limitations of “black-box” models.
  • Eric Schmidt’s own contributions reflect active engagement with these paradigms, from his advocacy for AI regulatory frameworks to public warnings about the risks of unsupervised AI, including “unplugging” AI systems that operate beyond human understanding or control.

Structural Forces and Implications

Schmidt’s perspective is informed by several notable trends:

  • Expansion towards effectively infinite context windows: Models can now process millions of words and reason through intricate problems with humans guiding multi-step solutions, a paradigm shift for fields like climate research, pharmaceuticals, and engineering.
  • Proliferation of autonomous agents: AI agents capable of learning, experimenting, and collaborating independently across complex domains are rapidly becoming central; their effectiveness is maximised when humans set goals and interpret results.
  • Democratisation paired with concentration of power: As AI accelerates innovation, the risk of centralised control emerges; Schmidt calls for international cooperation and proactive governance to keep objectives aligned with human interests.
  • Chain-of-thought reasoning and explainability: Advanced models can simulate extended problem-solving, but meaningful solutions depend on human guidance, interpretation, and critical thinking.

Summary

Eric Schmidt’s quote sits at the intersection of optimistic technological vision and pragmatic governance. It reflects decades of strategic engagement with digital transformation, and echoes leading theorists’ consensus: the future of AI is collaborative, and its greatest promise lies in amplifying human judgment with unprecedented computational support. Realising this future will depend on clear policies, interdisciplinary partnership, and an unwavering commitment to ensuring technology remains a tool for human advancement—and not an unfettered automaton beyond our reach.

Quote: Dr. Fei-Fei Li – Stanford Professor – world-renowned authority in artificial intelligence

“I do think countries all should invest in their own human capital, invest in partnerships and invest in their own technological stack as well as the business ecosystem… I think not investing in AI would be macroscopically the wrong thing to do.” – Dr. Fei-Fei Li – Stanford Professor – world-renowned authority in artificial intelligence

The statement was delivered during a high-stakes panel discussion on artificial superintelligence, convened at the Future Investment Initiative in Riyadh, where nation-state leaders, technologists, and investors gathered to assess their strategic positioning in the emerging AI era. Her words strike at the heart of a dilemma facing governments worldwide: how to build national AI capabilities whilst avoiding the trap of isolationism, and why inaction would be economically and strategically untenable.

Context: The Geopolitical Stakes of AI Investment

The Historical Moment

Dr. Li’s statement comes at a critical juncture. By late 2024 and into 2025, artificial intelligence had transitioned from speculative technology to demonstrable economic driver. Estimates suggested AI could generate between $15 trillion and $20 trillion in economic value globally by 2030—a figure larger than the current gross domestic product of most nations. This windfall is not distributed evenly; rather, it concentrates among early movers with capital, infrastructure, and talent. The race is on, and the stakes are existential for national competitiveness, employment, and geopolitical influence.

In this landscape, a nation that fails to invest in AI capabilities risks profound economic displacement. Yet Dr. Li is equally clear: isolation is counterproductive. The most realistic path forward combines three pillars:

  • Human Capital: The talent to conceive, build, and deploy AI systems
  • Partnerships: Strategic alliances, particularly with leading technological ecosystems (the US hyperscalers, for instance)
  • Domestic Technological Infrastructure: The local research bases, venture capital, regulatory frameworks, and business ecosystems that enable sustained innovation

This is not a counsel of surrender to Silicon Valley hegemony, but rather a sophisticated argument about comparative advantage and integration within global technological networks.

Dr. Fei-Fei Li: The Person and Her Arc

Early Life and Foundational Values

Dr. Fei-Fei Li’s perspective is shaped by her personal trajectory. Born in Chengdu, China, she emigrated to the United States at age fifteen, settling in New Jersey where her parents ran a small business. This background infuses her thinking: she understands both the promise of technological mobility and the structural barriers that constrain developing economies. She obtained her undergraduate degree in physics from Princeton University in 1999, with high honours, before pursuing doctoral studies at the California Institute of Technology, where she worked across computer science, electrical engineering, and cognitive neuroscience, earning her PhD in 2005.

The ImageNet Revolution

In 2007, whilst at Princeton, Dr. Li embarked on a project that would reshape artificial intelligence. Observing that cognitive psychologist Irving Biederman estimated humans recognise approximately 30,000 object categories, Li conceived ImageNet: a massive, hierarchically organised visual database. Colleagues dismissed the scale as impractical. Undeterred, she led a team (including Princeton professor Kai Li and graduate students Jia Deng and Wei Dong) that leveraged Amazon Mechanical Turk to label over 14 million images across 22,000 categories.

By 2009, ImageNet was published. More critically, the team created the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), an annual competition that invited researchers worldwide to develop algorithms for image classification. This contest became the crucible in which modern deep learning was forged. When Geoffrey Hinton’s group achieved a breakthrough using convolutional neural networks in 2012, winning the competition by a decisive margin, the deep learning revolution was catalysed. ImageNet is now widely recognised as one of the three foundational forces, alongside algorithmic advances and large-scale compute, in the birth of modern AI.

What is instructive here is that Dr. Li’s contribution was not merely technical but infrastructural: she created a shared resource that democratised AI research globally. Academic groups from universities across continents—not just Silicon Valley—could compete on equal footing. This sensibility—that progress depends on enabling distributed talent—runs through her subsequent work.

Career Architecture and Strategic Leadership

Following her Princeton years, Dr. Li joined Stanford University in 2009, eventually becoming the Sequoia Capital Professor of Computer Science—a title of singular prestige. From 2013 to 2018, she directed Stanford’s Artificial Intelligence Lab (SAIL), one of the world’s premier research institutes. Her publications exceed 400 papers in top-tier venues, and she remains one of the most cited computer scientists of her generation.

During a sabbatical from Stanford (January 2017 to September 2018), Dr. Li served as Vice President and Chief Scientist of AI/ML at Google Cloud. Her mandate was to democratise AI technology, lowering barriers for businesses and developers—work that included advancing products like AutoML, which enabled organisations without deep AI expertise to deploy machine learning systems.

Upon returning to Stanford in 2019, she became the founding co-director of the Stanford Institute for Human-Centered Artificial Intelligence (HAI), an explicitly multidisciplinary initiative spanning computer science, social sciences, humanities, law, and medicine—all united by the conviction that AI must serve human flourishing, not vice versa.

Current Work and World Labs

Most recently, Dr. Li co-founded and serves as chief executive officer of World Labs, an AI company focused on spatial intelligence and generative world models. This venture extends her intellectual agenda: if large language models learn patterns over text, world models learn patterns over 3D environments, enabling machines to understand, simulate, and reason about physical and virtual spaces. For robotics, healthcare simulation, autonomous systems, and countless other domains, this represents the next frontier.

Recognition and Influence

Her standing is reflected in numerous accolades: election to the National Academy of Engineering, the National Academy of Medicine (2020), and the American Academy of Arts and Sciences (2021); the Intel Lifetime Achievement Innovation Award in 2023; and inclusion in Time magazine’s 100 Most Influential People in AI. She is colloquially known as the “Godmother of AI.” In 2023, she published a memoir, The Worlds I See: Curiosity, Exploration and Discovery at the Dawn of AI, which chronicles her personal journey and intellectual evolution.

Leading Theorists and Strategic Thinkers: The Landscape of AI and National Strategy

The backdrop to Dr. Li’s statement includes several strands of thought about technology, development, and national strategy:

Economic and Technological Diffusion

  • Erik Brynjolfsson and Andrew McAfee (The Second Machine Age; Machine, Platform, Crowd): These MIT researchers have articulated how technological revolutions create winners and losers, and that policy choices—not technology alone—determine whether gains are broadly shared. They underscore that without intentional intervention, automation and AI tend to concentrate wealth and opportunity.
  • Dani Rodrik (Harvard economist): Rodrik’s work on “premature deindustrialisation” and structural transformation highlights the risks faced by developing economies when technological progress accelerates faster than institutions can adapt. His analysis supports Dr. Li’s argument: countries must actively build capacity or risk being left behind.
  • Mariana Mazzucato (University College London): Mazzucato’s research on the entrepreneurial state emphasises that breakthrough innovations—including AI—depend on public investment in foundational research, education, and infrastructure. Her work buttresses the case for public and private sector partnership.

Artificial Intelligence and Cognition

  • Geoffrey Hinton, Yann LeCun, and Yoshua Bengio: The triumvirate of deep learning pioneers recognised that neural networks could scale to superhuman performance in perception and pattern recognition, yet have increasingly stressed that current approaches may be insufficient for general intelligence. Their candour about limitations supports a measured, long-term investment view.
  • Stuart Russell (UC Berkeley): Russell has been a prominent voice calling for AI safety and governance frameworks to accompany capability development. His framing aligns with Dr. Li’s insistence that human-centred values must guide AI research and deployment.

Geopolitics and Technology Competition

  • Michael Mazarr (RAND Corporation): Mazarr and colleagues have analysed great-power competition in emerging technologies, emphasising that diffusion of capability is inevitable but the pace and terms of diffusion are contestable. Nations that invest in talent pipelines and partnerships will sustain influence; those that isolate will atrophy.
  • Kai-Fu Lee: The Taiwanese-American venture capitalist and author (AI Superpowers) has articulated how the US and China are in a competitive race, but also how smaller nations and regions can position themselves through strategic partnerships and focus on applied AI problems relevant to their economies.
  • Eric Schmidt (former Google CEO): Schmidt, who participated in the same FII panel as Dr. Li, has emphasised that geopolitical advantage flows to nations with capital markets, advanced chip fabrication (such as Taiwan’s TSMC), and deep talent pools. Yet he has also highlighted pathways for other nations to benefit through partnerships and focused investment in particular domains.

Human-Centred Technology and Inclusive Growth

  • Timnit Gebru and Joy Buolamwini: These AI ethics researchers have exposed how AI systems can perpetuate bias and harm marginalised communities. Their work reinforces Dr. Li’s emphasis on human-centred design and inclusive governance. For developing nations, this implies that AI investment must account for local contexts, values, and risks of exclusion.
  • Turing Award recipients and foundational figures (such as Barbara Liskov on systems reliability, and Leslie Valiant on learning theory): Their sustained emphasis on rigour, safety, and verifiability underpins the argument that sustainable AI development requires not just speed but also deep technical foundations—something that human capital investment cultivates.

Development Economics and Technology Transfer

  • Paul Romer (Nobel laureate): Romer’s work on endogenous growth emphasises that ideas and innovation are the drivers of long-term prosperity. For developing nations, this implies that investment in research capacity, education, and institutional learning—not merely adopting foreign technologies—is essential.
  • Ha-Joon Chang: The heterodox development economist has critiqued narratives of “leapfrogging” technology. His argument suggests that nations building indigenous technological ecosystems—through domestic investment in research, venture capital, and entrepreneurship—are more resilient and capable of adapting innovations to local needs.

The Three Pillars: An Unpacking

Dr. Li’s framework is sophisticated precisely because it avoids two traps: technological nationalism (the fantasy that any nation can independently build world-leading AI from scratch) and technological fatalism (the resignation that small and medium-sized economies cannot compete).

Human Capital

The most portable, scalable asset a nation can develop is talent. This encompasses:

  • Education pipelines: From primary through tertiary education, with emphasis on mathematics, computer science, and critical thinking
  • Research institutions: Universities, national laboratories, and research councils capable of contributing to fundamental and applied AI knowledge
  • Retention and diaspora engagement: Policies to keep talented individuals from emigrating, and mechanisms to attract expatriate expertise
  • Diversity and inclusion: As Dr. Li has emphasised through her co-founding of AI4ALL (a nonprofit working to increase diversity in AI), innovation benefits from diverse perspectives and draws from broader talent pools

Partnerships

Rather than isolating, Dr. Li advocates for strategic alignment:

  • North-South partnerships: Developed nations’ hyperscalers and technology firms partnering with developing economies to establish data centres, training programmes, and applied research initiatives. Saudi Arabia and the UAE have pursued this model successfully
  • South-South cooperation: Peer learning and knowledge exchange among developing nations facing similar challenges
  • Academic and research collaborations: Open-source tools, shared benchmarks (as exemplified by ImageNet), and collaborative research that diffuse capability globally
  • Technology licensing and transfer agreements: Mechanisms by which developing nations can access cutting-edge tools and methods at affordable terms

Technological Stack and Ecosystem

A nation cannot simply purchase AI capability; it must develop home-grown institutional and commercial ecosystems:

  • Open-source communities: Participation in and contribution to open-source AI frameworks (PyTorch, TensorFlow, Hugging Face) builds local expertise and reduces dependency on proprietary systems
  • Venture capital and startup ecosystems: Policies fostering entrepreneurship in AI applications suited to local economies (agriculture, healthcare, manufacturing)
  • Regulatory frameworks: Balanced approaches to data governance, privacy, and AI safety that neither stifle innovation nor endanger citizens
  • Domain-specific applied AI: Rather than competing globally in large language models, nations can focus on AI applications addressing pressing local challenges: medical diagnostics, precision agriculture, supply-chain optimisation, or financial inclusion

Why Inaction Is “Macroscopically the Wrong Thing”

Dr. Li’s assertion that not investing in AI would be fundamentally mistaken rests on several converging arguments:

Economic Imperatives

AI is reshaping productivity across sectors. Nations that fail to develop internal expertise will find themselves dependent on foreign technology, unable to adapt solutions to local contexts, and vulnerable to supply disruptions or geopolitical pressure. The competitive advantage flows to early movers and sustained investors.

Employment and Social Cohesion

While AI will displace some jobs, it will create others—particularly for workers skilled in AI-adjacent fields. Nations that invest in reskilling and education can harness these transitions productively. Those that do not risk deepening inequality and social fracture.

Sovereignty and Resilience

Over-reliance on foreign AI systems limits national agency. Whether in healthcare, defence, finance, or public administration, critical systems should rest partly on domestic expertise and infrastructure to ensure resilience and alignment with national values.

Participation in Global Governance

As AI governance frameworks emerge—whether through the UN, regional bodies, or multilateral forums—nations with substantive technical expertise and domestic stakes will shape the rules. Those without will have rules imposed upon them.

The Tension and Its Resolution

Implicit in Dr. Li’s statement is a tension worth articulating: the world cannot support 200 competing AI superpowers, each building independent foundational models. Capital and talent are finite. Yet neither is the world a binary of a few AI leaders and many followers. The resolution lies in specialisation and integration:

  • A nation may not lead in large language models but excel in robotics for agriculture
  • It may not build chips but pioneer AI applications in healthcare or education
  • It may not host hyperscaler data centres but contribute essential research in AI safety or fairness
  • It will necessarily depend on global partnerships whilst developing sovereign capacity in domains critical to its citizens

This is neither capitulation nor isolation, but rather a mature acceptance of global interdependence coupled with strategic autonomy in domains of national importance.

Conclusion: The Compass for National Strategy

Dr. Li’s counsel, grounded in decades of research leadership, industrial experience, and global perspective, offers a compass for policymakers navigating the AI era. Investment in human capital, strategic partnerships, and home-grown technological ecosystems is not a luxury or academic exercise—it is fundamental to national competitiveness, prosperity, and agency. The alternative—treating AI as an external force to be passively absorbed—is indeed “macroscopically” mistaken, foreclosing decades of economic opportunity and surrendering the right to shape how this powerful technology serves human flourishing.

Quote: Dr. Fei-Fei Li – Stanford Professor – world-renowned authority in artificial intelligence

“I think robotics has a long way to go… I think the ability, the dexterity of human-level manipulation is something we have to wait a lot longer to get.” – Dr. Fei-Fei Li – Stanford Professor – world-renowned authority in artificial intelligence

While AI has made dramatic progress in perception and reasoning, the physical manipulation and dexterity of human hands are far from being matched by machines.

Context of the Quote: The State and Limitations of Robotics

Dr. Li’s comment was made against the backdrop of accelerating investment and hype in artificial intelligence and robotics. While AI systems now master complex games, interpret medical scans, and facilitate large-scale automation, the field of robotics—especially with respect to dexterous manipulation and embodied interaction in the real world—remains restricted by hardware limitations, incomplete world models, and a lack of general adaptability.

  • Human dexterity involves fine motor control, real-time feedback, and a deep understanding of spatial and causal relationships. As Dr. Li emphasises, current robots struggle with tasks that are mundane for humans: folding laundry, pouring liquids, assembling diverse objects, or improvising repairs in unpredictable environments.
  • Even state-of-the-art robot arms and hands, controlled by advanced machine learning, manage select tasks in highly structured settings. Scaling to unconstrained, everyday environments has proven exceedingly difficult.
  • The launch of benchmarks such as the BEHAVIOR Challenge by Stanford, led by Dr. Li’s group, is a direct response to these limitations. The challenge simulates 1,000 everyday tasks across varied household environments, aiming to catalyse progress by publicly measuring how far the field is from truly general-purpose, dexterous robots.

Dr. Fei-Fei Li: Biography and Impact

Dr. Fei-Fei Li is a world-renowned authority in artificial intelligence, best known for foundational contributions to computer vision and the promotion of “human-centred AI”. Her career spans:

  • Academic Leadership: Professor of Computer Science at Stanford University; founding co-director of the Stanford Institute for Human-Centered AI (HAI).
  • ImageNet: Li created the ImageNet dataset, which transformed machine perception by enabling deep neural networks to outperform previous benchmarks and catalysed the modern AI revolution. This advance shaped progress in visual recognition, autonomous systems, and accessibility technologies.
  • Human-Centred Focus: Dr. Li is recognised for steering the field towards responsible, inclusive, and ethical AI, ensuring research aligns with societal needs and multidisciplinary perspectives.
  • Spatial Intelligence and Embodied AI: A core strand of her current work is in spatial intelligence—teaching machines to understand, reason about, and interact with the physical world with flexibility and safety. Her venture World Labs is pioneering this next frontier, aiming to bridge the gap from words to worlds.
  • Recognition: She was awarded the Queen Elizabeth Prize for Engineering in 2025—alongside fellow AI visionaries—honouring transformative contributions to computing, perception, and human-centred innovation.
  • Advocacy: Her advocacy spans diversity, education, and AI governance. She actively pushes for multidisciplinary, transparent approaches to technology that are supportive of human flourishing.

Theoretical Foundations and Leading Figures in Robotic Dexterity

The quest for human-level dexterity in machines draws on several fields—robotics, neuroscience, machine learning—and builds on the insights of leading theorists:

  • Rodney Brooks: Developed subsumption architecture for mobile robots; founded iRobot and Rethink Robotics. Relevance to the dexterity problem: emphasised embodied intelligence, arguing that physical interaction is central and that autonomous robots must learn in the real world and adapt to uncertainty.
  • Yoshua Bengio, Geoffrey Hinton, and Yann LeCun: Deep learning pioneers who applied neural networks to perception. Relevance: led the transformation in visual perception and sensorimotor learning; their current work extends to robotic learning but recognises that perception alone is insufficient for dexterity.
  • Pieter Abbeel: Expert in reinforcement learning and robotics (UC Berkeley). Relevance: advanced algorithms for robotic manipulation, learning from demonstration, and real-world transfer; candid about the gulf between lab demonstrations and robust household robots.
  • Jean Ponce, Dieter Fox, and Ken Goldberg: Leading researchers in computer vision and robot manipulation. Relevance: developed grasping algorithms and manipulation models, but acknowledge that even “solved” tasks in simulation often fail in the unpredictable real world.
  • Dr. Fei-Fei Li: Computer vision, spatial intelligence, and embodied AI. Relevance: argues that spatial understanding and physical intelligence are critical, and that world models must integrate perception, action, and context to approach human-level dexterity.
  • Demis Hassabis: DeepMind CEO; led breakthroughs in deep reinforcement learning. Relevance: AlphaZero and related systems have shown narrow superhuman performance, but the physical control and manipulation necessary for robotics remain unsolved.
  • Chris Atkeson: Pioneer of humanoid and soft robotics. Relevance: developed advanced dexterous hands and whole-body motion, but highlights the vast gap between the best machines and human adaptability.

The Challenge: Why Robotics Remains “a Long Way to Go”

  • Embodiment: Unlike pure software, robots operate under real-world physical constraints. Variability in object geometry, materials, lighting, and external force must be mastered for consistent human-like manipulation.
  • Generalisation: A robot that succeeds at one task often fails catastrophically at another, even if superficially similar. Human hands, with sensory feedback and innate flexibility, effortlessly adapt.
  • World Modelling: Spatial intelligence—anticipating the consequences of actions, integrating visual, tactile, and proprioceptive data—is still largely unsolved. As Dr. Li notes, machines must “understand, navigate, and interact” with complex, dynamic environments.
  • Benchmarks and Community Efforts: The BEHAVIOR Challenge and open-source simulators aim to provide transparent, rigorous measurement and accelerate community progress, but there is consensus that true general dexterity is likely years—if not decades—away.

Conclusion: Where Theory Meets Practice

While AI and robotics have delivered astonishing advances in perception, narrowly focused automation, and simulation, the dexterity, adaptability, and common-sense reasoning required for robust, human-level robotic manipulation remain an unsolved grand challenge. Dr. Fei-Fei Li’s work and leadership define the state of the art—and set the aspirational vision for the next wave: embodied, spatially conscious AI, built with a profound respect for the complexity of human life and capability. Those who follow in her footsteps, across academia and industry, measure their progress not against hype or isolated demonstrations, but against the demanding reality of everyday human tasks.

Quote: Dr. Fei-Fei Li – Stanford Professor – world-renowned authority in artificial intelligence

“That ability that humans have, it’s the combination of creativity and abstraction. I do not see today’s AI or tomorrow’s AI being able to do that yet.” – Dr. Fei-Fei Li – Stanford Professor – world-renowned authority in artificial intelligence

Dr. Li’s statement came amid wide speculation about the near-term prospects for artificial general intelligence (AGI) and superintelligence. While current AI already exceeds human capacity in specific domains (such as language translation, memory recall, and vast-scale data analysis), Dr. Li draws a line at creative abstraction—the human ability to form new concepts and theories that radically change our understanding of the world. She underscores that, despite immense data and computational resources, AI does not demonstrate the generative leap that allowed Newton to discover classical mechanics or Einstein to reshape physics with relativity. Dr. Li insists that, absent fundamental conceptual breakthroughs, neither today’s nor tomorrow’s AI can replicate this synthesis of creativity and abstract reasoning.

About Dr. Fei-Fei Li

Dr. Fei-Fei Li holds the title of Sequoia Capital Professor of Computer Science at Stanford University and is a world-renowned authority in artificial intelligence, particularly in computer vision and human-centric AI. She is best known for creating ImageNet, the dataset that triggered the deep learning revolution in computer vision—a cornerstone of modern AI systems. As the founding co-director of Stanford’s Institute for Human-Centered Artificial Intelligence (HAI), Dr. Li has consistently championed the need for AI that advances, rather than diminishes, human dignity and agency. Her research, with over 400 scientific publications, has pioneered new frontiers in machine learning, neuroscience, and their intersection.

Her leadership extends beyond academia: she served as chief scientist of AI/ML at Google Cloud, sits on international boards, and is deeply engaged in policy, notably as a special adviser to the UN. Dr. Li is acclaimed for her advocacy in AI ethics and diversity, notably co-founding AI4ALL, a non-profit enabling broader participation in the AI field. Often described as the “godmother of AI,” she is an elected member of the US National Academy of Engineering and the National Academy of Medicine. Her personal journey—from emigrating from Chengdu, China, to supporting her parents’ small business in New Jersey, to her trailblazing career—is detailed in her acclaimed 2023 memoir, The Worlds I See.

Remarks on Creativity, Abstraction, and AI: Theoretical Roots

The distinction Li draws—between algorithmic pattern-matching and genuine creative abstraction—addresses a foundational question in AI: What constitutes intelligence, and is it replicable in machines? This theme resonates through the works of several canonical theorists:

  • Alan Turing (1912–1954): Regarded as the father of computer science, Turing posed the question of machine intelligence in his pivotal 1950 paper, “Computing Machinery and Intelligence”. He proposed what we now call the Turing Test: if a machine could converse indistinguishably from a human, could it be deemed intelligent? Turing acknowledged both the limits and the theoretical possibility of machine abstraction.
  • Herbert Simon and Allen Newell: Pioneers of early “symbolic AI”, Simon and Newell framed intelligence as symbol manipulation; their experiments (the Logic Theorist and General Problem Solver) made some progress in abstract reasoning but found creative leaps elusive.
  • Marvin Minsky (1927–2016): Co-founder of the MIT AI Lab, Minsky believed creativity could in principle be mechanised, but anticipated it would require complex architectures that integrate many types of knowledge. His work, especially The Society of Mind, remained vital but speculative.
  • John McCarthy (1927–2011): While he named the field “artificial intelligence” and developed the LISP programming language, McCarthy was cautious about claims of broad machine creativity, viewing abstraction as an open challenge.
  • Geoffrey Hinton, Yann LeCun, Yoshua Bengio: Fathers of deep learning, these researchers demonstrated that neural networks can match or surpass humans in perception and narrow problem-solving but have themselves highlighted the gap between statistical learning and the ingenuity seen in human discovery.
  • Nick Bostrom: In Superintelligence (2014), Bostrom analysed risks and trajectories for machine intelligence exceeding humans, but acknowledged that qualitative leaps in creativity—paradigm shifts, theory building—remain a core uncertainty.
  • Gary Marcus: An outspoken critic of current AI, Marcus argues that without genuine causal reasoning and abstract knowledge, current models (including the most advanced deep learning systems) are far from truly creative intelligence.

Synthesis and Current Debates

Across these traditions, a consistent theme emerges: while AI has achieved superhuman accuracy, speed, and recall in structured domains, genuine creativity—the ability to abstract from prior knowledge to new paradigms—is still uniquely human. Dr. Fei-Fei Li, by foregrounding this distinction, not only situates herself within this lineage but also aligns her ongoing research on “large world models” with an explicit goal: to design AI tools that augment—but do not seek to supplant—human creative reasoning and abstract thought.

Her caution, rooted in both technical expertise and a broader philosophical perspective, stands as a rare check on techno-optimism. It articulates the stakes: as machine intelligence accelerates, the need to centre human capabilities, dignity, and judgement—especially in creativity and abstraction—becomes not just prudent but essential for responsibly shaping our shared future.

Quote: Dr Eric Schmidt – Ex-Google CEO

“I worry a lot about … Africa. And the reason is: how does Africa benefit from [AI]? There’s obviously some benefit of globalisation, better crop yields, and so forth. But without stable governments, strong universities, major industrial structures – which Africa, with some exceptions, lacks – it’s going to lag.” – Dr Eric Schmidt – Former Google CEO

Dr Eric Schmidt’s observation stems from his experience at the highest levels of the global technology sector and his acute awareness of both the promise and the precariousness of the coming AI age. His warning about Africa’s risk of lagging in AI adoption and benefit is rooted in today’s uneven technological landscape and long-standing structural challenges facing the continent.

About Dr Eric Schmidt

Dr Eric Schmidt is one of the most influential technology executives of the 21st century. As CEO of Google from 2001 to 2011, he oversaw Google’s transformation from a Silicon Valley start-up into a global technology leader. Schmidt provided the managerial and strategic backbone that enabled Google’s explosive growth, product diversification, and a culture of robust innovation. After Google, he continued as Executive Chairman and Technical Advisor through Google’s restructuring into Alphabet, before transitioning to philanthropic and strategic advisory work. Notably, Schmidt has played significant roles in US national technology strategy, chairing the US National Security Commission on Artificial Intelligence and founding the bipartisan Special Competitive Studies Project, which advises on the intersections of AI, security, and economic competitiveness.

With a background encompassing leading roles at Sun Microsystems, Novell, and advisory positions at Xerox PARC and Bell Labs, Schmidt’s career reflects deep immersion in technology and innovation. He is widely regarded as a strategic thinker on the global opportunities and risks of technology, regularly offering perspective on how AI, digital infrastructure, and national competitiveness are shaping the future economic order.

Context of the Quotation

Schmidt’s remark appeared during a high-level panel at the Future Investment Initiative (FII9), in conversation with Dr Fei-Fei Li of Stanford and Peter Diamandis. The discussion centred on “What Happens When Digital Superintelligence Arrives?” and explored the likely economic, social, and geopolitical consequences of rapid AI advancement.

In this context, Schmidt identified a core risk: that AI’s benefits will accrue unevenly across borders, amplifying existing inequalities. He emphasised that while powerful AI tools may drive exceptional economic value and efficiencies—potentially in the trillions of dollars—these gains are concentrated by network effects, investment, and infrastructure. Schmidt singled out Africa as particularly vulnerable: absent stable governance, strong research universities, or robust industrial platforms—critical prerequisites for technology absorption—Africa faces the prospect of deepening relative underdevelopment as the AI era accelerates. The comment reflects a broader worry in technology and policy circles: global digitisation is likely to amplify rather than repair structural divides unless deliberate action is taken.

Leading Theorists and Thinking on the Subject

The dynamics Schmidt describes are at the heart of an emerging literature on the “AI divide,” digital colonialism, and the geopolitics of AI. Prominent thinkers in these debates include:

  • Professor Fei-Fei Li
    A leading AI scientist, Dr Li has consistently framed AI’s potential as contingent on human-centred design and equitable access. She highlights the distinction between the democratisation of access (e.g., cheaper healthcare or education via AI) and actual shared prosperity—which hinges on local capacity, policy, and governance. Her work underlines that technical progress does not automatically result in inclusive benefit, validating Schmidt’s concerns.
  • Kate Crawford and Timnit Gebru
    Both have written extensively on the risks of algorithmic exclusion, surveillance, and the concentration of AI expertise within a handful of countries and firms. In particular, Crawford’s Atlas of AI and Gebru’s leadership in AI ethics foreground how global AI development mirrors deeper resource and power imbalances.
  • Nick Bostrom and Stuart Russell
    Their theoretical contributions address the broader existential and ethical challenges of artificial superintelligence, but they also underscore risks of centralised AI power—technically and economically.
  • Ndubuisi Ekekwe, Bitange Ndemo, and Nanjira Sambuli
    These African thought leaders and scholars examine how Africa can leapfrog in digital adoption but caution that profound barriers—structural, institutional, and educational—must be addressed for the continent to benefit from AI at scale.
  • Eric Schmidt himself has become a touchstone in policy/tech strategy circles, having chaired the US National Security Commission on Artificial Intelligence. The Commission’s reports warned of a bifurcated world where AI capabilities—and thus economic and security advantages—are ever more concentrated.

Structural Elements Behind the Quote

Schmidt’s remark draws attention to a convergence of factors:

  • Institutional robustness
    Long-term AI prosperity requires stable governments, responsive regulatory environments, and a track record of supporting investment and innovation. This is lacking in many, though not all, of Africa’s economies.
  • Strong universities and research ecosystems
    AI innovation is talent- and research-intensive. Weak university networks limit both the creation and absorption of advanced technologies.
  • Industrial and technological infrastructure
    A mature industrial base enables countries and companies to adapt AI for local benefit. The absence of such infrastructure often results in passive consumption of foreign technology, forgoing participation in value creation.
  • Network effects and tech realpolitik
    Advanced AI tools, data centres, and large-scale compute power are disproportionately located in a few advanced economies. The ability to partner with these “hyperscalers”—primarily in the US—shapes national advantage. Schmidt argues that regions which fail to make strategic investments or partnerships risk being left further behind.

Summary

Schmidt’s statement is not simply a technical observation but an acute geopolitical and developmental warning. It reflects current global realities where AI’s arrival promises vast rewards, but only for those with the foundational economic, political, and intellectual capital in place. For policy makers, investors, and researchers, the implication is clear: bridging the digital-structural gap requires not only technology transfer but also building resilient, adaptive institutions and talent pipelines that are locally grounded.

Quote: Trevor McCourt – Extropic CTO

“We need something like 10 terawatts in the next 20 years to make LLM systems truly useful to everyone… Nvidia would need to 100× output… You basically need to fill Nevada with solar panels to provide 10 terawatts of power, at a cost around the world’s GDP. Totally crazy.” – Trevor McCourt – Extropic CTO

Trevor McCourt, Chief Technology Officer and co-founder of Extropic, has emerged as a leading voice articulating a paradox at the heart of artificial intelligence advancement: the technology that promises to democratise intelligence across the planet may, in fact, be fundamentally unscalable using conventional infrastructure. His observation about the terawatt imperative captures this tension with stark clarity—a reality increasingly difficult to dismiss as speculative.

Who Trevor McCourt Is

McCourt brings a rare convergence of disciplinary expertise to his role. Trained in mechanical engineering at the University of Waterloo (graduating 2015) and holding advanced credentials from the Massachusetts Institute of Technology (2020), he combines rigorous physical intuition with deep software systems architecture. Prior to co-founding Extropic, McCourt worked as a Principal Software Engineer, establishing a track record of delivering infrastructure at scale: he designed microservices-based cloud platforms that improved deployment speed by 40% whilst reducing operational costs by 30%, co-invented a patented dynamic caching algorithm for distributed systems, and led open-source initiatives that garnered over 500 GitHub contributors.

This background—spanning mechanical systems, quantum computation, backend infrastructure, and data engineering—positions McCourt uniquely to diagnose what others in the AI space have overlooked: that energy is not merely a cost line item but a binding physical constraint on AI’s future deployment model.

Extropic, which McCourt co-founded alongside Guillaume Verdon (formerly a quantum technology lead at Alphabet’s X division), closed a $14.1 million Series Seed funding round in 2023, led by Kindred Ventures and backed by institutional investors including Buckley Ventures, HOF Capital, and OSS Capital. The company now stands at approximately 15 people distributed across integrated circuit design, statistical physics research, and machine learning—a lean team assembled to pursue what McCourt characterises as a paradigm shift in compute architecture.

The Quote in Strategic Context

McCourt’s assertion that “10 terawatts in the next 20 years” is required for universal LLM deployment, coupled with his observation that this would demand filling Nevada with solar panels at a cost approaching global GDP, represents far more than rhetorical flourish. It is the product of methodical back-of-the-envelope engineering calculation.

His reasoning unfolds as follows:

From Today’s Baseline to Mass Deployment:
A text-based assistant operating at today’s reasoning capability (approximating GPT-5-Pro performance) deployed to every person globally would consume roughly 20% of the current US electrical grid—approximately 100 gigawatts. This is not theoretical; McCourt derives this from first principles: transformer models consume roughly 2 × (parameters × tokens) floating-point operations; modern accelerators like Nvidia’s H100 operate at approximately 0.7 picojoules per FLOP; population-scale deployment implies continuous, always-on inference at scale.

Adding Modalities and Reasoning:
Upgrade that assistant to include video capability at just 1 frame per second (envisioning Meta-style augmented-reality glasses worn by billions), and the grid requirement multiplies by approximately 10×. Enhance the reasoning capability to match models working on the ARC AGI benchmark—problems of human-level reasoning difficulty—and the text assistant alone requires a 10× expansion: 5 terawatts. Push further to expert-level systems capable of solving International Mathematical Olympiad problems, and the requirement reaches 100× the current grid.

Economic Impossibility:
A single gigawatt data centre costs approximately $10 billion to construct. The infrastructure required for mass-market AI deployment rapidly enters the hundreds of trillions of dollars—approaching or exceeding global GDP. Nvidia’s current manufacturing capacity would itself require a 100-fold increase to support even McCourt’s more modest scenarios.
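The chain of estimates above can be reproduced in a few lines of Python. The sketch below is a rough back-of-the-envelope check, not McCourt’s actual worksheet: the population served, per-person token rate, and model size are assumed placeholders; only the 2 × parameters × tokens FLOP count, the ~0.7 pJ/FLOP accelerator figure, and the ~$10 billion-per-gigawatt cost come from the text above.

```python
# Back-of-the-envelope reproduction of the power and cost arithmetic.
# Assumptions (not from the source): population served, tokens/sec, model size.

population     = 8e9     # people with an always-on assistant (assumed)
tokens_per_sec = 10      # tokens generated per person per second (assumed)
params         = 1e12    # dense-equivalent model parameters (assumed ~1T)
pj_per_flop    = 0.7     # H100-class energy per FLOP, quoted in the text

flops_per_token  = 2 * params                       # ~2 x params FLOPs per token
joules_per_token = flops_per_token * pj_per_flop * 1e-12

baseline_w = population * tokens_per_sec * joules_per_token
print(f"text baseline: {baseline_w / 1e9:.0f} GW")  # ~112 GW, near the quoted ~100 GW

video_w = baseline_w * 10                           # 1 fps video adds roughly 10x
print(f"with video:    {video_w / 1e12:.1f} TW")

# Cost of a 10 TW build-out at ~$10B per GW of data-centre capacity:
cost_usd = 10_000 * 10e9                            # 10,000 GW x $10B/GW
print(f"10 TW build-out: ${cost_usd / 1e12:.0f} trillion")
```

With these placeholder inputs, the text-only baseline lands near the ~100 gigawatt figure quoted above, and a 10 terawatt build-out prices out around $100 trillion, the order of magnitude McCourt compares to world GDP.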

Physical Reality Check:
Over the past 75 years, US grid capacity has grown remarkably consistently—a nearly linear expansion. Sam Altman’s public commitment to building one gigawatt of data centre capacity per week alone would require 3–5× the historical rate of grid growth. Credible plans for mass-market AI acceleration push this requirement into the terawatt range over two decades—a rate of infrastructure expansion that is not merely economically daunting but potentially physically impossible given resource constraints, construction timelines, and raw materials availability.

McCourt’s conclusion: the energy path is not simply expensive; it is economically and physically untenable. The paradigm must change.

Intellectual Foundations: Leading Theorists in Energy-Efficient Computing and Probabilistic AI

Understanding McCourt’s position requires engagement with the broader intellectual landscape that has shaped thinking about computing’s physical limits and probabilistic approaches to machine learning.

Geoffrey Hinton—Pioneering Energy-Based Models and Probabilistic Foundations:
Few figures loom larger in the theoretical background to Extropic’s work than Geoffrey Hinton. Decades before the deep learning boom, Hinton developed foundational theory around Boltzmann machines and energy-based models (EBMs)—the conceptual framework that treats learning as the discovery and inference of complex probability distributions. His work posits that machine learning, at its essence, is about fitting a probability distribution to observed data and then sampling from it to generate new instances consistent with that distribution. Hinton’s recognition with the 2024 Nobel Prize in Physics for “foundational discoveries and inventions that enable machine learning with artificial neural networks” reflects the deep prescience of this probabilistic worldview. More than theoretical elegance, this framework points toward an alternative computational paradigm: rather than spending vast resources on deterministic matrix operations (the GPU model), a system optimised for efficient sampling from complex distributions would align computation with the statistical nature of intelligence itself.
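As a concrete illustration of the Boltzmann-machine framework described above, here is a minimal Gibbs sampler over a tiny, randomly weighted network. It is a generic textbook sketch, not Extropic’s design: learning is omitted, and the network size and weights are arbitrary. The point is the core operation that sampling-native hardware aims to make cheap: drawing samples from a distribution defined by an energy function.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny Boltzmann machine over binary units s in {0, 1}.
# Energy: E(s) = -0.5 * s^T W s - b^T s   (W symmetric, zero diagonal)
n = 8
W = rng.normal(scale=0.5, size=(n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)
b = rng.normal(scale=0.1, size=n)

def gibbs_sweep(s):
    # Resample each unit given the others: p(s_i = 1 | rest) = sigmoid(W_i . s + b_i)
    for i in range(n):
        p_on = 1.0 / (1.0 + np.exp(-(W[i] @ s + b[i])))
        s[i] = float(rng.random() < p_on)
    return s

s = rng.integers(0, 2, size=n).astype(float)
samples = []
for sweep in range(5000):
    s = gibbs_sweep(s)
    if sweep >= 1000:                 # discard burn-in
        samples.append(s.copy())

print("mean unit activations:", np.round(np.mean(samples, axis=0), 3))
```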

Michael Frank—Physics of Reversible and Adiabatic Computing:
Michael Frank, a senior scientist now at Vaire (a near-zero-energy chip company), has spent decades at the intersection of physics and computing. His research programme, initiated at MIT in the 1990s and continued at the University of Florida, Florida State, and Sandia National Laboratories, focuses on reversible computing and adiabatic CMOS—techniques aimed at reducing the fundamental energy cost of information processing. Frank’s work addresses a deep truth: in conventional digital logic, information erasure is thermodynamically irreversible and expensive, dissipating energy as heat. By contrast, reversible computing minimises such erasure, thereby approaching theoretical energy limits set by physics rather than by engineering convention. Whilst Frank’s trajectory and Extropic’s diverge in architectural detail, both share the conviction that energy efficiency must be rooted in physical first principles, not merely in engineering optimisation of existing paradigms.

Yoshua Bengio and Chris Bishop—Probabilistic Learning Theory:
Leading researchers in deep generative modelling—including Bengio, Bishop, and others—have consistently advocated for probabilistic frameworks as foundational to machine learning. Their work on diffusion models, variational inference, and sampling-based approaches has legitimised the view that efficient inference is not about raw compute speed but about statistical appropriateness. This theoretical lineage underpins the algorithmic choices at Extropic: energy-based models and denoising thermodynamic models are not novel inventions but rather a return to first principles, informed by decades of probabilistic ML research.

Richard Feynman—Foundational Physics of Computing:
Though less directly cited in contemporary AI discourse, Feynman’s 1982 lectures on the physics of computation remain conceptually foundational. Feynman observed that computation’s energy cost is ultimately governed by physical law, not engineering ingenuity alone. His observations on reversibility and the thermodynamic cost of irreversible operations informed the entire reversible-computing movement and, by extension, contemporary efforts to align computation with physics rather than against it.

Contemporary Systems Thinkers (Sam Altman, Jensen Huang):
Counterintuitively, McCourt’s critique is sharpened by engagement with the visionary statements of industry leaders who have perhaps underestimated energy constraints. Altman’s commitment to building one gigawatt of data centre capacity per week, and Huang’s roadmaps for continued GPU scaling, have inadvertently validated McCourt’s concern: even the most optimistic industrial plans require infrastructure expansion at rates that collide with physical reality. McCourt uses their own projections as evidence for the necessity of paradigm change.

The Broader Strategic Narrative

McCourt’s remarks must be understood within a convergence of intellectual and practical pressures:

The Efficiency Plateau:
Digital logic efficiency, measured as energy per operation, has stalled. Transistor capacitance plateaued around the 10-nanometre node; operating voltage is thermodynamically bounded near 300 millivolts. Architectural optimisations (quantisation, sparsity, tensor cores) improve throughput but do not overcome these physical barriers. The era of “free lunch” efficiency gains from Moore’s Law miniaturisation has ended.
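The switching-energy relation E ≈ ½CV² shows why. In the short sketch below, the capacitance value is an assumed, order-of-magnitude figure; the point is how little headroom remains once voltage approaches its floor:

```python
# Energy of a digital switching event: E = 0.5 * C * V^2.
cap_farads = 1e-16        # assumed effective node capacitance (~0.1 fF)
for volts in (1.0, 0.7, 0.3):
    energy_j = 0.5 * cap_farads * volts**2
    print(f"V = {volts:.1f} V  ->  E = {energy_j:.2e} J per switching event")
# Dropping from 1.0 V toward the ~0.3 V floor buys barely one order of
# magnitude; below that, thermal noise makes digital logic unreliable.
```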

Model Complexity Trajectory:
Whilst small models have improved at fixed benchmarks, frontier AI systems—those solving novel, difficult problems—continue to demand exponentially more compute. AlphaGo required ~1 exaFLOP per game; AlphaCode required ~100 exaFLOPs per coding problem; the system solving International Mathematical Olympiad problems required ~100,000 exaFLOPs. Model miniaturisation is not offsetting capability ambitions.
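To translate those figures into energy terms, the sketch below applies an assumed system-level efficiency of about 1 pJ per delivered FLOP (our illustration, not a measured number for any specific system):

```python
# Rough energy cost of the frontier tasks cited above.
JOULES_PER_FLOP = 1e-12     # assumed ~1 pJ/FLOP, system level
EXA = 1e18

tasks = {"AlphaGo game": 1 * EXA,
         "AlphaCode problem": 100 * EXA,
         "IMO-level problem": 100_000 * EXA}

for name, flops in tasks.items():
    kwh = flops * JOULES_PER_FLOP / 3.6e6   # 1 kWh = 3.6e6 J
    print(f"{name:>18}: {kwh:,.2f} kWh")
# Under this assumption, one IMO-level problem costs roughly three years of
# a typical household's electricity consumption.
```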

Market Economics:
The AI market has attracted trillions in capital precisely because the economic potential is genuine and vast. Yet this same vastness creates the energy paradox: truly universal AI deployment would consume resources incompatible with global infrastructure and economics. The contradiction is not marginal; it is structural.

Extropic’s Alternative:
Extropic proposes to escape this local minimum through radical architectural redesign. Thermodynamic Sampling Units (TSUs)—circuits architected as arrays of probabilistic sampling cells rather than multiply-accumulate units—would natively perform the statistical operations that diffusion and generative AI models require. Early simulations suggest energy efficiency improvements of 10,000× on simple benchmarks compared to GPU-based approaches. Hybrid algorithms combining TSUs with compact neural networks on conventional hardware could deliver intermediate gains whilst establishing a pathway toward a fundamentally different compute paradigm.
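As a hedged illustration of what “sampling-native” computation means (a software toy of our own, modelling the operation TSUs target rather than Extropic’s actual hardware), the sketch below lets an array of binary cells update probabilistically from their neighbours—which is exactly Gibbs sampling of an Ising-style distribution:

```python
# Array of probabilistic sampling cells: each cell flips to 0/1 with a
# probability set by its neighbours, so repeated sweeps perform Gibbs
# sampling of an Ising-style distribution.
import math, random

N = 8                                   # 8x8 grid of binary sampling cells
coupling = 0.8                          # assumed uniform neighbour coupling
grid = [[random.randint(0, 1) for _ in range(N)] for _ in range(N)]

def gibbs_sweep(g):
    for i in range(N):
        for j in range(N):
            # Local field: sum of neighbouring +/-1 spins.
            field = sum(2 * g[x][y] - 1
                        for x, y in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                        if 0 <= x < N and 0 <= y < N)
            p_one = 1.0 / (1.0 + math.exp(-2 * coupling * field))
            g[i][j] = 1 if random.random() < p_one else 0

for _ in range(100):                    # sweep toward typical samples
    gibbs_sweep(grid)
print("mean activation after sampling:", sum(map(sum, grid)) / N**2)
```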

Why This Matters Now

The quote’s urgency reflects a dawning recognition across technical and policy circles that energy is not a peripheral constraint but the central bottleneck determining AI’s future trajectory. The choice, as McCourt frames it, is stark: either invest in a radically new architecture, or accept that mass-market AI remains perpetually out of reach—a luxury good confined to the wealthy and powerful rather than a technology accessible to humanity.

This is not mere speculation or provocation. It is engineering analysis grounded in physics, economics, and historical precedent, articulated by someone with the technical depth to understand both the problem and the extraordinary difficulty of solving it.

read more
Quote: Stephen Schwarzman – Blackstone Founder

Quote: Stephen Schwarzman – Blackstone Founder

“You have to be very gentle around people. If you’re in a leadership position, people hear your words amplified. You have to be very careful what you say and how you say it. You always have to listen to what other people have to say. I genuinely want to know what everybody else thinks.” – Stephen Schwarzman – Blackstone Founder


Stephen A. Schwarzman’s quote on gentle, thoughtful leadership encapsulates decades spent at the helm of Blackstone—the world’s largest alternative asset manager—where he forged a distinctive culture and process rooted in careful listening, respectful debate, humility, and operational excellence. The story behind this philosophy is marked by formative setbacks, institutional learning, and the broader evolution of modern leadership theory.

Stephen Schwarzman: Background and Significance

Stephen A. Schwarzman, born in 1947 in Philadelphia, rose to prominence after co-founding Blackstone in 1985 with Pete Peterson. Initially, private markets comprised a tiny fraction of institutional portfolios; under his stewardship, allocations in private assets have grown exponentially, fundamentally reshaping global investing. Schwarzman is renowned for his relentless pursuit of operational improvement, risk discipline, and market timing—his mantra, “Don’t lose money,” is enforced by multi-layered approval and rigorous debate.

Schwarzman’s experience as a leader is deeply shaped by early missteps. The Edgecomb Steel investment loss was pivotal: it catalysed Blackstone’s institutionalised investment committees, de-risking debates, and a culture where anyone may challenge ideas so long as discussion remains fact-based and impersonal. This setback taught him accountability, humility, and the value of systemic learning—his response was not to retreat from risk, but to build a repeatable, challenge-driven process. Crucially, he narrates his own growth from a self-described “C or D executive” to a leader who values gentleness, clarity, humour, and private critique—understanding that words uttered from the top echo powerfully and can shape (or harm) culture.

Beyond technical accomplishments, Schwarzman’s legacy is one of building enduring institutions through codified values: integrity, decency, and hard work. His leadership maxim—“be gentle, clear, and high standard; always listen”—is a template for strong cultures, high performance, and sustainable growth.

The Context of the Quote

The quoted passage emerges from Schwarzman’s reflections on leadership lessons acquired over four decades. Known for candid self-assessment, he openly admits to early struggles with management style but evolved to prioritise humility, care, and active listening. At Blackstone, this meant never criticising staff in public and always seeking divergent views to inform decisions. He emphasises that a leader’s words carry amplified weight among teams and stakeholders; thus, intentional communication and genuine listening are essential for nurturing an environment of trust, engagement, and intelligent risk-taking.

This context is inseparable from Blackstone’s broader organisational playbook: institutionalised judgment, structured challenge, and brand-centred culture—all designed to accumulate wisdom, avoid repeating mistakes, and compound long-term value. Schwarzman’s leadership pathway is a case study in the power of personal evolution, open dialogue, and codified norms that outlast the founder himself.

Leading Theorists and Historical Foundations

Schwarzman’s leadership philosophy is broadly aligned with a lineage of thinkers who have shaped modern approaches to management, organisational behaviour, and culture:

  • Peter Drucker: Often called the “father of modern management,” Drucker stressed that leadership is defined by results and relationships, not positional power. His work emphasised listening, empowering employees, and the ethical responsibility of those at the top.

  • Warren Bennis: Bennis advanced concepts of authentic leadership, self-awareness, and transparency. He argued that leaders should be vulnerable, model humility, and act as facilitators of collective intelligence rather than commanders.

  • Jim Collins: In “Good to Great,” Collins describes “Level 5 Leaders” as those who combine professional will with personal humility. Collins underscores that amplifying diverse viewpoints and creating cultures of disciplined debate lead to enduring success.

  • Edgar Schein: Schein’s studies of organisational culture reveal that leaders not only set behavioural norms through their actions and words but also shape “cultural DNA” by embedding values of learning, dialogue, and respect.

  • Amy Edmondson: Her pioneering work in psychological safety demonstrates that gentle leadership—rooted in listening and respect—fosters environments where people can challenge ideas, raise concerns, and innovate without fear.

Each of these theorists contributed to the understanding that gentle, attentive leadership is not weakness, but a source of institutional strength, resilience, and competitive advantage. Their concepts mirror the systems at Blackstone: open challenge, private correction, and leadership by example.

Schwarzman’s Distinction and Industry Impact

Schwarzman’s practice stands out in several ways. He institutionalized lessons from mistakes to create robust decision processes and a genuine challenge culture. His insistence on brand-building as strategy—where every decision, hire, and visual artifact reinforces trust—reflects an awareness of the symbolic weight of leadership. Under his guidance, Blackstone’s transformation from a two-person startup into a global giant offers a living illustration of how values, process, and leadership style drive superior, sustainable outcomes.

In summary, the quoted insight is not a platitude but hard-won experience from a legendary founder whose methods echo the best modern thinking on leadership, learning, and organisational resilience. The theorists tracing this journey—from Drucker to Edmondson—affirm that the path to “enduring greatness” lies in gentle authority, careful listening, institutionalised memory, and the humility to learn from every setback.

read more
Quote: Stephen Schwarzman – Blackstone Founder

Quote: Stephen Schwarzman – Blackstone Founder

“I always felt that somebody was only capable of one super effort to create something that can really be consequential. There are so many impediments to being successful. If you’re on the field, you’re there to win, and to win requires an enormous amount of practice – pushing yourself really to the breaking point.” – Stephen Schwarzman – Blackstone Founder

Stephen A. Schwarzman is a defining figure in global finance and alternative investments. He is Chairman, CEO, and Co-Founder of Blackstone, the world’s largest alternative investment firm, overseeing over $1.2 trillion in assets.

Backstory and Context of the Quote

Stephen Schwarzman’s perspective on effort, practice, and success is rooted in over four decades building Blackstone from a two-person start-up to an institution that has shaped capital markets worldwide. The referenced quote captures his philosophy: that achieving anything truly consequential demands a singular, maximal effort—a philosophy he practised as Blackstone’s founder and architect.

Schwarzman began his career in mergers and acquisitions at Lehman Brothers in the 1970s, where he met Peter G. Peterson. Their complementary backgrounds—a combination of strategic vision and operational drive—empowered them to establish Blackstone in 1985, initially with just $400,000 in seed capital and a big ambition to build a differentiated investment firm. The mid-1980s financial environment, marked by booming M&A activity, provided fertile ground for innovation in buyouts and private markets.

From the outset, Schwarzman instilled a culture of rigorous preparation and discipline. A landmark early setback—the unsuccessful investment in Edgecomb Steel—became a pivotal learning event. It led Schwarzman to institutionalise robust investment committees, open and adversarial (yet respectful) debate, and a relentless process of due diligence. This learning loop, focused on not losing money and fact-based challenge culture, shaped Blackstone’s internal systems and risk culture for decades to come.

His attitude to practice, perseverance, and operating at the limit is not merely rhetorical—it is Blackstone’s operational model: selecting complex assets, professionalising management, and adding value through operational transformation before timing exits for maximum advantage. The company’s strict approval layers, multi-stage risk screening, and exacting standards demonstrate Schwarzman’s belief that only by pushing to the limits of endurance—and addressing every potential weakness—can lasting value be created.

In his own words, Schwarzman attributes success not to innate brilliance but to grit, repetition, and the ability to learn from failure. This is underscored by his leadership style, which evolved towards being gentle, clear, and principled, setting high standards while building an enduring culture based on integrity, decency, and open debate.

About Stephen A. Schwarzman

  • Born in 1947 in Philadelphia, Schwarzman studied at Yale University (where he was a member of Skull and Bones) and earned an MBA from Harvard Business School.
  • Blackstone, which he co-founded in 1985, began as an M&A boutique and now operates across private equity, real estate, credit, hedge funds, infrastructure, and life sciences, making it a recognised leader in global investment management.
  • Under Schwarzman’s leadership, Blackstone institutionalised patient, active ownership—acquiring, improving, and timing the exit from portfolio companies for optimal results while actively shaping industry standards in governance and risk management.
  • He is also known for his philanthropy, having signed The Giving Pledge and contributed significantly to education, arts, and culture.
  • His autobiography, What It Takes: Lessons in the Pursuit of Excellence, distils the philosophy underpinning his business and personal success.
  • Schwarzman’s role as a public intellectual and advisor has seen him listed among the “World’s Most Powerful People” and “Time 100 Most Influential People”.

Leading Theorists and Intellectual Currents Related to the Quote

The themes embodied in Schwarzman’s philosophy—singular effort, practice to breaking point, coping with setbacks, and building institutional culture—draw on and intersect with several influential theorists and schools of thought in management and the psychology of high achievement:

  • Anders Ericsson (Deliberate Practice): Ericsson’s research underscores that deliberate practice—extended, focused effort with ongoing feedback—is critical to acquiring expert performance in any field. Schwarzman’s stress on “enormous amount of practice” parallels Ericsson’s findings that natural talent is far less important than methodical, sustained effort.
  • Angela Duckworth (Grit): Duckworth’s work on “grit” emphasises passion and perseverance for long-term goals as key predictors of success. Her research supports Schwarzman’s belief that breaking through obstacles—and continuing after setbacks—is fundamental for consequential achievement.
  • Carol Dweck (Growth Mindset): Dweck demonstrated that embracing a “growth mindset”—seeing failures as opportunities to learn rather than as endpoints—fosters resilience and continuous improvement. Schwarzman’s approach to institutionalising learning from failure at Blackstone reflects this theoretical foundation.
  • Peter Drucker (Management by Objectives and Institutional Culture): Drucker highlighted the importance of clear organisational goals, continuous learning, and leadership by values for building enduring institutions. Schwarzman’s insistence on codifying culture, open debate, and aligning every decision with the brand reflects Drucker’s emphasis on the importance of system and culture in organisational performance.
  • Jim Collins (Built to Last, Good to Great): Collins’ research into successful companies found a common thread of fanatical discipline, a culture of humility and rigorous debate, all driven by a sense of purpose. These elements are present throughout Blackstone’s governance model and leadership ethos as steered by Schwarzman.
  • Michael Porter (Competitive Strategy): Porter’s concept of sustained competitive advantage through unique positioning and strategic differentiation is echoed in Blackstone’s approach—actively improving operations rather than simply relying on market exposure, and committing to ‘winning’ through operational and structural edge.

Summary

Schwarzman’s quote is not only a personal reflection but also a distillation of enduring principles in high achievement and institutional leadership. It is the lived experience of building Blackstone—a case study in dedication, resilience, and the institutionalisation of excellence. His story, and the theoretical underpinnings echoed in his approach, provide a template for excellence and consequence in any field marked by complexity, competition, and the need for sustained, high-conviction effort.

read more
Quote: Trevor McCourt – Extropic CTO

Quote: Trevor McCourt – Extropic CTO

“If you upgrade that assistant to see video at 1 FPS – think Meta’s glasses… you’d need to roughly 10× the grid to accommodate that for everyone. If you upgrade the text assistant to reason at the level of models working on the ARC AGI benchmark… even just the text assistant would require around a 10× of today’s grid.” – Trevor McCourt – Extropic CTO

The quoted remark by Trevor McCourt, CTO of Extropic, underscores a crucial bottleneck in artificial intelligence scaling: energy demand is outpacing progress in compute efficiency, threatening the viability of universal, always-on AI. The quote translates hard technical extrapolation into plain language: if every person ran a vision-capable assistant at just one video frame per second, or if text assistants reasoned at the level of models tackling the ARC AGI benchmark, global energy infrastructure would need to multiply several times over—many terawatts of new capacity—figures that quickly shade into economic and physical absurdity.
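The structure of that extrapolation can be reproduced in a few lines. Every numeric input below is an illustrative assumption of ours, not McCourt’s own figure; the point is that any plausible per-frame energy, multiplied by an always-on population, lands at grid scale:

```python
# Parameterised version of the claim: assumed per-inference energy, scaled
# to always-on use by a large population, compared with grid capacity.
USERS = 1e9                      # assumed always-on users
FRAMES_PER_SEC = 1               # "video at 1 FPS"
JOULES_PER_FRAME = 10_000        # assumed system-level energy per VLM frame
US_GRID_WATTS = 1.2e12           # assumed ~1.2 TW US capacity

demand_watts = USERS * FRAMES_PER_SEC * JOULES_PER_FRAME
print(f"Continuous demand: {demand_watts / 1e12:.1f} TW "
      f"= {demand_watts / US_GRID_WATTS:.1f}x the assumed US grid")
```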

Backstory and Context of the Quote & Trevor McCourt

Trevor McCourt is the co-founder and Chief Technology Officer of Extropic, a pioneering company targeting the energy barrier limiting mass-market AI deployment. With multidisciplinary roots—a blend of mechanical engineering and quantum programming, honed at the University of Waterloo and Massachusetts Institute of Technology—McCourt contributed to projects at Google before moving to the hardware-software frontier. His leadership at Extropic is defined by a willingness to challenge orthodoxy and champion a first-principles, physics-driven approach to AI compute architecture.

The quote arises from a keynote on how present-day large language models and diffusion AI models are fundamentally energy-bound. McCourt’s analysis is rooted in practical engineering, economic realism, and deep technical awareness: the computational demands of state-of-the-art assistants vastly outstrip what today’s grid can provide if deployed at population scale. This is not merely an engineering or machine learning problem, but a macroeconomic and geopolitical dilemma.

Extropic proposes to address this impasse with Thermodynamic Sampling Units (TSUs)—a new silicon compute primitive designed to natively perform probabilistic inference, consuming orders of magnitude less power than GPU-based digital logic. Here, McCourt follows the direction set by energy-based probabilistic models and advances it both in hardware and algorithm.

McCourt’s career has been defined by innovation at the technical edge: microservices in cloud environments, patented improvements to dynamic caching in distributed systems, and research in scalable backend infrastructure. This breadth, from academic research to commercial deployment, enables his holistic critique of the GPU-centred AI paradigm, as well as his leadership at Extropic’s deep technology startup.

Leading Theorists & Influencers in the Subject

Several waves of theory and practice converge in McCourt’s and Extropic’s work:

1. Geoffrey Hinton (Energy-Based and Probabilistic Models):
Long before deep learning’s mainstream embrace, Hinton’s foundational work on Boltzmann machines and energy-based models explored the idea of learning and inference as sampling from complex probability distributions. These early probabilistic paradigms anticipated both the difficulties of scaling and the algorithmic challenges that underlie today’s generative models. Hinton’s recognition—including the 2024 Nobel Prize in Physics, awarded in part for his foundational work on neural networks and Boltzmann machines—cements his stature as a theorist whose ideas underpin Extropic’s approach.

2. Michael Frank (Reversible Computing)
Frank is a prominent physicist in reversible and adiabatic computing, having led major advances at MIT, Sandia National Laboratories, and other institutions. His research investigates how the physics of computation can reduce the fundamental energy cost—directly relevant to Extropic’s mission. Frank’s focus on low-energy information processing provides a conceptual environment for approaches like TSUs to flourish.

3. Chris Bishop & Yoshua Bengio (Probabilistic Machine Learning):
Leaders like Bishop and Bengio have shaped the field’s probabilistic foundations, advocating both for deep generative models and for the practical co-design of hardware and algorithms. Their research has stressed the need to reconcile statistical efficiency with computational tractability—a tension at the core of Extropic’s narrative.

4. Alan Turing & John von Neumann (Foundations of Computing):
While not direct contributors to modern machine learning, the legacies of Turing and von Neumann persist in every conversation about alternative architectures and the physical limits of computation. The post-von Neumann and post-Turing trajectory, with a return to analogue, stochastic, or sampling-based circuitry, is directly echoed in Extropic’s work.

5. Recent Industry Visionaries (e.g., Sam Altman, Jensen Huang):
Contemporary leaders in the AI infrastructure space—such as Altman of OpenAI and Huang of Nvidia—have articulated the scale required for AGI and the daunting reality of terawatt-scale compute. Their business strategies rely on the assumption that improved digital hardware will be sufficient, a view McCourt contests with data and physical models.

Strategic & Scientific Context for the Field

  • Core problem: AI’s energy demand scales non-linearly with capability—mass-market AI could consume a significant fraction, or even multiples, of the entire global grid if naively scaled with today’s architectures.
  • Physics bottlenecks: Improvements in digital logic are limited by physical constants: capacitance, voltage, and the energy required for irreversible computation. Digital logic has plateaued at the 10nm node.
  • Algorithmic evolution: Traditional deep learning is rooted in deterministic matrix computations, but the true statistical nature of intelligence calls for sampling from complex distributions—as foregrounded in Hinton’s work and now implemented in Extropic’s TSUs.
  • Paradigm shift: McCourt and contemporaries argue for a transition to native hardware–software co-design where the core computational primitive is no longer the multiply–accumulate (MAC) operation, but energy-efficient probabilistic sampling.

Summary Insight

Trevor McCourt anchors his cautionary prognosis for AI’s future on rigorous cross-disciplinary insights—from physical hardware limits to probabilistic learning theory. By combining his own engineering prowess with the legacy of foundational theorists and contemporary thinkers, McCourt’s perspective is not simply one of warning but also one of opportunity: a new generation of probabilistic, thermodynamically inspired computers could rewrite the energy economics of artificial intelligence, making “AI for everyone” plausible—without grid-scale insanity.

read more
Quote: Alex Karp – Palantir CEO

Quote: Alex Karp – Palantir CEO

“The idea that chips and ontology is what you want to short is batsh*t crazy.” – Alex Karp – Palantir CEO

Alex Karp, co-founder and CEO of Palantir Technologies, delivered the now widely circulated statement, “The idea that chips and ontology is what you want to short is batsh*t crazy,” in response to famed investor Michael Burry’s high-profile short positions against both Palantir and Nvidia. This sharp retort came at a time when Palantir, an enterprise software and artificial intelligence (AI) powerhouse, had just reported record earnings and was under intense media scrutiny for its meteoric stock rise and valuation.

Context of the Quote

The remark was made in early November 2025 during a CNBC interview, following public disclosures that Michael Burry—of “The Big Short” fame—had taken massive short positions in Palantir and Nvidia, two companies at the heart of the AI revolution. Burry’s move, reminiscent of his contrarian bets during the 2008 financial crisis, was interpreted by the market as both a challenge to the soaring “AI trade” and a critique of the underlying economics fueling the sector’s explosive growth.

Karp’s frustration was palpable: not only was Palantir producing what he described as “anomalous” financial results—outpacing virtually all competitors in growth, cash flow, and customer retention—but it was also emerging as the backbone of data-driven operations across government and industry. For Karp, Burry’s short bet went beyond traditional market scepticism; it targeted firms, products (“chips” and “ontology”—the foundational hardware for AI and the architecture for structuring knowledge), and business models proven to be both technically indispensable and commercially robust. Karp’s rejection of the “short chips and ontology” thesis underscores his belief in the enduring centrality of the technologies underpinning the modern AI stack.

Backstory and Profile: Alex Karp

Alex Karp stands out as one of Silicon Valley’s true iconoclasts:

  • Background and Education: Born in New York City in 1967, Karp holds a philosophy degree from Haverford College, a JD from Stanford, and a PhD in social theory from Goethe University Frankfurt, where he studied under and wrote about the influential philosopher Jürgen Habermas. This rare academic pedigree—blending law, philosophy, and critical theory—deeply informs both his contrarian mindset and his focus on the societal impact of technology.
  • Professional Arc: Before founding Palantir in 2004 with Peter Thiel and others, Karp had forged a career in finance, running the London-based Caedmon Group. At Palantir, he crafted a unique culture and business model, combining a wellness-oriented, sometimes spiritual corporate environment with the hard-nosed delivery of mission-critical systems for Western security, defence, and industry.
  • Leadership and Philosophy: Karp is known for his outspoken, unconventional leadership. Unafraid to challenge both Silicon Valley’s libertarian ethos and what he views as the groupthink of academic and financial “expert” classes, he publicly identifies as progressive—yet separates himself from establishment politics, remaining both a supporter of the US military and a critic of mainstream left and right ideologies. His style is at once brash and philosophical, combining deep scepticism of market orthodoxy with a strong belief in the capacity of technology to deliver real-world, not just notional, value.
  • Palantir’s Rise: Under Karp, Palantir grew from a niche contractor to one of the world’s most important data analytics and AI companies. Palantir’s products are deeply embedded in national security, commercial analytics, and industrial operations, making the company essential infrastructure in the rapidly evolving AI economy.

Theoretical Background: ‘Chips’ and ‘Ontology’

Karp’s phrase pairs two of the foundational concepts in modern AI and data-driven enterprise:

  • Chips: Here, “chips” refers specifically to advanced semiconductors (such as Nvidia’s GPUs) that provide the computational horsepower essential for training and deploying cutting-edge machine learning models. The AI revolution is inseparable from advances in chip design, leading to historic demand for high-performance hardware.
  • Ontology: In computer and information science, “ontology” describes the formal structuring and categorising of knowledge—making data comprehensible, searchable, and actionable by algorithms. Robust ontologies enable organisations to unify disparate data sources, automate analytical reasoning, and achieve the “second order” efficiencies of AI at scale (a minimal sketch follows this list).
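A minimal sketch of the ontology idea (with an invented industrial schema, purely for illustration) shows why structured relations make data queryable by software:

```python
# Knowledge as typed relations (subject, predicate, object). The schema
# and facts below are invented for illustration.
triples = [
    ("Pump-7",          "is_a",       "CentrifugalPump"),
    ("CentrifugalPump", "is_a",       "RotatingEquipment"),
    ("Pump-7",          "located_in", "Plant-Hamburg"),
    ("Pump-7",          "feeds",      "Reactor-2"),
]

def ancestors(entity):
    """Walk is_a edges so queries work at any level of abstraction."""
    out = set()
    for s, p, o in triples:
        if s == entity and p == "is_a":
            out |= {o} | ancestors(o)
    return out

print("Pump-7 types:", ancestors("Pump-7"))
print("Assets in Hamburg:",
      [s for s, p, o in triples if p == "located_in" and o == "Plant-Hamburg"])
```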

Leading theorists in the domain of ontology and AI include:

  • John McCarthy: A founder of artificial intelligence, McCarthy’s foundational work on formal logic and semantics laid groundwork for modern ontological structures in AI.
  • Tim Berners-Lee: Creator of the World Wide Web, Berners-Lee developed the Semantic Web, championing knowledge structuring via ontologies—making data machine-readable, a capability all but indispensable for AI’s next leap.
  • Thomas Gruber: Known for his widely cited definition of ontology in AI as “a specification of a conceptualisation,” Gruber’s research shaped the field’s approach to standardising knowledge representations for complex applications.

In the chip space, pioneering figures include:

  • Jensen Huang: CEO and co-founder of Nvidia, who drove the company’s transformation from graphics to AI acceleration, cementing the centrality of chips as the hardware substrate for everything from generative AI to advanced analytics.
  • Gordon Moore and Robert Noyce: Their early explorations in semiconductor fabrication set the stage for the exponential hardware progress that enabled the modern AI era.

Insightful Context for the Modern Market Debate

The “chips and ontology” remark reflects a deep divide in contemporary technology investing:

  • On one side, sceptics like Burry see signs of speculative excess, reminiscent of prior bubbles, and bet against companies with high valuations—even when those companies dominate core technologies fundamental to AI.
  • On the other, leaders like Karp argue that while the broad “AI trade” risks pockets of overvaluation, the engine—the computational hardware (chips) and data-structuring logic (ontology)—are not just durable, but irreplaceable in the digital economy.

With Palantir and Nvidia at the centre of the current AI-driven transformation, Karp’s comment captures not just a rebuttal to market short-termism, but a broader endorsement of the foundational technologies that define the coming decade. The value of “chips and ontology” is, in Karp’s eyes, anchored not in market narrative but in empirical results and business necessity—a perspective rooted in a unique synthesis of philosophy, technology, and radical pragmatism.

read more
Quote: David Solomon – Goldman Sachs CEO

Quote: David Solomon – Goldman Sachs CEO

“Generally speaking people hate change. It’s human nature. But change is super important. It’s inevitable. In fact, on my desk in my office I have a little plaque that says ‘Change or die.’ As a business leader, one of the perspectives you have to have is that you’ve got to constantly evolve and change.” – David Solomon – Goldman Sachs CEO

The quoted insight comes from David M. Solomon, Chief Executive Officer and Chairman of Goldman Sachs, a role he has held since 2018. It was delivered during a high-profile interview at The Economic Club of Washington, D.C., 30 October 2025, as Solomon reflected on the necessity of adaptability both personally and as a leader within a globally significant financial institution.

“We have very smart people, and we can put these [AI] tools in their hands to make them more productive… By using AI to reimagine processes, we can create operating efficiencies that give us a scaled opportunity to reinvest in growth.” – David Solomon – Goldman Sachs CEO

David Solomon, Chairman and CEO of Goldman Sachs, delivered the quoted remarks during an interview at the HKMA Global Financial Leaders’ Investment Summit on 4 November 2025, articulating Goldman’s strategic approach to integrating artificial intelligence across its global franchise. His comments reflect both personal experience and institutional direction: leveraging new technology to drive productivity, reimagine workflows, and reinvest operational gains in sustainable growth, rather than pursuing simplistic headcount reductions or technological novelty for its own sake.

Backstory and Context of the Quote

David Solomon’s statement arises from Goldman Sachs’ current transformation—“Goldman Sachs 3.0”—centred on AI-driven process re-engineering. Rather than employing AI simply as a cost-cutting device, Solomon underscores its strategic role as an enabler for “very smart people” to magnify their productivity and impact. This perspective draws on his forty-year career in finance, where successive waves of technological disruption (from Lotus 1-2-3 spreadsheets to cloud computing) have consistently shifted how talent is leveraged, but have not diminished its central value.

The immediate business context is one of intense change: regulatory uncertainty in cross-border transactions, rebounding capital flows into China post-geopolitical tension, and a high backlog of M&A activity, particularly for large-cap US transactions. In this environment, efficiency gains from AI allow frontline teams to refocus on advisory, origination, and growth while adjusting operational models at a rapid pace. Solomon’s leadership style—pragmatic, unsentimental, and data-driven—favours process optimisation, open collaboration, and the breakdown of legacy silos.

About David Solomon

Background:

  • Born in Hartsdale, New York, in 1962; educated at Hamilton College with a BA in political science, then entered banking.
  • Career progression: Held senior roles at Irving Trust, Drexel Burnham, Bear Stearns; joined Goldman Sachs in 1999 as partner, eventually leading the Financing Group and serving as co-head of the Investment Banking Division for a decade.
  • Appointed President and COO in 2017, then CEO in October 2018 and Chairman in January 2019, succeeding Lloyd Blankfein.
  • Brought a reputation for transformative leadership, advocating modernisation, flattening hierarchies, and integrating technology across every aspect of the firm’s operations.

Leadership and Culture:

  • Solomon is credited with pushing through “One Goldman Sachs,” breaking down internal silos and incentivising cross-disciplinary collaboration.
  • He has modernised core HR and management practices: implemented real-time performance reviews, loosened dress codes, and raised compensation for programmers.
  • Personal interests—such as his sideline as DJ D-Sol—underscore his willingness to defy convention and challenge the insularity of Wall Street leadership.

Institutional Impact:

  • Under his stewardship, Goldman has accelerated its pivot to technology—automating trading operations, consolidating platforms, and committing substantial resources to digital transformation.
  • Notably, the current “GS 3.0” agenda focuses on automating six major workflows to direct freed capacity into growth, consistent with a multi-decade productivity trend.

Leading Theorists and Intellectual Lineage of AI-Driven Productivity in Business

Solomon’s vision is shaped and echoed by several foundational theorists in economics, management science, and artificial intelligence:

1. Clayton Christensen

  • Theory: Disruptive Innovation—frames how technological change transforms industries not through substitution but by enabling new business models and process efficiencies.
  • Relevance: Goldman Sachs’ approach to using AI to reimagine workflows and create new capabilities closely mirrors Christensen’s insights on sustaining versus disruptive innovation.

2. Erik Brynjolfsson & Andrew McAfee

  • Theory: Race Against the Machine, The Second Machine Age—chronicled how digital automation augments human productivity and reconfigures the labour market, not just replacing jobs but reshaping roles and enhancing output.
  • Relevance: Solomon’s argument for enabling smart people with better tools directly draws on Brynjolfsson’s proposition that the best organisational outcomes occur when firms successfully combine human and machine intelligence.

3. Michael Porter

  • Theory: Competitive Advantage—emphasised how operational efficiency and information advantage underpin sustained industry leadership.
  • Relevance: Porter’s ideas connect to Goldman’s agenda by showing that AI integration is not just about cost, but about improving information processing, strategic agility, and client service.

4. Herbert Simon

  • Theory: Bounded Rationality and Decision Support Systems—pioneered the concept that decision-making can be dramatically improved by systems that extend the cognitive capabilities of professionals.
  • Relevance: Solomon’s claim that AI puts better tools in the hands of talented staff traces its lineage to Simon’s vision of computers as skilled assistants, vital to complex modern organisations.

5. Geoffrey Hinton, Yann LeCun, Yoshua Bengio

  • Theory: Deep Learning—established the contemporary AI revolution underpinning business process automation, language models, and data analysis at enterprise scale.
  • Relevance: Without the breakthroughs made by these theorists, AI’s current generation—capable of augmenting financial analysis, risk modelling, and operational management—could not be applied as Solomon describes.

 

Synthesis and Strategic Implications

Solomon’s quote epitomises the intersection of pragmatic executive leadership and theoretical insight. His advocacy for AI-integrated productivity reinforces a management consensus: sustainable competitive advantage hinges not just on technology, but on empowering skilled individuals to unlock new modes of value creation. This approach is echoed by leading researchers who situate automation as a catalyst for role evolution, scalable efficiency, and the ability to redeploy resources into higher-value growth opportunities.

Goldman Sachs’ specific AI play is therefore neither a defensive move against headcount nor a speculative technological bet, but a calculated strategy rooted in both practical business history and contemporary academic theory—a paradigm for how large organisations can adapt, thrive, and lead in the face of continual disruption.

read more
Quote: Satya Nadella – Microsoft CEO

Quote: Satya Nadella – Microsoft CEO

“At scale, nothing is a commodity. We have to have our cost structure, supply-chain efficiency, and software efficiencies continue to compound to ensure margins. Scale – and one of the things I love about the OpenAI partnership – is it’s gotten us to scale. This is a scale game.” – Satya Nadella – Microsoft CEO

Satya Nadella has been at the helm of Microsoft since 2014, overseeing its transformation into one of the world’s most valuable technology companies. Born in Hyderabad, India, and educated in electrical engineering and computer science, Nadella joined Microsoft in 1992, quickly rising through the ranks in technical and business leadership roles. Prior to becoming CEO, he was best known for driving the rapid growth of Microsoft Azure, the company’s cloud infrastructure platform—a business now central to Microsoft’s global strategy.

Nadella’s leadership style is marked by systemic change—he has shifted Microsoft away from legacy, siloed software businesses and repositioned it as a cloud-first, AI-driven, and highly collaborative tech company. He is recognised for his ability to anticipate secular shifts—most notably, the move to hyperscale cloud computing and, more recently, the integration of advanced AI into core products such as GitHub Copilot and Microsoft 365 Copilot. His background—combining deep technical expertise with rigorous business training (MBA, University of Chicago)—enables him to bridge both the strategic and operational dimensions of global technology.

This quote was delivered in the context of Nadella’s public discussion on the scale economics of AI, hyperscale cloud, and the transformative partnership between Microsoft and OpenAI (the company behind ChatGPT, Sora, and the GPT series of models) on the BG2 podcast, 1st November 2025. In this conversation, Nadella outlines why, at the extreme end of global tech infrastructure, nothing remains a “commodity”: system costs, supply chain and manufacturing agility, and relentless software optimisation all become decisive sources of competitive advantage. He argues that scale—meaning not just size, but the compounding organisational learning and cost improvement unlocked by operating at frontier levels—determines who captures sustainable margins and market leadership.

The OpenAI partnership is, from Nadella’s perspective, a practical illustration of this thesis. By integrating OpenAI’s frontier models deeply (and at exclusive scale) within Azure, Microsoft has driven exponential increases in compute utilisation, data flows, and the learning rate of its software infrastructure. This allowed Microsoft to amortise fixed investments, rapidly reduce unit costs, and create a loop of innovation not accessible to smaller or less integrated competitors. In Nadella’s framing, scale is not a static achievement, but a perpetual game—one where the winners are those who compound advantages across the entire stack: from chip supply chains through to application software and business model design.

Theoretical Foundations and Key Thinkers

The quote’s themes intersect with multiple domains: economics of platforms, organisational learning, network effects, and innovation theory. Key theoretical underpinnings and thinkers include:

Scale Economics and Competitive Advantage

  • Alfred Chandler (1918–2007): Chandler’s work on the “visible hand” and the scale and scope of modern industrial firms remains foundational. He showed how scale, when coupled with managerial coordination, allows firms to achieve durable cost advantages and vertical integration.
  • Bruce Greenwald & Judd Kahn: In Competition Demystified (2005), they argue sustainable competitive advantage stems from barriers to entry—often reinforced by scale, especially via learning curves, supply chains, and distribution.

Network Effects and Platform Strategy

  • Jean Tirole & Jean-Charles Rochet: Their work on two-sided markets and platform economics shows how scale-dependent markets (like cloud and AI) naturally concentrate—network effects reinforce the value of leading platforms, and marginal cost advantage compounds alongside user and data scale.
  • Geoffrey Parker, Marshall Van Alstyne, Sangeet Paul Choudary: In their research and Platform Revolution, these thinkers elaborate how the value in digital markets accrues disproportionately to platforms that achieve scale—because transaction flows, learning, and innovation all reinforce one another.

Learning Curves and Experience Effects

  • The Boston Consulting Group (BCG): In the 1960s, Bruce Henderson’s concept of the “experience curve” formalised the insight that unit costs fall as cumulative output grows—the canonical explanation for why scale delivers persistent cost advantage (see the sketch after this list).
  • Clayton Christensen: In The Innovator’s Dilemma, Christensen illustrates how technological discontinuities and learning rates enable new entrants to upend incumbent advantage—unless those incumbents achieve scale in the new paradigm.
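Henderson’s curve has a simple functional form, C(n) = C₁ · n^(−a). The sketch below uses the classic illustrative value of a 20% cost decline per doubling of cumulative output (a textbook figure, not a BCG measurement):

```python
# Experience curve: unit cost falls by a fixed fraction each time
# cumulative output doubles, i.e. C(n) = C1 * n**(-a).
import math

learning_rate = 0.20                  # assumed: cost falls 20% per doubling
a = -math.log(1 - learning_rate, 2)   # elasticity implied by that rate
c1 = 100.0                            # assumed cost of the first unit

for n in (1, 2, 4, 8, 1_000_000):
    print(f"cumulative units {n:>9,}: unit cost = {c1 * n**(-a):7.2f}")
```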

Supply Chain and Operations

  • Taiichi Ohno and Shoichiro Toyoda (Toyota Production System): Their industrial logic holds that relentless supply chain optimisation and compounding process improvements, rather than static cost reduction, underpin long-run advantage—especially during periods of rapid demand growth or supply constraint.

Economics of Cloud and AI

  • Hal Varian (Google, UC Berkeley): Varian’s analyses of cloud economics demonstrate the massive fixed-cost base and “public utility” logic of hyperscalers. He has argued that AI and cloud converge when scale enables learning (data/usage) to drive further cost and performance improvements.
  • Andrew Ng, Yann LeCun, Geoffrey Hinton: Pioneering practitioners of deep learning and large language models, whose work laid the foundations for the empirical “scaling laws” now driving the AI infrastructure buildout—i.e., the observation that model capability improves predictably with the scale of data, compute, and parameter count (illustrated in the sketch after this list).
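Those scaling laws have a characteristic power-law shape. The sketch below uses constants loosely in the range reported by Kaplan et al. (2020) for language models, as an illustration rather than a prediction:

```python
# Power-law scaling of loss with model size: L(N) = (Nc / N)**alpha.
N_C = 8.8e13      # assumed critical parameter scale (illustrative)
ALPHA = 0.076     # assumed scaling exponent (illustrative)

for params in (1e8, 1e9, 1e10, 1e11, 1e12):
    loss = (N_C / params) ** ALPHA
    print(f"{params:.0e} params -> loss = {loss:.2f}")
# Loss falls smoothly but ever more slowly: each 10x in parameters buys a
# smaller absolute improvement, which is why the buildout is so compute-hungry.
```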

Why This Matters Now

Organisations at the digital frontier—notably Microsoft and OpenAI—are now locked in a scale game that is reshaping both industry structure and the global economy. The cost, complexity, and learning rate needed to operate at hyperscale mean that “commodities” (compute, storage, even software itself) cease to be generic. Instead, they become deeply differentiated by embedded knowledge, utilisation efficiency, supply-chain integration, and the ability to orchestrate investments across cycles of innovation.

Nadella’s observation underscores a reality that now applies well beyond technology: the compounding of competitive advantage at scale has become the critical determinant of sector leadership and value capture. This logic is transforming industries as diverse as finance, logistics, pharmaceuticals, and manufacturing—where the ability to build, learn, and optimise at scale fundamentally redefines what was once considered “commodity” business.

In summary: Satya Nadella’s words reflect not only Microsoft’s strategy but a broader economic and technological transformation, deeply rooted in the theory and practice of scale, network effects, and organisational learning. Theorists and practitioners—from Chandler and BCG to Christensen and Varian—have analysed these effects for decades, but the age of AI and cloud has made their insights more decisive than ever. At the heart of it: scale—properly understood and operationalised—remains the ultimate competitive lever.

read more
Quote: David Solomon – Goldman Sachs CEO

Quote: David Solomon – Goldman Sachs CEO

“Generally speaking people hate change. It’s human nature. But change is super important. It’s inevitable. In fact, on my desk in my office I have a little plaque that says ‘Change or die.’ As a business leader, one of the perspectives you have to have is that you’ve got to constantly evolve and change.” – David Solomon – Goldman Sachs CEO

The quoted insight comes from David M. Solomon, Chief Executive Officer and Chairman of Goldman Sachs, a role he has held since 2018. It was delivered during a high-profile interview at The Economic Club of Washington, D.C., 30 October 2025, as Solomon reflected on the necessity of adaptability both personally and as a leader within a globally significant financial institution.

His statement is emblematic of the strategic philosophy that has defined Solomon’s executive tenure. He uses the ‘Change or die’ principle to highlight the existential imperative for renewal in business, particularly in the context of technological transformation, competitive dynamics, and economic disruption.

Solomon’s leadership at Goldman Sachs has been characterised by deliberate modernisation. He has overseen the integration of advanced technology, notably in artificial intelligence and fintech, implemented culture and process reforms, adapted workforce practices, and expanded strategic initiatives in sustainable finance. His approach blends operational rigour with entrepreneurial responsiveness – a mindset shaped both by his formative years in high-yield credit markets at Drexel Burnham and Bear Stearns, and by his rise through leadership roles at Goldman Sachs.

His remark on change was prompted by questions of business resilience and the need for constant adaptation amidst macroeconomic uncertainty, regulatory flux, and the competitive imperatives of Wall Street. For Solomon, resisting change is an instinct, but enabling it is a necessity for long-term health and relevance — especially for institutions in rapidly converging markets.

About David M. Solomon

  • Born 1962, Hartsdale, New York.
  • Hamilton College graduate (BA Political Science).
  • Early career: Irving Trust, Drexel Burnham, Bear Stearns.
  • Joined Goldman Sachs as a partner in 1999, advancing through financing and investment banking leadership.
  • CEO from October 2018, Chairman from January 2019.
  • Known for a modernisation agenda, openness to innovation and talent, commitment to client service and culture reform.
  • Outside finance: Philanthropy, board service, and a second career as electronic dance music DJ “DJ D-Sol”, underscoring a multifaceted approach to leadership and personal renewal.

Theoretical Backstory: Leading Thinkers on Change and Organisational Adaptation

Solomon’s philosophy echoes decades of foundational theory in business strategy and organisational behaviour:

Charles Darwin (1809–1882)
While not a business theorist, Darwin’s principle of “survival of the fittest” is often cited in strategic literature to emphasise the adaptive imperative — those best equipped to change, survive.

Peter Drucker (1909–2005)
Drucker, regarded as the father of modern management, wrote extensively on innovation, entrepreneurial management and the need for “planned abandonment.” He argued, “The greatest danger in times of turbulence is not the turbulence; it is to act with yesterday’s logic.” Drucker’s legacy forms a pillar of contemporary change management, advising leaders not only to anticipate change but to institutionalise it.

John Kotter (b. 1947)
Kotter’s model for Leading Change remains a classic in change management. His eight-step framework starts with establishing a sense of urgency and is grounded in the idea that successful transformation is both necessary and achievable only with decisive leadership, clear vision, and broad engagement. Kotter demonstrated that people’s resistance to change is natural, but can be overcome through structured actions and emotionally resonant leadership.

Clayton Christensen (1952–2020)
Christensen’s work on disruptive innovation clarified how incumbents often fail by ignoring, dismissing, or underinvesting in change — even when it is inevitable. His concept of the “Innovator’s Dilemma” remains seminal, showing that leaders must embrace change not as an abstract imperative but as a strategic necessity, lest they be replaced or rendered obsolete.

Rosabeth Moss Kanter
Kanter’s work focuses on the human dynamics of change, the importance of culture, empowerment, and the “innovation habit” in organisations. She holds that the secret to business success is “constant, relentless innovation” and that resistance to change is deeply psychological, calling for leaders to engineer positive environments for innovation.

Integration: The Leadership Challenge

Solomon’s ethos channels these frameworks into practical executive guidance. For business leaders, particularly in financial services and Fortune 500 firms, the lesson is clear: inertia is lethal; organisational health depends on reimagining processes, culture, and client engagement for tomorrow’s challenges. The psychological aversion to change must be managed actively at all levels — from the boardroom to the front line.

In summary, the context of Solomon’s quote reflects not only a personal credo but also the consensus of generations of theoretical and practical leadership: only those prepared to “change or die” can expect to thrive and endure in an era defined by speed, disruption, and relentless unpredictability.

read more
Quote: Andrej Karpathy – Ex-OpenAI, Ex-Tesla AI

Quote: Andrej Karpathy – Ex-OpenAI, Ex-Tesla AI

“[With AI] we’re not building animals. We’re building ghosts or spirits.” – Andrej Karpathy – Ex-OpenAI, Ex-Tesla AI

Andrej Karpathy, renowned for his leadership roles at OpenAI and Tesla’s Autopilot programme, has been at the centre of advances in deep learning, neural networks, and applied artificial intelligence. His work traverses both academic research and industrial deployment, granting him a panoramic perspective on the state and direction of AI.

When Karpathy refers to building “ghosts or spirits,” he is drawing a conceptual line between biological intelligence—the product of millions of years of evolution—and artificial intelligence as developed through data-driven, digital systems. In his view, animals are “baked in” with instincts, embodiment, and innate learning capacities shaped by evolution, a process unfolding over geological timeframes. By contrast, today’s AI models are “ghosts” in the sense that they are ethereal, fully digital artefacts, trained to imitate human-generated data rather than to evolve or learn through direct interaction with the physical world. They lack bodily instincts and the evolutionary substrate that endows animals with survival strategies and adaptation mechanisms.

Karpathy describes the pre-training process that underpins large language models as a form of “crappy evolution”—a shortcut that builds digital entities by absorbing the statistical patterns of internet-scale data without the iterative adaptation of embodied beings. Consequently, these models are not “born” into the world like animals with built-in survival machinery; instead, they are bootstrapped as “ghosts,” imitating but not experiencing life.

 

The Cognitive Core—Karpathy’s Vision for AI Intelligence

Karpathy’s thinking has advanced towards the critical notion of the “cognitive core”: the kernel of intelligence responsible for reasoning, abstraction, and problem-solving, abstracted away from encyclopaedic factual knowledge. He argues that the true magic of intelligence is not in the passive recall of data, but in the flexible, generalisable ability to manipulate ideas, solve problems, and intuit patterns—capabilities that a system exhibits even when deprived of pre-programmed facts or exhaustive memory.

He warns against confusing memorisation (the stockpiling of internet facts within a model) with general intelligence, which arises from this cognitive core. The most promising path, in his view, is to isolate and refine this core, stripping away the accretions of memorised data, thereby developing something akin to a “ghost” of reasoning and abstraction rather than an “animal” shaped by instinct and inheritance.

This approach entails significant trade-offs: a cognitive core lacks the encyclopaedic reach of today’s massive models, but gains in adaptability, transparency, and the capacity for compositional, creative thought. By foregrounding reasoning machinery, Karpathy posits that AI can begin to mirror not the inflexibility of animals, but the open-ended, reflective qualities that characterise high-level problem-solving.

 

Karpathy’s Journey and Influence

Karpathy’s influence is rooted in a career spent on the frontier of AI research and deployment. His early proximity to Geoffrey Hinton at the University of Toronto placed him at the launch-point of the convolutional neural networks revolution, which fundamentally reshaped computer vision and pattern recognition.

At OpenAI, Karpathy contributed to an early focus on training agents to master digital environments (such as Atari games), a direction that, in retrospect, he considers premature. He found greater promise in systems that could interact with the digital world through knowledge work—precursors to today’s agentic models—a vision he is now helping to realise through ongoing work in educational technology and AI deployment.

Later, at Tesla, he directed the transformation of autonomous vehicles from demonstration to product, gaining hard-won appreciation for the “march of nines”—the reality that progressing from system prototypes that work 90% of the time to those that work 99.999% of the time requires exponentially more effort. This experience informs his scepticism towards aggressive timelines for “AGI” and his insistence on the qualitative differences between robust system deployment and controlled demonstrations.
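
To make the arithmetic of the “march of nines” concrete, here is a minimal illustrative sketch in Python (the framing of nines is Karpathy’s; the numbers are generic) showing how the tolerated failure budget shrinks tenfold with each additional nine of reliability:

```python
# The "march of nines": each extra nine of reliability cuts the
# permissible failure rate tenfold, so the failure budget shrinks
# exponentially as the target rises from 90% to 99.999%.

for nines in range(1, 6):
    reliability = 1 - 10 ** -nines            # 0.9, 0.99, ..., 0.99999
    failures_per_million = 10 ** (6 - nines)  # allowed failures per 1e6 runs
    print(f"{nines} nine(s): {reliability:.5f} reliable, "
          f"{failures_per_million:,} failures per million runs")
```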

 

The Leading Theorists Shaping the Debate

Karpathy’s conceptual framework emerges amid vibrant discourse within the AI community, shaped by several seminal thinkers:

  • Richard Sutton: general intelligence emerges through learning algorithms honed by evolution (the “bitter lesson”). Sutton advocates building “animals” via RL and continual learning; Karpathy sees modern AI as ghosts—data-trained, not evolved.
  • Geoffrey Hinton: neural networks model learning and perception as statistical pattern discovery. Hinton’s legacy underpins the digital cortex, but Karpathy stresses what is missing: embodied instincts and continual memory.
  • Yann LeCun: convolutional neural networks and representation learning for perceptual tasks. LeCun’s work forms part of the “cortex”, but Karpathy highlights the missing brain structures and instincts for full generality.

Sutton’s “bitter lesson” posits that scale and generic algorithms, rather than domain-specific tricks, ultimately win—suggesting a focus on evolving animal-like intelligence. Karpathy, however, notes that current development practices, with their reliance on dataset imitation, sidestep the deep embodiment and evolutionary learning that define animal cognition. Instead, AI today creates digital ghosts—entities whose minds are not grounded in physical reality, but in the manifold of internet text and data.

Hinton and LeCun supply the neural and architectural foundations—the “cortex” and reasoning traces—while both Karpathy and their critics note the absence of rich, consolidated memory (the hippocampus analogue), instincts (amygdala), and the capacity for continual, self-motivated world interaction.

Why “Ghosts,” Not “Animals”?

The distinction is not simply philosophical. It carries direct consequences for:

  • Capabilities: AI “ghosts” excel at pattern reproduction, simulation, and surface reasoning but lack the embodied, instinctual grounding (spatial navigation, sensorimotor learning) of animals.
  • Limitations: They are subject to model collapse, producing uniform, repetitive outputs, lacking the spontaneous creativity and entropy seen in human (particularly child) cognition.
  • Future Directions: The field is now oriented towards distilling this cognitive core, seeking a scalable, adaptable reasoning engine—compact, efficient, and resilient to overfitting—rather than continuing to bloat models with ever more static memory.

This lens sharpens expectations: the way forward is not to mimic biology in its totality, but to pursue the unique strengths and affordances of a digital, disembodied intelligence—a spirit of the datasphere, not a beast evolved in the forest.

 

Broader Significance

Karpathy’s “ghosts” metaphor crystallises a critical moment in the evolution of AI as a discipline. It signals a turning point: the shift from brute-force memorisation of the internet to intelligent, creative algorithms capable of abstraction, reasoning, and adaptation.

This reframing is shaping not only the strategic priorities of the most advanced labs, but also the philosophical and practical questions underpinning the next decade of AI research and deployment. As AI becomes increasingly present in society, understanding its nature—not as an artificial animal, but as a digital ghost—will be essential to harnessing its strengths and mitigating its limitations.

Quote: Sholto Douglas – Anthropic

“People have said we’re hitting a plateau every month for three years… I look at how models are produced and every part could be improved. The training pipeline is primitive, held together by duct tape, best efforts, and late nights. There’s so much room to grow everywhere.” – Sholto Douglas – Anthropic

Sholto Douglas made the statement during a major public podcast interview in October 2025, coinciding with Anthropic’s release of Claude Sonnet 4.5—at the time, the world’s strongest and most “agentic” AI coding model. The comment specifically rebuts repeated industry and media assertions that large AI models have reached a ceiling or are slowing in progress. Douglas argues the opposite: that the field is in a phase of accelerating advancement, driven by transformative hardware investment (a “compute super-cycle”), new algorithmic techniques (particularly reinforcement learning and test-time compute), and the persistent “primitive” state of today’s AI engineering infrastructure.

He draws an analogy with early-stage, improvisational systems: the models are held together “by duct tape, best efforts, and late nights,” making clear that immense headroom for improvement remains at every level, from training data pipelines and distributed infrastructure to model architecture and reward design. As a result, every new benchmark and capability reveals further unrealised opportunity, with measurable progress charted month after month.

Douglas’s deeper implication is that claims of a plateau often arise from surface-level analysis or the “saturation” of public benchmarks, not from a rigorous understanding of what is technically possible or how much scale remains untapped across the technical stack.

Sholto Douglas: Career Trajectory and Perspective

Sholto Douglas is a leading member of Anthropic’s technical staff, focused on scaling reinforcement learning and agentic AI. His unconventional journey illustrates both the new talent paradigm and the nature of breakthrough AI research today:

  • Early Life and Mentorship: Douglas grew up in Australia, where he benefited from unusually strong academic and athletic mentorship. His mother, an accomplished physician frustrated by systemic barriers, instilled discipline and a systemic approach; his Olympic-level fencing coach provided a first-hand experience of how repeated, directed effort leads to world-class performance.
  • Academic Formation: He studied computer science and robotics as an undergraduate, with a focus on practical experimentation and a global mindset. A turning point was reading the “scaling hypothesis” for AGI, convincing him that progress on artificial general intelligence was feasible within a decade—and worth devoting his career to.
  • Independent Innovation: As a student, Douglas built “bedroom-scale” foundation models for robotics, working independently on large-scale data collection, simulation, and early adoption of transformer-based methods. This entrepreneurial approach—demonstrating initiative and technical depth without formal institutional backing—proved decisive.
  • Google (Gemini and DeepMind): His independent work brought him to Google, where he joined just before the release of ChatGPT, in time to witness and help drive the rapid unification and acceleration of Google’s AI efforts (Gemini, Brain, DeepMind). He co-designed new inference infrastructure that reduced costs and worked at the intersection of large-scale learning, reinforcement learning, and applied reasoning.
  • Anthropic (from 2025): Drawn by Anthropic’s focus on measurable, near-term economic impact and deep alignment work, Douglas joined to lead and scale reinforcement learning research—helping push the capability frontier for agentic models. He values a culture where every contributor understands and can articulate how their work advances both capability and safety in AI.

Douglas is distinctive for his advocacy of “taste” in AI research, favouring mechanistic understanding and simplicity over clever domain-specific tricks—a direct homage to Richard Sutton’s “bitter lesson.” This perspective shapes his belief that the greatest advances will come not from hiding complexity with hand-crafted heuristics, but from scaling general algorithms and rigorous feedback loops.

 

Intellectual and Scientific Context: The ‘Plateau’ Debate and Leading Theorists

The debate around the so-called “AI plateau” is best understood against the backdrop of core advances and recurring philosophical arguments in machine learning.

The “Bitter Lesson” and Richard Sutton

  • Richard Sutton (University of Alberta, DeepMind), one of the founding figures in reinforcement learning, crystallised the field’s “bitter lesson”: that general, scalable methods powered by increased compute will eventually outperform more elegant, hand-crafted, domain-specific approaches.
  • In practical terms, this means that the field’s recent leaps—from vision to language to coding—are powered less by clever new inductive biases, and more by architectural simplicity plus massive compute and data. Sutton has also maintained that real progress in AI will come from reinforcement learning with minimal task-specific assumptions and maximal data, computation, and feedback.

Yann LeCun and Alternative Paradigms

  • Yann LeCun (Meta, NYU), a pioneer of deep learning, has maintained that the transformer paradigm is limited and that fundamentally novel architectures are necessary for human-like reasoning and autonomy. He argues that unsupervised/self-supervised learning and new world-modelling approaches will be required.
  • LeCun’s disagreement with Sutton’s “bitter lesson” centres on the claim that scaling is not the final answer: new representation learning, memory, and planning mechanisms will be needed to reach AGI.

Shane Legg, Demis Hassabis, and DeepMind

  • DeepMind’s approach has historically been “science-first,” tackling a broad swathe of human intelligence challenges (AlphaGo, AlphaFold, science AI), promoting a research culture that takes long-horizon bets on new architectures (memory-augmented neural networks, world models, differentiable reasoning).
  • Demis Hassabis and Shane Legg (DeepMind co-founders) have advocated for testing a diversity of approaches, believing that the path to AGI is not yet clear—though they too acknowledge the value of massive scale and reinforcement learning.

The Scaling Hypothesis: Gwern’s Essay and the Modern Era

  • The so-called “scaling hypothesis”—the idea that simply making models larger and providing more compute and data will continue yielding improvements—has become the default “bet” for Anthropic, OpenAI, and others; a toy sketch of the underlying power law follows this list. Douglas refers directly to this intellectual lineage as the critical “hinge” moment that set his trajectory.
  • This hypothesis is now being extended into new areas, including agentic systems where long context, verification, memory, and reinforcement learning allow models to reliably pursue complex, multi-step goals semi-autonomously.
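
As a toy illustration of that bet, the canonical scaling-law form from Kaplan et al. (2020) expresses loss as a smooth power law in parameter count. A minimal sketch, assuming constants close to the published fits (treat them as illustrative, not authoritative):

```python
# Power-law scaling of language-model loss with parameter count,
# in the spirit of Kaplan et al. (2020): L(N) = (N_c / N) ** alpha.
# N_c and alpha approximate the reported fits for non-embedding
# parameters; they are used here only to show the curve's shape.

def loss(n_params: float, n_c: float = 8.8e13, alpha: float = 0.076) -> float:
    return (n_c / n_params) ** alpha

for n in (1e8, 1e9, 1e10, 1e11):
    print(f"{n:.0e} params -> loss ~ {loss(n):.3f}")
```

The point is the smoothness: each order of magnitude of scale buys a predictable reduction in loss, which is what makes the “bet” bankable for the labs.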
 

Summing Up: The Current Frontier

Today, researchers like Douglas are moving beyond the original transformer pre-training paradigm, leveraging multi-axis scaling (pre-training, RL, test-time compute), richer reward systems, and continuous experimentation to drive model capabilities in coding, digital productivity, and emerging physical domains (robotics and manipulation).

Douglas’s quote epitomises the view that performance has not plateaued—every “limitation” encountered is a signpost for further exponential improvement. The modest, “patchwork” nature of current AI infrastructure is itself grounds for optimism: it means there is vast room for optimisation, iteration, and compounding gains in capability.

As the field races into a new era of agentic AI and economic impact, his perspective serves as a grounded, inside-out refutation of technological pessimism and a call to action grounded in both technical understanding and relentless ambition.

Quote: Julian Schrittwieser – Anthropic

“The talk about AI bubbles seemed very divorced from what was happening in frontier labs and what we were seeing. We are not seeing any slowdown of progress.” – Julian Schrittwieser – Anthropic

Those closest to technical breakthroughs are witnessing a pattern of sustained, compounding advancement that is often underestimated by commentators and investors. The perspective underscores how poorly conventional intuition tracks exponential technological progress.

 

Context of the Quote

Schrittwieser delivered these remarks in a 2025 interview on the MAD Podcast, prompted by widespread discourse on the so-called ‘AI bubble’. His key contention is that debate around an AI investment or hype “bubble” feels disconnected from the lived reality inside the world’s top research labs, where the practical pace of innovation remains brisk and outwardly undiminished. He outlines that, according to direct observation and internal benchmarks at labs such as Anthropic, progress remains on a highly consistent exponential curve: “every three to four months, the model is able to do a task that is twice as long as before completely on its own”.
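
Taken literally, that cadence compounds dramatically. A minimal sketch of the arithmetic, assuming a one-hour starting task and a four-month doubling period (both chosen purely for illustration):

```python
# Compounding task horizon under the cadence Schrittwieser describes:
# the task length a model can complete autonomously doubles every few
# months. Starting value and period are illustrative assumptions.

task_hours = 1.0
for month in range(0, 37, 4):  # three years, one doubling per step
    print(f"month {month:2d}: ~{task_hours:,.0f} hour(s) of autonomous work")
    task_hours *= 2
```

Nine doublings turn a one-hour task into roughly five hundred hours of unsupervised work, which is why linear extrapolation so badly misprices the trajectory.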

He draws an analogy to the early days of COVID-19, when exponential growth was invisible until it became overwhelming; the same mathematical processes, Schrittwieser contends, apply to AI system capabilities. While public narratives about bubbles often reference the dot-com era, he highlights a bifurcation: frontier labs sustain robust, revenue-generating trajectories, while the wider AI ecosystem may experience bubble-like effects in valuations. At the core, however, the technology itself continues to improve at a predictably exponential rate, well supported by both qualitative experience and benchmark data.

Schrittwieser’s view, rooted in immediate, operational knowledge, is that the default expectation of a linear future is mistaken: advances in autonomy, reasoning, and productivity are compounding. This means genuinely transformative impacts—such as AI agents that function at expert level or beyond for extended, unsupervised tasks—are poised to arrive sooner than many anticipate.

 

Profile: Julian Schrittwieser

Julian Schrittwieser is one of the world’s leading artificial intelligence researchers, currently based at Anthropic, following a decade as a core scientist at Google DeepMind. Raised in rural Austria, Schrittwieser’s journey from an adolescent fascination with game programming to the vanguard of AI research exemplifies the discipline’s blend of curiosity, mathematical rigour, and engineering prowess. He studied computer science at the Vienna University of Technology, before interning at Google.

Schrittwieser was a central contributor to several historic machine learning milestones, most notably:

 
  • AlphaGo, the first program to defeat a world champion at Go, combining deep neural networks with Monte Carlo Tree Search.
  • AlphaGo Zero and AlphaZero, which generalised the approach to achieve superhuman performance without human examples, through self-play—demonstrating true generality in reinforcement learning.
  • MuZero (as lead author), solving the challenge of mastering environments without even knowing the rules in advance, by enabling the system to learn its own internal, predictive world models—an innovation bringing RL closer to complex, real-world domains.
  • Later work includes AlphaCode (code synthesis), AlphaTensor (algorithmic discovery), and applied advances in Gemini and AlphaProof.

At Anthropic, Schrittwieser is at the frontier of research into scaling laws, reinforcement learning, autonomous agents, and novel techniques for alignment and safety in next-generation AI. True to his pragmatic ethos, he prioritises what directly raises capability and reliability, and advocates for careful, data-led extrapolation rather than speculation.

 

Theoretical Backstory: Exponential AI Progress and Key Thinkers

Schrittwieser’s remarks situate him within a tradition of AI theorists and builders focused on scaling laws, reinforcement learning (RL), and emergent capabilities:

Leading Theorists and Historical Perspective

  • Demis Hassabis: founder of DeepMind and architect of the AlphaGo programme; emphasised general intelligence and the power of RL plus planning. Schrittwieser’s mentor and DeepMind leader, he pioneered RL paradigms beyond games.
  • David Silver: developed many of the breakthroughs underlying AlphaGo, AlphaZero, and MuZero; advanced RL and model-based search methods. A close collaborator of Schrittwieser’s; together they demonstrated the practical scaling of RL.
  • Richard Sutton: articulated reinforcement learning’s centrality in “The Bitter Lesson” (general methods and scalable computation, not handcrafting); advanced temporal-difference methods and RL theory. Mentioned by Schrittwieser as a thought leader shaping the RL paradigm at scale.
  • Alex Ray, Jared Kaplan, Sam McCandlish, and the OpenAI scaling team: quantified AI’s “scaling laws”, the empirical tendency for model performance to improve smoothly with compute, data, and parameter count. Schrittwieser echoes this data-driven, incrementalist philosophy.
  • Ilya Sutskever: co-founder of OpenAI; central to deep learning breakthroughs, scaling, and forecasting emergent capabilities. OpenAI’s work on benchmarks (GDPval) and scaling echoes these insights.

These thinkers converge on several key observations directly reflected in Schrittwieser’s view:

  • Exponential Capability Curves: Consistent advances in performance often surprise those outside the labs due to our poor intuitive grasp of exponentiality—what Schrittwieser terms a repeated “failure to understand the exponential”.
  • Scaling Laws and Reinforcement Learning: Improvements are not just about larger models, but ever-better training, more reliable reinforcement learning, agentic architecture, and robust reward systems—developments Schrittwieser’s work epitomises.
  • Novelty and Emergence: Historically, theorists doubted whether neural models could go beyond sophisticated mimicry; the “Move 37” moment (AlphaGo’s unprecedented move in Go) was a touchstone for true machine creativity, a theme Schrittwieser stresses remains highly relevant today.
  • Bubbles, Productivity, and Market Cycles: Mainstream financial and social narratives may oscillate dramatically, but real capability growth—observable in benchmarks and direct use—has historically marched on undeterred by speculative excesses.
 

Synthesis: Why the Perspective Matters

The quote foregrounds a gap between external perceptions and insider realities. Pioneers like Schrittwieser and his cohort stress that transformative change will not follow a smooth, linear or hype-driven curve, but an exponential, data-backed progression—one that may defy conventional intuition, but is already reshaping productivity and the structure of work.

This moment is not about “irrational exuberance”, but rather the compounding product of theoretical insight, algorithmic audacity, and relentless engineering: the engine behind the next wave of economic and social transformation.

Quote: Andrej Karpathy – Ex-OpenAI, Ex-Tesla AI

“AI is so wonderful because there have been a number of seismic shifts where the entire field has suddenly looked a different way. I’ve maybe lived through two or three of those. I still think there will continue to be some because they come with almost surprising regularity.” – Andrej Karpathy – Ex-OpenAI, Ex-Tesla AI

Andrej Karpathy, one of the most recognisable figures in artificial intelligence, has spent his career at the epicentre of the field’s defining moments in both research and large-scale industry deployment.

Karpathy’s background is defined by deep technical expertise and a front-row seat to AI’s rapid evolution. He studied at the University of Toronto during the early surge of deep learning, where Geoffrey Hinton was championing neural networks, before completing his PhD at Stanford and holding pivotal research positions. His career encompasses key roles at Tesla, where he led the Autopilot vision team, and at OpenAI, contributing to some of the world’s most prominent large language models and generative AI systems. This vantage point has allowed him to participate in, and reflect upon, the discipline’s “seismic shifts”.

Karpathy’s narrative has been shaped by three inflection points:

  • The emergence of deep neural networks from a niche field to mainstream AI, spearheaded by the success of AlexNet and the subsequent shift of the research community toward neural architectures.
  • The drive towards agent-based systems, with early enthusiasm for reinforcement learning (RL) and game-based environments (such as Atari and Go). Karpathy himself was cautious about the utility of games as the true path to intelligence, focusing instead on agents acting within the real digital world.
  • The rise of large language models (LLMs)—transformers trained on vast internet datasets, shifting the locus of AI from task-specific systems to general-purpose models with the ability to perform a broad suite of tasks, and in-context learning.

His reflection on these ‘regular’ paradigm shifts arises from lived experience: “I’ve maybe lived through two or three of those. I still think there will continue to be some because they come with almost surprising regularity.” These moments recalibrate assumptions, redirect research priorities, and set new benchmarks for capability. Karpathy’s practical orientation—building “useful things” rather than targeting biological intelligence or pure AGI—shapes his approach to both innovation and scepticism about hype.

Context of the Quote
In his conversation with podcaster Dwarkesh Patel, Karpathy elaborates on the recurring nature of breakthroughs. He contrasts AI’s rapid, transformative leaps with other scientific fields, noting that in machine learning, scaling up data, compute, and novel architectures can yield abrupt improvements—yet each wave often triggers both excessive optimism and later recalibration. A major point he raises is the lack of linearity: the field does not “smoothly” approach AGI, but rather proceeds via discontinuities, often catalysed by new ideas or techniques that were previously out of favour or overlooked.

Karpathy relates how, early in his career, neural networks were a marginal interest and large-scale “representation learning” was only beginning to be considered viable by a minority in the community. With the advent of AlexNet, the landscape shifted overnight, rapidly making previous assumptions obsolete. Later, the pursuit of RL-driven agents led to a phase where entire research agendas were oriented toward gameplay and synthetic environments—another phase later superseded by the transformer revolution and language models. Karpathy reflects candidly on earlier missteps, as well as the discipline’s collective tendency to over- or under-predict the timetable and trajectory of progress.

Leading Theorists and Intellectual Heritage
The AI revolutions Karpathy describes are inseparable from the influential figures and ideas that have shaped each phase:

  • Geoffrey Hinton: Hailed as the “godfather of AI”, Hinton was instrumental in deep learning’s breakthrough, advancing techniques for training multilayered neural networks and championing representation learning against prevailing orthodoxy.
  • Yann LeCun: Developed convolutional neural networks (CNNs), foundational for computer vision and the 2010s wave of deep learning success.
  • Yoshua Bengio: Co-architect of the deep learning movement and a key figure in developing unsupervised and generative models.
  • Richard Sutton: Principal proponent of reinforcement learning, Sutton articulated the value of “animal-like” intelligence: learning from direct interaction with environments, reward, and adaptation. Sutton’s perspective frequently informs debates about the relationship between model architectures and living intelligence, encouraging a focus on agents and lifelong learning.

Karpathy’s own stance is partly a pragmatic response to this heritage: rather than pursuing analogues of biological brains, he views the productive path as building digital “ghosts”—entities that learn by imitation and are shaped by patterns in data, rather than evolutionary processes.

Beyond individual theorists, the field’s quantum leaps are rooted in a culture of intellectual rivalry and rapid cross-pollination:

  • The convolutional and recurrent networks of the 2010s pushed the boundaries of what neural networks could do.
  • The development and scaling of transformer-based architectures (as in Google’s “Attention is All You Need”) dramatically changed both natural language processing and the structure of the field itself.
  • The introduction of algorithms for in-context learning and large-scale unsupervised pre-training marked a break with hand-crafted representation engineering.

The Architecture of Progress: Seismic Shifts and Pragmatic Tension
Karpathy’s insight is that these shifts are not just about faster hardware or bigger datasets; they reflect the field’s unique ecology—where new methods can rapidly become dominant and overturn accumulated orthodoxy. The combination of open scientific exchange, rapid deployment, and intense commercialisation creates fertile ground for frequent realignment.

His observation on the “regularity” of shifts also signals a strategic realism: each wave brings both opportunity and risk. New architectures (such as transformers or large reinforcement learning agents) frequently overshoot expectations before their real limitations become clear. Karpathy remains measured on both promise and limitation—anticipating continued progress, but cautioning against overpredictions and hype cycles that fail to reckon with the “march of nines” needed to reach true reliability and impact.

Closing Perspective
The context of Karpathy’s quote is an AI ecosystem that advances not through steady accretion, but in leaps—each driven by conceptual, technical, and organisational realignments. As such, understanding progress in AI demands both technical literacy and historical awareness: the sharp pivots that have marked past decades are likely to recur, with equally profound effects on how intelligence is conceived, built, and deployed.

Quote: Jonathan Ross – CEO Groq

“The countries that control compute will control AI. You cannot have compute without energy.” – Jonathan Ross – CEO Groq

Jonathan Ross stands at the intersection of geopolitics, energy economics, and technological determinism. As founder and CEO of Groq, the Silicon Valley firm challenging Nvidia’s dominance in AI infrastructure, Ross articulated a proposition of stark clarity during his September 2025 appearance on Harry Stebbings’ 20VC podcast: “The countries that control compute will control AI. You cannot have compute without energy.”

This observation transcends technical architecture. Ross is describing the emergence of a new geopolitical currency—one where computational capacity, rather than traditional measures of industrial might, determines economic sovereignty and strategic advantage in the 21st century. His thesis rests on an uncomfortable reality: artificial intelligence, regardless of algorithmic sophistication or model architecture, cannot function without the physical substrate of compute. And compute, in turn, cannot exist without abundant, reliable energy.

The Architecture of Advantage

Ross’s perspective derives from direct experience building the infrastructure that powers modern AI. At Google, he initiated what became the Tensor Processing Unit (TPU) project—custom silicon that allowed the company to train and deploy machine learning models at scale. This wasn’t academic research; it was the foundation upon which Google’s AI capabilities were built. When Amazon and Microsoft attempted to recruit him in 2016 to develop similar capabilities, Ross recognised a pattern: the concentration of advanced AI compute in too few hands represented a strategic vulnerability.

His response was to establish Groq in 2016, developing Language Processing Units optimised for inference—the phase where trained models actually perform useful work. The company has since raised over $3 billion and achieved a valuation approaching $7 billion, positioning itself as one of Nvidia’s most credible challengers in the AI hardware market. But Ross’s ambitions extend beyond corporate competition. He views Groq’s mission as democratising access to compute—creating abundant supply where artificial scarcity might otherwise concentrate power.

The quote itself emerged during a discussion about global AI competitiveness. Ross had been explaining why European nations, despite possessing strong research talent and model development capabilities (Mistral being a prominent example), risk strategic irrelevance without corresponding investment in computational infrastructure and energy capacity. A brilliant model without compute to run it, he argued, will lose to a mediocre model backed by ten times the computational resources. This isn’t theoretical—it’s the lived reality of the current AI landscape, where rate limits and inference capacity constraints determine what services can scale and which markets can be served.

The Energy Calculus

The energy dimension of Ross’s statement carries particular weight. Modern AI training and inference require extraordinary amounts of electrical power. The hyperscalers—Google, Microsoft, Amazon, Meta—are each committing tens of billions of dollars annually to AI infrastructure, with significant portions dedicated to data centre construction and energy provision. Microsoft recently announced it wouldn’t make certain GPU clusters available through Azure because the company generated higher returns using that compute internally rather than renting it to customers. This decision, more than any strategic presentation, reveals the economic value density of AI compute.

Ross draws explicit parallels to the early petroleum industry: a period of chaotic exploration where a few “gushers” delivered extraordinary returns whilst most ventures yielded nothing. In this analogy, compute is the new oil—a fundamental input that determines economic output and strategic positioning. But unlike oil, compute demand doesn’t saturate. Ross describes AI demand as “insatiable”: if OpenAI or Anthropic received twice their current inference capacity, their revenue would nearly double within a month. The bottleneck isn’t customer appetite; it’s supply.

This creates a concerning dynamic for nations without indigenous energy abundance or the political will to develop it. Ross specifically highlighted Europe’s predicament: impressive AI research capabilities undermined by insufficient energy infrastructure and regulatory hesitance around nuclear power. He contrasted this with Norway’s renewable capacity (80% wind utilisation) or Japan’s pragmatic reactivation of nuclear facilities—examples of countries aligning energy policy with computational ambition. The message is uncomfortable but clear: technical sophistication in model development cannot compensate for material disadvantage in energy and compute capacity.

Strategic Implications

The geopolitical dimension becomes more acute when considering China’s position. Ross noted that whilst Chinese models like DeepSeek may be cheaper to train (through various optimisations and potential subsidies), they remain more expensive to run at inference—approximately ten times more costly per token generated. This matters because inference, not training, determines scalability and market viability. China can subsidise AI deployment domestically, but globally—what Ross terms the “away game”—cost structure determines competitiveness. Countries cannot simply construct nuclear plants at will; energy infrastructure takes decades to build.

This asymmetry creates opportunity for nations with existing energy advantages. The United States, despite higher nominal costs, benefits from established infrastructure and diverse energy sources. However, Ross’s framework suggests this advantage is neither permanent nor guaranteed. Control over compute requires continuous investment in both silicon capability and energy generation. Nations that fail to maintain pace risk dependency—importing not just technology, but the capacity for economic and strategic autonomy.

The corporate analogy proves instructive. Ross predicts that every major AI company—OpenAI, Anthropic, Google, and others—will eventually develop proprietary chips, not necessarily to outperform Nvidia technically, but to ensure supply security and strategic control. Nvidia currently dominates not purely through superior GPU architecture, but through control of high-bandwidth memory (HBM) supply chains. Building custom silicon allows organisations to diversify supply and avoid allocation constraints that might limit their operational capacity. What applies to corporations applies equally to nations: vertical integration in compute infrastructure is increasingly a prerequisite for strategic autonomy.

The Theorists and Precedents

Ross’s thesis echoes several established frameworks in economic and technological thought, though he synthesises them into a distinctly contemporary proposition.

Harold Innis, the Canadian economic historian, developed the concept of “staples theory” in the 1930s and 1940s—the idea that economies organised around the extraction and export of key commodities (fur, fish, timber, oil) develop institutional structures, trade relationships, and power dynamics shaped by those materials. Innis later extended this thinking to communication technologies in works like Empire and Communications (1950) and The Bias of Communication (1951), arguing that the dominant medium of a society shapes its political and social organisation. Ross’s formulation applies Innisian logic to computational infrastructure: the nations that control the “staples” of the AI economy—energy and compute—will shape the institutional and economic order that emerges.

Carlota Perez, the Venezuelan-British economist, provided a framework for understanding technological revolutions in Technological Revolutions and Financial Capital (2002). Perez identified how major technological shifts (steam power, railways, electricity, mass production, information technology) follow predictable patterns: installation phases characterised by financial speculation and infrastructure building, followed by deployment phases where the technology becomes economically productive. Ross’s observation about current AI investment—massive capital expenditure by hyperscalers, uncertain returns, experimental deployment—maps cleanly onto Perez’s installation phase. The question, implicit in his quote, is which nations will control the infrastructure when the deployment phase arrives and returns become tangible.

W. Brian Arthur, economist and complexity theorist, articulated the concept of “increasing returns” in technology markets through works like Increasing Returns and Path Dependence in the Economy (1994). Arthur demonstrated how early advantages in technology sectors compound through network effects, learning curves, and complementary ecosystems—creating winner-take-most dynamics rather than the diminishing returns assumed in classical economics. Ross’s emphasis on compute abundance follows this logic: early investment in computational infrastructure creates compounding advantages in AI capability, which drives economic returns, which fund further compute investment. Nations entering this cycle late face escalating barriers to entry.

Joseph Schumpeter, the Austrian-American economist, introduced the concept of “creative destruction” in Capitalism, Socialism and Democracy (1942)—the idea that economic development proceeds through radical innovation that renders existing capital obsolete. Ross explicitly invokes Schumpeterian dynamics when discussing the risk that next-generation AI chips might render current hardware unprofitable before it amortises. This uncertainty amplifies the strategic calculus: nations must invest in compute infrastructure knowing that technological obsolescence might arrive before economic returns materialise. Yet failing to invest guarantees strategic irrelevance.

William Stanley Jevons, the 19th-century English economist, observed what became known as Jevons Paradox in The Coal Question (1865): as technology makes resource use more efficient, total consumption typically increases rather than decreases because efficiency makes the resource more economically viable for new applications. Ross applies this directly to AI compute, noting that as inference becomes cheaper (through better chips or more efficient models), demand expands faster than costs decline. This means the total addressable market for compute grows continuously—making control over production capacity increasingly valuable.
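
A stylised version of the Jevons arithmetic: under a constant-elasticity demand curve with elasticity greater than one, any fall in cost per token raises total spend on compute. A minimal sketch, assuming an elasticity of 1.5 (an illustrative figure, not an empirical estimate):

```python
# Toy Jevons-paradox arithmetic: if demand for inference is
# price-elastic (elasticity > 1), cheaper compute raises total
# spend. The elasticity of 1.5 is an illustrative assumption.

elasticity = 1.5
for cost_ratio in (0.5, 0.1):                      # cost falls to 50%, 10%
    demand_multiplier = cost_ratio ** -elasticity  # constant-elasticity demand
    spend_multiplier = cost_ratio * demand_multiplier
    print(f"cost at {cost_ratio:.0%} of original: demand x{demand_multiplier:.1f}, "
          f"total spend x{spend_multiplier:.2f}")
```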

Nicholas Georgescu-Roegen, the Romanian-American economist, pioneered bioeconomics and introduced entropy concepts to economic analysis in The Entropy Law and the Economic Process (1971). Georgescu-Roegen argued that economic activity is fundamentally constrained by thermodynamic laws—specifically, that all economic processes dissipate energy and cannot be sustained without continuous energy inputs. Ross’s insistence that “you cannot have compute without energy” is pure Georgescu-Roegen: AI systems, regardless of algorithmic elegance, are bound by physical laws. Compute is thermodynamically expensive—training large models requires megawatts, inference at scale requires sustained power generation. Nations without access to abundant energy cannot sustain AI economies, regardless of their talent or capital.

Mancur Olson, the American economist and political scientist, explored collective action problems and the relationship between institutional quality and economic outcomes in works like The Rise and Decline of Nations (1982). Olson demonstrated how established interest groups can create institutional sclerosis that prevents necessary adaptation. Ross’s observations about European regulatory hesitance and infrastructure underinvestment reflect Olsonian dynamics: incumbent energy interests, environmental lobbies, and risk-averse political structures prevent the aggressive nuclear or renewable expansion required for AI competitiveness. Meanwhile, nations with different institutional arrangements (or greater perceived strategic urgency) act more decisively.

Paul Romer, the American economist and Nobel laureate, developed endogenous growth theory, arguing in works like “Endogenous Technological Change” (1990) that economic growth derives from deliberate investment in knowledge and technology rather than external factors. Romer’s framework emphasises the non-rivalry of ideas (knowledge can be used by multiple actors simultaneously) but the rivalry of physical inputs required to implement them. Ross’s thesis fits perfectly: AI algorithms can be copied and disseminated, but the computational infrastructure to deploy them at scale cannot. This creates a fundamental asymmetry that determines economic power.

The Historical Pattern

History provides sobering precedents for resource-driven geopolitical competition. Britain’s dominance in the 19th century rested substantially on coal abundance that powered industrial machinery and naval supremacy. The United States’ 20th-century ascendance correlated with petroleum access and the industrial capacity to refine and deploy it. Oil-dependent economies in the Middle East gained geopolitical leverage disproportionate to their population or industrial capacity purely through energy reserves.

Ross suggests we are witnessing the emergence of a similar dynamic, but with a critical difference: AI compute is both resource-intensive (requiring enormous energy) and productivity-amplifying (making other economic activity more efficient). This creates a multiplicative effect where compute advantages compound through both direct application (better AI services) and indirect effects (more efficient production of goods and services across the economy). A nation with abundant compute doesn’t just have better chatbots—it has more efficient logistics, agricultural systems, manufacturing processes, and financial services.

The “away game” concept Ross introduced during the podcast discussion adds a critical dimension. China, despite substantial domestic AI investment and capabilities, faces structural disadvantages in global competition because international customers cannot simply replicate China’s energy subsidies or infrastructure. This creates opportunities for nations with more favourable cost structures or energy profiles, but only if they invest in both compute capacity and energy generation.

The Future Ross Envisions

Throughout the podcast, Ross painted a vision of AI-driven abundance that challenges conventional fears of technological unemployment. He predicts labour shortages, not mass unemployment, driven by three mechanisms: deflationary pressure (AI makes goods and services cheaper), workforce opt-out (people work less as living costs decline), and new industry creation (entirely new job categories emerge, like “vibe coding”—programming through natural language rather than formal syntax).

This optimistic scenario depends entirely on computational abundance. If compute remains scarce and concentrated, AI benefits accrue primarily to those controlling the infrastructure. Ross’s mission with Groq—creating faster deployment cycles (six months versus two years for GPUs), operating globally distributed data centres, optimising for cost efficiency rather than margin maximisation—aims to prevent that concentration. But the same logic applies at the national level. Countries without indigenous compute capacity will import AI services, capturing some productivity benefits but remaining dependent on external providers for the infrastructure that increasingly underpins economic activity.

The comparison Ross offers—LLMs as “telescopes of the mind”—is deliberately chosen. Galileo’s telescope revolutionised human understanding but required specific material capabilities to construct and use. Nations without optical manufacturing capacity could not participate in astronomical discovery. Similarly, nations without computational and energy infrastructure cannot participate fully in the AI economy, regardless of their algorithmic sophistication or research talent.

Conclusion

Ross’s statement—”The countries that control compute will control AI. You cannot have compute without energy”—distils a complex geopolitical and economic reality into stark clarity. It combines Innisian materialism (infrastructure determines power), Schumpeterian dynamism (innovation renders existing capital obsolete), Jevonsian counterintuition (efficiency increases total consumption), and Georgescu-Roegen’s thermodynamic constraints (economic activity requires energy dissipation).

The implications are uncomfortable for nations unprepared to make the necessary investments. Technical prowess in model development provides no strategic moat if the computational infrastructure to deploy those models remains controlled elsewhere. Energy abundance, or the political will to develop it, becomes a prerequisite for AI sovereignty. And AI sovereignty increasingly determines economic competitiveness across sectors.

Ross occupies a unique vantage point—neither pure academic nor disinterested observer, but an operator building the infrastructure that will determine whether his prediction proves correct. Groq’s valuation and customer demand suggest the market validates his thesis. Whether nations respond with corresponding urgency remains an open question. But the framework Ross articulates will likely define strategic competition for the remainder of the decade: compute as currency, energy as prerequisite, and algorithmic sophistication as necessary but insufficient for competitive advantage.

Quote: J.W. Stephens – Author

“Be the person your dog thinks you are!” – J.W. Stephens – Author

The quote “Be the person your dog thinks you are!” represents a profound philosophical challenge wrapped in disarming simplicity. It invites us to examine the gap between our idealised selves and our everyday reality through the lens of unconditional canine devotion. This seemingly light-hearted exhortation carries surprising depth when examined within the broader context of authenticity, aspiration and the moral psychology of personal development.

The Author and the Quote’s Origins

J.W. Stephens, a seventh-generation native Texan, has spent considerable time travelling and living across various locations in Texas and internationally. Whilst little biographical detail about the author is publicly available, the quote itself reveals a distinctively American sensibility—one that combines practical wisdom with accessible moral instruction. The invocation of dogs as moral exemplars reflects a cultural tradition deeply embedded in American life, where the human-canine bond serves as both comfort and conscience.

The brilliance of Stephens’ formulation lies in its rhetorical structure. By positioning the dog’s perception as the aspirational standard, the quote accomplishes several objectives simultaneously: it acknowledges our frequent moral shortcomings, suggests that we already possess knowledge of higher standards, and implies that achieving those standards is within reach. The dog becomes both witness and ideal reader—uncritical yet somehow capable of perceiving our better nature.

The quote functions as what philosophers might term a “regulative ideal”—not a description of what we are, but a vision of what we might become. Dogs, in their apparent inability to recognise human duplicity or moral inconsistency, treat their owners as wholly trustworthy, infinitely capable, and fundamentally good. This perception, whether accurate or illusory, creates a moral challenge: can we rise to meet it?

Philosophical Foundations: Authenticity and the Divided Self

The intellectual lineage underpinning this seemingly simple maxim extends deep into Western philosophical tradition, touching upon questions of authenticity, self-knowledge, and moral psychology that have preoccupied thinkers for millennia.

Søren Kierkegaard (1813-1855) stands as perhaps the most important theorist of authenticity in Western philosophy. The Danish philosopher argued that modern life creates a condition he termed “despair”—not necessarily experienced as anguish, but as a fundamental disconnection from one’s true self. Kierkegaard distinguished between the aesthetic, ethical, and religious stages of existence, arguing that most people remain trapped in the aesthetic stage, living according to immediate gratification and social conformity rather than choosing themselves authentically. His concept of “becoming who you are” anticipates Stephens’ formulation, though Kierkegaard’s vision is considerably darker and more demanding. For Kierkegaard, authentic selfhood requires a “leap of faith” and acceptance of radical responsibility for one’s choices. The dog’s unwavering faith in its owner might serve, in Kierkegaardian terms, as a model of the absolute commitment required for authentic existence.

Jean-Paul Sartre (1905-1980) developed Kierkegaard’s insights in a secular, existentialist direction. Sartre’s notion of “bad faith” (mauvaise foi) describes the human tendency to deceive ourselves about our freedom and responsibility. We pretend we are determined by circumstances, social roles, or past choices when we remain fundamentally free. Sartre argued that consciousness is “condemned to be free”—we cannot escape the burden of defining ourselves through our choices. The gap between who we are and who we claim to be constitutes a form of self-deception Sartre found both universal and contemptible. Stephens’ quote addresses precisely this gap: the dog sees us as we might be, whilst we often live as something less. Sartre would likely appreciate the quote’s implicit demand that we accept responsibility for closing that distance.

Martin Heidegger (1889-1976) approached similar territory through his concept of “authenticity” (Eigentlichkeit) versus “inauthenticity” (Uneigentlichkeit). For Heidegger, most human existence is characterised by “fallenness”—an absorption in the everyday world of “das Man” (the “They” or anonymous public). We live according to what “one does” rather than choosing our own path. Authentic existence requires confronting our own mortality and finitude, accepting that we are “beings-toward-death” who must take ownership of our existence. The dog’s perspective, unburdened by social conformity and living entirely in the present, might represent what Heidegger termed “dwelling”—a mode of being that is at home in the world without falling into inauthenticity.

The Psychology of Self-Perception and Moral Development

Moving from continental philosophy to empirical psychology, several theorists have explored the mechanisms by which we maintain multiple versions of ourselves and how we might reconcile them.

Carl Rogers (1902-1987), the founder of person-centred therapy, developed a comprehensive theory of the self that illuminates Stephens’ insight. Rogers distinguished between the “real self” (who we actually are) and the “ideal self” (who we think we should be). Psychological health, for Rogers, requires “congruence”—alignment between these different self-concepts. When the gap between real and ideal becomes too wide, we experience anxiety and employ defence mechanisms to protect our self-image. Rogers believed that unconditional positive regard—accepting someone fully without judgment—was essential for psychological growth. The dog’s perception of its owner represents precisely this unconditional acceptance, free of the restrictive “conditions of worth” that Rogers saw as impediments to growth. Paradoxically, this complete acceptance might free us to change precisely because we feel safe enough to acknowledge our shortcomings.

Albert Bandura (1925-2021) developed social learning theory and the concept of self-efficacy, which bears directly on Stephens’ formulation. Bandura argued that our beliefs about our capabilities significantly influence what we attempt and accomplish. When we believe others see us as capable (as dogs manifestly do), we are more likely to attempt difficult tasks and persist through obstacles. The dog’s unwavering confidence in its owner might serve as what Bandura termed “vicarious experience”—seeing ourselves succeed through another’s eyes increases our own self-efficacy beliefs. Moreover, Bandura’s later work on moral disengagement explains how we rationalise behaviour that conflicts with our moral standards. The dog’s perspective, by refusing such disengagement, might serve as a corrective to self-justification.

Carol Dweck (born 1946) has explored how our beliefs about human qualities affect achievement and personal development. Her distinction between “fixed” and “growth” mindsets illuminates an important dimension of Stephens’ quote. A fixed mindset assumes that qualities like character, intelligence, and moral worth are static; a growth mindset sees them as developable through effort. The dog’s perception suggests a growth-oriented view: it sees potential rather than limitation, possibility rather than fixed character. The quote implies that we can become what the dog already believes us to be—a quintessentially growth-minded position.

Moral Philosophy and the Ethics of Character

The quote also engages fundamental questions in moral philosophy about the nature of virtue and how character develops.

Aristotle (384-322 BCE) provides the foundational framework for understanding character development in Western thought. His concept of eudaimonia (often translated as “flourishing” or “the good life”) centres on the cultivation of virtues through habituation. For Aristotle, we become virtuous by practising virtuous actions until they become second nature. The dog’s perception might serve as what Aristotle termed the “great-souled man’s” self-regard—not arrogance but appropriate recognition of one’s potential for excellence. However, Aristotle would likely caution that merely aspiring to virtue is insufficient; one must cultivate the practical wisdom (phronesis) to know what virtue requires in specific circumstances and the habituated character to act accordingly.

Immanuel Kant (1724-1804) approached moral philosophy from a radically different angle, yet his thought illuminates Stephens’ insight in unexpected ways. Kant argued that morality stems from rational duty rather than inclination or consequence. The famous categorical imperative demands that we act only according to maxims we could will to be universal laws. Kant’s moral agent acts from duty, not because they feel like it or because they fear consequences. The gap between our behaviour and the dog’s perception might be understood in Kantian terms as the difference between acting from inclination (doing good when convenient) and acting from duty (doing good because it is right). The dog, in its innocence, cannot distinguish these motivations—it simply expects consistent goodness. Rising to meet that expectation would require developing what Kant termed a “good will”—the disposition to do right regardless of inclination.

Lawrence Kohlberg (1927-1987) developed a stage theory of moral development that explains how moral reasoning evolves from childhood through adulthood. Kohlberg identified six stages across three levels: pre-conventional (focused on rewards and punishment), conventional (focused on social approval and law), and post-conventional (focused on universal ethical principles). The dog’s expectation might be understood as operating at a pre-conventional level—it assumes goodness without complex reasoning. Yet meeting that expectation could require post-conventional thinking: choosing to be good not because others are watching but because we have internalised principles of integrity and compassion. The quote thus invites us to use a simple, pre-moral faith as leverage for developing genuine moral sophistication.

Contemporary Perspectives: Positive Psychology and Virtue Ethics

Recent decades have seen renewed interest in character and human flourishing, providing additional context for understanding Stephens’ insight.

Martin Seligman (born 1942), founder of positive psychology, has shifted psychological focus from pathology to wellbeing. His PERMA model identifies five elements of flourishing: Positive emotion, Engagement, Relationships, Meaning, and Accomplishment. The human-dog relationship exemplifies several of these elements, particularly the relationship component. Seligman’s research on “learned optimism” suggests that how we explain events to ourselves affects our wellbeing and achievement. The dog’s relentlessly optimistic view of its owner might serve as a model of the explanatory style Seligman advocates—one that sees setbacks as temporary and successes as reflective of stable, positive qualities.

Christopher Peterson (1950-2012) and Martin Seligman collaborated to identify character strengths and virtues across cultures, resulting in the Values in Action (VIA) classification. Their research identified 24 character strengths organised under six core virtues: wisdom, courage, humanity, justice, temperance, and transcendence. The quote implicitly challenges us to develop these strengths not because doing so maximises utility or fulfils duty, but because integrity demands that our actions align with our self-understanding. The dog sees us as possessing these virtues; the challenge is to deserve that vision.

Alasdair MacIntyre (1929-2025) argued for recovering Aristotelian virtue ethics in modern life. MacIntyre contended that the Enlightenment project of grounding morality in reason alone has failed, leaving us with emotivism—the view that moral judgments merely express feelings. He advocated returning to virtue ethics situated within narrative traditions and communities of practice. The dog-owner relationship might be understood as one such practice—a context with implicit standards and goods internal to it (loyalty, care, companionship) that shape character over time. Becoming worthy of the dog’s trust requires participating authentically in this practice rather than merely going through the motions.

The Human-Animal Bond as Moral Mirror

The specific invocation of dogs, rather than humans, as moral arbiters merits examination. This choice reflects both cultural realities and deeper philosophical insights about the nature of moral perception.

Dogs occupy a unique position in human society. Unlike wild animals, they have co-evolved with humans for thousands of years, developing sophisticated abilities to read human gestures, expressions, and intentions. Yet unlike humans, they appear incapable of the complex social calculations that govern human relationships—judgement tempered by self-interest, conditional approval based on social status, or critical evaluation moderated by personal advantage.

Emmanuel Levinas (1906-1995) developed an ethics based on the “face-to-face” encounter with the Other, arguing that the face of the other person makes an ethical demand on us that precedes rational calculation. Whilst Levinas focused on human faces, his insight extends to our relationships with dogs. The dog’s upturned face, its evident trust and expectation, creates an ethical demand: we are called to respond to its vulnerability and faith. The dog cannot protect itself from our betrayal; it depends entirely on our goodness. This radical vulnerability and trust creates what Levinas termed the “infinite responsibility” we bear toward the Other.

The dog’s perception is powerful precisely because it is not strategic. Dogs do not love us because they have calculated that doing so serves their interests (though it does). They do not withhold affection to manipulate behaviour (though behavioural conditioning certainly plays a role in the relationship). From the human perspective, the dog’s devotion appears absolute and uncalculating. This creates a moral asymmetry: the dog trusts completely, whilst we retain the capacity for betrayal or manipulation. Stephens’ quote leverages this asymmetry, suggesting that we should honour such trust by becoming worthy of it.

Practical Implications: From Aspiration to Action

The quote’s enduring appeal lies partly in its practical accessibility. Unlike philosophical treatises on authenticity or virtue that can seem abstract and demanding, Stephens offers a concrete, imaginable standard. Most dog owners have experienced the moment of returning home to an exuberant welcome, seeing themselves reflected in their dog’s unconditional joy. The gap between that reflection and one’s self-knowledge of moral compromise or character weakness becomes tangible.

Yet the quote’s simplicity risks trivialising genuine moral development. Becoming “the person your dog thinks you are” is not achieved through positive thinking or simple willpower. It requires sustained effort, honest self-examination, and often painful acknowledgement of failure. The philosophical traditions outlined above suggest several pathways:

The existentialist approach demands radical honesty about our freedom and responsibility. We must acknowledge that we choose ourselves moment by moment, that no external circumstance determines our character, and that self-deception about this freedom represents moral failure. The dog’s trust becomes a call to authentic choice.

The Aristotelian approach emphasises habituation and practice. We must identify the virtues we lack, create situations that require practising them, and persist until virtuous behaviour becomes natural. The dog’s expectation provides motivation for this long-term character development.

The psychological approach focuses on congruence and self-efficacy. We must reduce the gap between real and ideal self through honest self-assessment and incremental change, using the dog’s confidence as a source of belief in our capacity to change.

The virtue ethics approach situates character development within practices and traditions. The dog-owner relationship itself becomes a site for developing virtues like responsibility, patience, and compassion through daily engagement.

The Quote in Contemporary Context

Stephens’ formulation resonates particularly in an era characterised by anxiety about authenticity. Social media creates pressure to curate idealised self-presentations whilst simultaneously exposing the gap between image and reality. Political and institutional leaders frequently fail to live up to professed values, creating cynicism about whether integrity is possible or even desirable. In this context, the dog’s uncomplicated faith offers both comfort and challenge—comfort that somewhere we are seen as fundamentally good, challenge that we might actually become so.

The quote also speaks to contemporary concerns about meaning and purpose. In a secular age lacking consensus on ultimate values, the question “How should I live?” lacks obvious answers. Stephens bypasses theological and philosophical complexities by offering an existentially grounded response: live up to the best version of yourself as reflected in uncritical devotion. This moves the question from abstract principle to lived relationship, from theoretical ethics to embodied practice.

Moreover, the invocation of dogs rather than humans as moral mirrors acknowledges a therapeutic insight: sometimes we need non-judgemental acceptance before we can change. The dog provides that acceptance automatically, creating psychological safety within which development becomes possible. In an achievement-oriented culture that often ties worth to productivity and success, the dog’s valuation based simply on existence—you are wonderful because you are you—offers profound relief and, paradoxically, motivation for growth.

The quote ultimately works because it short-circuits our elaborate mechanisms of self-justification. We know we are not as good as our dogs think we are. We know this immediately and intuitively, without needing philosophical argument. The quote simply asks: what if you were? What if you closed that gap? The question haunts precisely because the answer seems simultaneously impossible and within reach—because we have glimpsed that better self in our dog’s eyes and cannot quite forget it.
