A daily bite-size selection of top business content.
PM edition. Issue number 1140
Latest 10 stories.
“I worry a lot about ... Africa. And the reason is: how does Africa benefit from [AI]? There's obviously some benefit of globalisation, better crop yields, and so forth. But without stable governments, strong universities, major industrial structures - which Africa, with some exceptions, lacks - it's going to lag.” - Dr Eric Schmidt - Former Google CEO
Dr Eric Schmidt’s observation stems from his experience at the highest levels of the global technology sector and his acute awareness of both the promise and the precariousness of the coming AI age. His warning about Africa’s risk of lagging in AI adoption and benefit is rooted in today’s uneven technological landscape and long-standing structural challenges facing the continent.
About Dr Eric Schmidt
Dr Eric Schmidt is one of the most influential technology executives of the 21st century. As CEO of Google from 2001 to 2011, he oversaw Google’s transformation from a Silicon Valley start-up into a global technology leader. Schmidt provided the managerial and strategic backbone that enabled Google’s explosive growth, product diversification, and a culture of robust innovation. After Google, he continued as Executive Chairman and Technical Advisor through Google’s restructuring into Alphabet, before transitioning to philanthropic and strategic advisory work. Notably, Schmidt has played significant roles in US national technology strategy, chairing the US National Security Commission on Artificial Intelligence and founding the bipartisan Special Competitive Studies Project, which advises on the intersections of AI, security, and economic competitiveness.
With a background encompassing leading roles at Sun Microsystems, Novell, and advisory positions at Xerox PARC and Bell Labs, Schmidt’s career reflects deep immersion in technology and innovation. He is widely regarded as a strategic thinker on the global opportunities and risks of technology, regularly offering perspective on how AI, digital infrastructure, and national competitiveness are shaping the future economic order.
Context of the Quotation
Schmidt’s remark appeared during a high-level panel at the Future Investment Initiative (FII9), in conversation with Dr Fei-Fei Li of Stanford and Peter Diamandis. The discussion centred on “What Happens When Digital Superintelligence Arrives?” and explored the likely economic, social, and geopolitical consequences of rapid AI advancement.
In this context, Schmidt identified a core risk: that AI’s benefits will accrue unevenly across borders, amplifying existing inequalities. He emphasised that while powerful AI tools may drive exceptional economic value and efficiencies—potentially in the trillions of dollars—these gains are concentrated by network effects, investment, and infrastructure. Schmidt singled out Africa as particularly vulnerable: absent stable governance, strong research universities, or robust industrial platforms—critical prerequisites for technology absorption—Africa faces the prospect of deepening relative underdevelopment as the AI era accelerates. The comment reflects a broader worry in technology and policy circles: global digitisation is likely to amplify rather than repair structural divides unless deliberate action is taken.
Leading Theorists and Thinking on the Subject
The dynamics Schmidt describes are at the heart of an emerging literature on the “AI divide,” digital colonialism, and the geopolitics of AI. Prominent thinkers in these debates include:
- Professor Fei-Fei Li
A leading AI scientist, Dr Li has consistently framed AI’s potential as contingent on human-centred design and equitable access. She highlights the distinction between the democratisation of access (e.g., cheaper healthcare or education via AI) and actual shared prosperity—which hinges on local capacity, policy, and governance. Her work underlines that technical progress does not automatically result in inclusive benefit, validating Schmidt’s concerns.
- Kate Crawford and Timnit Gebru
Both have written extensively on the risks of algorithmic exclusion, surveillance, and the concentration of AI expertise within a handful of countries and firms. In particular, Crawford’s Atlas of AI and Gebru’s leadership in AI ethics foreground how global AI development mirrors deeper resource and power imbalances.
- Nick Bostrom and Stuart Russell
Their theoretical contributions address the broader existential and ethical challenges of artificial superintelligence, but they also underscore risks of centralised AI power—technically and economically.
- Ndubuisi Ekekwe, Bitange Ndemo, and Nanjira Sambuli
These African thought leaders and scholars examine how Africa can leapfrog in digital adoption but caution that profound barriers—structural, institutional, and educational—must be addressed for the continent to benefit from AI at scale.
- Eric Schmidt
Schmidt himself has become a touchstone in policy and technology strategy circles, having chaired the US National Security Commission on Artificial Intelligence. The Commission’s reports warned of a bifurcated world where AI capabilities—and thus economic and security advantages—are ever more concentrated.
Structural Elements Behind the Quote
Schmidt’s remark draws attention to a convergence of factors:
- Institutional robustness
Long-term AI prosperity requires stable governments, responsive regulatory environments, and a track record of supporting investment and innovation. This is lacking in many, though not all, of Africa’s economies.
- Strong universities and research ecosystems
AI innovation is talent- and research-intensive. Weak university networks limit both the creation and absorption of advanced technologies.
- Industrial and technological infrastructure
A mature industrial base enables countries and companies to adapt AI for local benefit. The absence of such infrastructure often results in passive consumption of foreign technology, forgoing participation in value creation.
- Network effects and tech realpolitik
Advanced AI tools, data centres, and large-scale compute power are disproportionately located in a few advanced economies. The ability to partner with these “hyperscalers”—primarily in the US—shapes national advantage. Schmidt argues that regions which fail to make strategic investments or partnerships risk being left further behind.
Summary
Schmidt’s statement is not simply a technical observation but an acute geopolitical and developmental warning. It reflects current global realities where AI’s arrival promises vast rewards, but only for those with the foundational economic, political, and intellectual capital in place. For policy makers, investors, and researchers, the implication is clear: bridging the digital-structural gap requires not only technology transfer but also building resilient, adaptive institutions and talent pipelines that are locally grounded.

“We need something like 10 terawatts in the next 20 years to make LLM systems truly useful to everyone... Nvidia would need to 100× output... You basically need to fill Nevada with solar panels to provide 10 terawatts of power, at a cost around the world’s GDP. Totally crazy.” - Trevor McCourt - Extropic CTO
Trevor McCourt, Chief Technology Officer and co-founder of Extropic, has emerged as a leading voice articulating a paradox at the heart of artificial intelligence advancement: the technology that promises to democratise intelligence across the planet may, in fact, be fundamentally unscalable using conventional infrastructure. His observation about the terawatt imperative captures this tension with stark clarity—a reality increasingly difficult to dismiss as speculative.
Who Trevor McCourt Is
McCourt brings a rare convergence of disciplinary expertise to his role. Trained in mechanical engineering at the University of Waterloo (graduating 2015) and holding advanced credentials from the Massachusetts Institute of Technology (2020), he combines rigorous physical intuition with deep expertise in software systems architecture. Prior to co-founding Extropic, McCourt worked as a Principal Software Engineer, establishing a track record of delivering infrastructure at scale: he designed microservices-based cloud platforms that improved deployment speed by 40% whilst reducing operational costs by 30%, co-invented a patented dynamic caching algorithm for distributed systems, and led open-source initiatives that garnered over 500 GitHub contributors.
This background—spanning mechanical systems, quantum computation, backend infrastructure, and data engineering—positions McCourt uniquely to diagnose what others in the AI space have overlooked: that energy is not merely a cost line item but a binding physical constraint on AI's future deployment model.
Extropic, which McCourt co-founded alongside Guillaume Verdon (formerly a quantum technology lead at Alphabet's X division), closed a $14.1 million Series Seed funding round in 2023, led by Kindred Ventures and backed by institutional investors including Buckley Ventures, HOF Capital, and OSS Capital. The company now stands at approximately 15 people distributed across integrated circuit design, statistical physics research, and machine learning—a lean team assembled to pursue what McCourt characterises as a paradigm shift in compute architecture.
The Quote in Strategic Context
McCourt's assertion that "10 terawatts in the next 20 years" is required for universal LLM deployment, coupled with his observation that this would demand filling Nevada with solar panels at a cost approaching global GDP, represents far more than rhetorical flourish. It is the product of methodical back-of-the-envelope engineering calculation.
His reasoning unfolds as follows:
From Today's Baseline to Mass Deployment:
A text-based assistant operating at today’s reasoning capability (approximating GPT-5-Pro performance) deployed to every person globally would consume roughly 20% of the current US electrical grid—approximately 100 gigawatts. The figure is not guesswork; McCourt derives it from first principles: transformer inference consumes roughly 2 × (parameters × tokens) floating-point operations; modern accelerators like Nvidia’s H100 operate at approximately 0.7 picojoules per FLOP; and population-scale deployment implies continuous, always-on inference.
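To make the arithmetic concrete, here is a minimal Python sketch of that back-of-the-envelope estimate. The population, model size, per-user token rate, and grid figure are illustrative assumptions chosen to land near the numbers quoted above; only the 2 × parameters × tokens rule and the ~0.7 pJ/FLOP figure come from the text.

```python
# Back-of-the-envelope estimate of population-scale LLM inference power.
# All constants marked "assumption" are illustrative, not published figures.

POPULATION = 8e9              # people served by an always-on assistant (assumption)
ACTIVE_PARAMS = 1e12          # parameters active per token (assumption)
TOKENS_PER_SEC_PER_USER = 10  # continuous average token rate per user (assumption)
JOULES_PER_FLOP = 0.7e-12     # ~0.7 pJ/FLOP, H100-class accelerator (from the text)
US_GRID_AVG_WATTS = 500e9     # ~500 GW average US generation (rough assumption)

flops_per_token = 2 * ACTIVE_PARAMS                     # ~2 FLOPs per parameter per token
total_flops_per_sec = POPULATION * TOKENS_PER_SEC_PER_USER * flops_per_token
power_watts = total_flops_per_sec * JOULES_PER_FLOP

print(f"Inference power: {power_watts / 1e9:.0f} GW")             # ~112 GW
print(f"Share of US grid: {power_watts / US_GRID_AVG_WATTS:.0%}")  # ~22%
```

Under these assumptions the estimate lands at roughly 100 gigawatts, about a fifth of average US generation, consistent with the figure above.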
Adding Modalities and Reasoning:
Upgrade that assistant to include video capability at just 1 frame per second (envisioning Meta-style augmented-reality glasses worn by billions), and the grid requirement multiplies by approximately 10×. Enhance the reasoning capability to match models working on the ARC AGI benchmark—problems of human-level reasoning difficulty—and the text assistant alone requires roughly 10× today’s grid: around 5 terawatts. Push further to expert-level systems capable of solving International Mathematical Olympiad problems, and the requirement reaches 100× the current grid.
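A short continuation of the same sketch converts those grid multiples into absolute power. Treating today’s grid as roughly 500 GW of average US generation is an assumption used only to reproduce the stated terawatt figures.

```python
# Convert the stated grid multiples into absolute power (illustrative).
US_GRID_AVG_WATTS = 500e9      # ~500 GW, rough stand-in for "today's grid" (assumption)

scenarios = {
    "Text assistant for everyone (baseline)": 0.2,       # ~20% of the grid
    "Add 1 FPS video for everyone":           0.2 * 10,  # ~10x the baseline
    "ARC-AGI-level text reasoning":           10,        # ~10x the grid
    "IMO-level expert systems":               100,       # ~100x the grid
}

for name, grid_multiple in scenarios.items():
    terawatts = grid_multiple * US_GRID_AVG_WATTS / 1e12
    print(f"{name:42s} ~{terawatts:g} TW")
```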
Economic Impossibility:
A single gigawatt data centre costs approximately $10 billion to construct. The infrastructure required for mass-market AI deployment rapidly enters the hundreds of trillions of dollars—approaching or exceeding global GDP. Nvidia's current manufacturing capacity would itself require a 100-fold increase to support even McCourt's more modest scenarios.
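As a rough check on the economics, the same style of arithmetic applies. The $10 billion-per-gigawatt figure comes from the text; global GDP of roughly $110 trillion is an outside approximation added here for scale.

```python
# Rough capex check for terawatt-scale data-centre buildout (illustrative).
COST_PER_GW_USD = 10e9       # ~$10B per gigawatt of data centre (from the text)
GLOBAL_GDP_USD = 110e12      # ~$110T global GDP (rough outside approximation)

for tw in (1, 5, 10, 50):
    capex = tw * 1000 * COST_PER_GW_USD      # terawatts -> gigawatts -> dollars
    print(f"{tw:>3} TW: ${capex / 1e12:,.0f}T capex, "
          f"{capex / GLOBAL_GDP_USD:.1f}x global GDP")
```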
Physical Reality Check:
Over the past 75 years, US grid capacity has grown remarkably consistently—a nearly linear expansion. Sam Altman's public commitment to building one gigawatt of data centre capacity per week alone would require 3–5× the historical rate of grid growth. Credible plans for mass-market AI acceleration push this requirement into the terawatt range over two decades—a rate of infrastructure expansion that is not merely economically daunting but potentially physically impossible given resource constraints, construction timelines, and raw materials availability.
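The growth-rate comparison can be sanity-checked the same way. The current-capacity and build-period figures below are rough assumptions used only to reproduce the order of magnitude of the claim.

```python
# Compare a 1 GW/week build pledge with the historical US grid growth rate.
US_CAPACITY_GW = 1300        # rough current US generating capacity (assumption)
BUILD_YEARS = 75             # period over which most of it was built (assumption)
PLEDGE_GW_PER_WEEK = 1       # "one gigawatt per week" (from the text)

historical_gw_per_year = US_CAPACITY_GW / BUILD_YEARS     # ~17 GW/yr
pledge_gw_per_year = PLEDGE_GW_PER_WEEK * 52              # 52 GW/yr

print(f"Historical average build rate: ~{historical_gw_per_year:.0f} GW/yr")
print(f"1 GW/week pledge: {pledge_gw_per_year} GW/yr "
      f"(~{pledge_gw_per_year / historical_gw_per_year:.0f}x the historical rate)")
```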
McCourt's conclusion: the energy path is not simply expensive; it is economically and physically untenable. The paradigm must change.
Intellectual Foundations: Leading Theorists in Energy-Efficient Computing and Probabilistic AI
Understanding McCourt's position requires engagement with the broader intellectual landscape that has shaped thinking about computing's physical limits and probabilistic approaches to machine learning.
Geoffrey Hinton—Pioneering Energy-Based Models and Probabilistic Foundations:
Few figures loom larger in the theoretical background to Extropic’s work than Geoffrey Hinton. Decades before the deep learning boom, Hinton developed foundational theory around Boltzmann machines and energy-based models (EBMs)—the conceptual framework that treats learning as the discovery and inference of complex probability distributions. His work posits that machine learning, at its essence, is about fitting a probability distribution to observed data and then sampling from it to generate new instances consistent with that distribution. Hinton’s recognition with the 2024 Nobel Prize in Physics for “foundational discoveries and inventions that enable machine learning with artificial neural networks” reflects the deep prescience of this probabilistic worldview. More than theoretical elegance, this framework points toward an alternative computational paradigm: rather than spending vast resources on deterministic matrix operations (the GPU model), a system optimised for efficient sampling from complex distributions would align computation with the statistical nature of intelligence itself.
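To illustrate the “sampling from a distribution” view of computation that this lineage points to, here is a minimal, self-contained Gibbs sampler for a toy Boltzmann-machine-style energy model. It is a pedagogical sketch of the general idea only, not Hinton’s training procedure and not a description of any Extropic hardware.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy energy-based model over binary spins s in {-1, +1}^n:
#   E(s) = -0.5 * s^T W s,  with  p(s) proportional to exp(-E(s))
n = 6
W = rng.normal(scale=0.5, size=(n, n))
W = (W + W.T) / 2            # symmetric couplings
np.fill_diagonal(W, 0.0)     # no self-coupling

def gibbs_sample(W, sweeps=5000):
    """Draw approximate samples from p(s) by resampling one spin at a time."""
    s = rng.choice([-1.0, 1.0], size=len(W))
    samples = []
    for _ in range(sweeps):
        for i in range(len(W)):
            field = W[i] @ s                            # local field on spin i
            p_up = 1.0 / (1.0 + np.exp(-2.0 * field))   # p(s_i = +1 | rest)
            s[i] = 1.0 if rng.random() < p_up else -1.0
        samples.append(s.copy())
    return np.array(samples)

samples = gibbs_sample(W)
print("Mean spin values:", np.round(samples[1000:].mean(axis=0), 2))
```

The point of the sketch is that the core operation is drawing samples whose statistics match a target distribution, rather than computing exact matrix products.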
Michael Frank—Physics of Reversible and Adiabatic Computing:
Michael Frank, a senior scientist now at Vaire (a near-zero-energy chip company), has spent decades at the intersection of physics and computing. His research programme, initiated at MIT in the 1990s and continued at the University of Florida, Florida State, and Sandia National Laboratories, focuses on reversible computing and adiabatic CMOS—techniques aimed at reducing the fundamental energy cost of information processing. Frank's work addresses a deep truth: in conventional digital logic, information erasure is thermodynamically irreversible and expensive, dissipating energy as heat. By contrast, reversible computing minimises such erasure, thereby approaching theoretical energy limits set by physics rather than by engineering convention. Whilst Frank's trajectory and Extropic's diverge in architectural detail, both share the conviction that energy efficiency must be rooted in physical first principles, not merely in engineering optimisation of existing paradigms.
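A worked number helps locate the gap Frank is pointing at. The Landauer limit for erasing one bit at room temperature is k_B·T·ln 2; comparing it with the ~0.7 pJ/FLOP figure quoted earlier shows how far practical hardware sits above the thermodynamic floor. The comparison is illustrative only, since a floating-point operation is not a single bit erasure.

```python
import math

K_BOLTZMANN = 1.380649e-23    # J/K
T_ROOM = 300.0                # kelvin

landauer_joules = K_BOLTZMANN * T_ROOM * math.log(2)   # ~2.9e-21 J per bit erased
gpu_joules_per_flop = 0.7e-12                          # ~0.7 pJ/FLOP (from the text)

print(f"Landauer limit at 300 K: {landauer_joules:.2e} J per bit")
print(f"GPU energy per FLOP:     {gpu_joules_per_flop:.1e} J")
print(f"Ratio: ~{gpu_joules_per_flop / landauer_joules:.0e}x above the floor")
```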
Yoshua Bengio and Chris Bishop—Probabilistic Learning Theory:
Leading researchers in deep generative modelling—including Bengio, Bishop, and others—have consistently advocated for probabilistic frameworks as foundational to machine learning. Their work on diffusion models, variational inference, and sampling-based approaches has legitimised the view that efficient inference is not about raw compute speed but about statistical appropriateness. This theoretical lineage underpins the algorithmic choices at Extropic: energy-based models and denoising thermodynamic models are not novel inventions but rather a return to first principles, informed by decades of probabilistic ML research.
Richard Feynman—Foundational Physics of Computing:
Though less directly cited in contemporary AI discourse, Feynman's 1982 lectures on the physics of computation remain conceptually foundational. Feynman observed that computation's energy cost is ultimately governed by physical law, not engineering ingenuity alone. His observations on reversibility and the thermodynamic cost of irreversible operations informed the entire reversible-computing movement and, by extension, contemporary efforts to align computation with physics rather than against it.
Contemporary Systems Thinkers (Sam Altman, Jensen Huang):
Counterintuitively, McCourt's critique is sharpened by engagement with the visionary statements of industry leaders who have perhaps underestimated energy constraints. Altman's commitment to building one gigawatt of data centre capacity per week, and Huang's roadmaps for continued GPU scaling, have inadvertently validated McCourt's concern: even the most optimistic industrial plans require infrastructure expansion at rates that collide with physical reality. McCourt uses their own projections as evidence for the necessity of paradigm change.
The Broader Strategic Narrative
McCourt's remarks must be understood within a convergence of intellectual and practical pressures:
The Efficiency Plateau:
Digital logic efficiency, measured as energy per operation, has stalled. Transistor capacitance plateaued around the 10-nanometre node; operating voltage is thermodynamically bounded near 300 millivolts. Architectural optimisations (quantisation, sparsity, tensor cores) improve throughput but do not overcome these physical barriers. The era of "free lunch" efficiency gains from Moore's Law miniaturisation has ended.
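The first-order model behind this claim is the switching energy of a CMOS gate, which scales with node capacitance and the square of supply voltage. With capacitance roughly flat since the ~10 nm era and voltage bounded from below, energy per digital operation has little headroom left to fall; the formula below is the standard approximation, stated here for orientation rather than as a precise device model.

```latex
E_{\text{switch}} \;\approx\; \tfrac{1}{2}\, C\, V_{dd}^{2}
\qquad \text{(dynamic energy per transistor switching event)}
```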
Model Complexity Trajectory:
Whilst small models have improved at fixed benchmarks, frontier AI systems—those solving novel, difficult problems—continue to demand exponentially more compute. AlphaGo required ~1 exaFLOP per game; AlphaCode required ~100 exaFLOPs per coding problem; the system solving International Mathematical Olympiad problems required ~100,000 exaFLOPs. Model miniaturisation is not offsetting capability ambitions.
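Using the same ~0.7 pJ/FLOP figure as earlier, those per-problem compute budgets translate into energy as follows. The conversion is a rough illustration; real systems add memory, networking, and cooling overheads on top.

```python
# Convert per-problem compute budgets into energy at ~0.7 pJ/FLOP (illustrative).
JOULES_PER_FLOP = 0.7e-12
EXA = 1e18

problems = {
    "AlphaGo (per game)":       1 * EXA,
    "AlphaCode (per problem)":  100 * EXA,
    "IMO-level (per problem)":  100_000 * EXA,
}

for name, flops in problems.items():
    joules = flops * JOULES_PER_FLOP
    kwh = joules / 3.6e6
    print(f"{name:26s} {joules:9.2e} J  (~{kwh:,.1f} kWh)")
```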
Market Economics:
The AI market has attracted trillions in capital precisely because the economic potential is genuine and vast. Yet this same vastness creates the energy paradox: truly universal AI deployment would consume resources incompatible with global infrastructure and economics. The contradiction is not marginal; it is structural.
Extropic's Alternative:
Extropic proposes to escape this local minimum through radical architectural redesign. Thermodynamic Sampling Units (TSUs)—circuits architected as arrays of probabilistic sampling cells rather than multiply-accumulate units—would natively perform the statistical operations that diffusion and generative AI models require. Early simulations suggest energy efficiency improvements of 10,000× on simple benchmarks compared to GPU-based approaches. Hybrid algorithms combining TSUs with compact neural networks on conventional hardware could deliver intermediate gains whilst establishing a pathway toward a fundamentally different compute paradigm.
Why This Matters Now
The quote's urgency reflects a dawning recognition across technical and policy circles that energy is not a peripheral constraint but the central bottleneck determining AI's future trajectory. The choice, as McCourt frames it, is stark: either invest in a radically new architecture, or accept that mass-market AI remains perpetually out of reach—a luxury good confined to the wealthy and powerful rather than a technology accessible to humanity.
This is not mere speculation or provocation. It is engineering analysis grounded in physics, economics, and historical precedent, articulated by someone with the technical depth to understand both the problem and the extraordinary difficulty of solving it.

“You have to be very gentle around people. If you're in a leadership position, people hear your words amplified. You have to be very careful what you say and how you say it. You always have to listen to what other people have to say. I genuinely want to know what everybody else thinks.” - Stephen Schwarzman - Blackstone Founder
Stephen A. Schwarzman’s quote on gentle, thoughtful leadership encapsulates decades spent at the helm of Blackstone—the world’s largest alternative asset manager—where he forged a distinctive culture and process rooted in careful listening, respectful debate, humility, and operational excellence. The story behind this philosophy is marked by formative setbacks, institutional learning, and the broader evolution of modern leadership theory.
Stephen Schwarzman: Background and Significance
Stephen A. Schwarzman, born in 1947 in Philadelphia, rose to prominence after co-founding Blackstone in 1985 with Pete Peterson. Initially, private markets comprised a tiny fraction of institutional portfolios; under his stewardship, allocations in private assets have grown exponentially, fundamentally reshaping global investing. Schwarzman is renowned for his relentless pursuit of operational improvement, risk discipline, and market timing—his mantra, “Don’t lose money,” is enforced by multi-layered approval and rigorous debate.
Schwarzman’s experience as a leader is deeply shaped by early missteps. The Edgecomb Steel investment loss was pivotal: it catalyzed Blackstone’s institutionalized investment committees, de-risking debates, and a culture where anyone may challenge ideas so long as discussion remains fact-based and impersonal. This setback taught him accountability, humility, and the value of systemic learning—his response was not to retreat from risk, but to build a repeatable, challenge-driven process. Crucially, he narrates his own growth from a self-described “C or D executive” to a leader who values gentleness, clarity, humor, and private critique—understanding that words uttered from the top echo powerfully and can shape (or harm) culture.
Beyond technical accomplishments, Schwarzman’s legacy is one of building enduring institutions through codified values: integrity, decency, and hard work. His leadership maxim—“be gentle, clear, and high standard; always listen”—is a template for strong cultures, high performance, and sustainable growth.
The Context of the Quote
The quoted passage emerges from Schwarzman’s reflections on leadership lessons acquired over four decades. Known for candid self-assessment, he openly admits to early struggles with management style but evolved to prioritize humility, care, and active listening. At Blackstone, this meant never criticizing staff in public and always seeking divergent views to inform decisions. He emphasizes that a leader’s words carry amplified weight among teams and stakeholders; thus, intentional communication and genuine listening are essential for nurturing an environment of trust, engagement, and intelligent risk-taking.
This context is inseparable from Blackstone’s broader organizational playbook: institutionalized judgment, structured challenge, and brand-centered culture—all designed to accumulate wisdom, avoid repeating mistakes, and compound long-term value. Schwarzman’s leadership pathway is a case study in the power of personal evolution, open dialogue, and codified norms that outlast the founder himself.
Leading Theorists and Historical Foundations
Schwarzman’s leadership philosophy is broadly aligned with a lineage of thinkers who have shaped modern approaches to management, organizational behavior, and culture:
- Peter Drucker: Often called the “father of modern management,” Drucker stressed that leadership is defined by results and relationships, not positional power. His work emphasized listening, empowering employees, and the ethical responsibility of those at the top.
- Warren Bennis: Bennis advanced concepts of authentic leadership, self-awareness, and transparency. He argued that leaders should be vulnerable, model humility, and act as facilitators of collective intelligence rather than commanders.
- Jim Collins: In “Good to Great,” Collins describes “Level 5 Leaders” as those who combine professional will with personal humility. Collins underscores that amplifying diverse viewpoints and creating cultures of disciplined debate lead to enduring success.
- Edgar Schein: Schein’s studies of organizational culture reveal that leaders not only set behavioral norms through their actions and words but also shape “cultural DNA” by embedding values of learning, dialogue, and respect.
- Amy Edmondson: Her pioneering work in psychological safety demonstrates that gentle leadership—rooted in listening and respect—fosters environments where people can challenge ideas, raise concerns, and innovate without fear.
Each of these theorists contributed to the understanding that gentle, attentive leadership is not weakness, but a source of institutional strength, resilience, and competitive advantage. Their concepts mirror the systems at Blackstone: open challenge, private correction, and leadership by example.
Schwarzman’s Distinction and Industry Impact
Schwarzman’s practice stands out in several ways. He institutionalized lessons from mistakes to create robust decision processes and a genuine challenge culture. His insistence on brand-building as strategy—where every decision, hire, and visual artifact reinforces trust—reflects an awareness of the symbolic weight of leadership. Under his guidance, Blackstone’s transformation from a two-person startup into a global giant offers a living illustration of how values, process, and leadership style drive superior, sustainable outcomes.
In summary, the quoted insight is not platitude, but hard-won experience from a legendary founder whose methods echo the best modern thinking on leadership, learning, and organizational resilience. The theorists tracing this journey—from Drucker to Edmondson—affirm that the path to “enduring greatness” lies in gentle authority, careful listening, institutionalized memory, and the humility to learn from every setback.

“I always felt that somebody was only capable of one super effort to create something that can really be consequential. There are so many impediments to being successful. If you're on the field, you're there to win, and to win requires an enormous amount of practice - pushing yourself really to the breaking point.” - Stephen Schwarzman - Blackstone Founder
Stephen A. Schwarzman is a defining figure in global finance and alternative investments. He is Chairman, CEO, and Co-Founder of Blackstone, the world’s largest alternative investment firm, overseeing over $1.2 trillion in assets.
Backstory and Context of the Quote
Stephen Schwarzman’s perspective on effort, practice, and success is rooted in over four decades building Blackstone from a two-person start-up to an institution that has shaped capital markets worldwide. The referenced quote captures his philosophy: that achieving anything truly consequential demands a singular, maximal effort—a philosophy he practised as Blackstone’s founder and architect.
Schwarzman began his career in mergers and acquisitions at Lehman Brothers in the 1970s, where he met Peter G. Peterson. Their complementary backgrounds—a combination of strategic vision and operational drive—empowered them to establish Blackstone in 1985, initially with just $400,000 in seed capital and a big ambition to build a differentiated investment firm. The mid-1980s financial environment, marked by booming M&A activity, provided fertile ground for innovation in buyouts and private markets.
From the outset, Schwarzman instilled a culture of rigorous preparation and discipline. A landmark early setback—the unsuccessful investment in Edgecomb Steel—became a pivotal learning event. It led Schwarzman to institutionalise robust investment committees, open and adversarial (yet respectful) debate, and a relentless process of due diligence. This learning loop, anchored in the imperative not to lose money and in a fact-based challenge culture, shaped Blackstone’s internal systems and risk culture for decades to come.
His attitude to practice, perseverance, and operating at the limit is not merely rhetorical—it is Blackstone’s operational model: selecting complex assets, professionalising management, and adding value through operational transformation before timing exits for maximum advantage. The company’s strict approval layers, multi-stage risk screening, and exacting standards demonstrate Schwarzman’s belief that only by pushing to the limits of endurance—and addressing every potential weakness—can lasting value be created.
In his own words, Schwarzman attributes success not to innate brilliance but to grit, repetition, and the ability to learn from failure. This is underscored by his leadership style, which evolved towards being gentle, clear, and principled, setting high standards while building an enduring culture based on integrity, decency, and open debate.
About Stephen A. Schwarzman
- Born in 1947 in Philadelphia, Schwarzman studied at Yale University (where he was a member of Skull and Bones) and earned an MBA from Harvard Business School.
- Blackstone, which he co-founded in 1985, began as an M&A boutique and now operates across private equity, real estate, credit, hedge funds, infrastructure, and life sciences, making it a recognised leader in global investment management.
- Under Schwarzman’s leadership, Blackstone institutionalised patient, active ownership—acquiring, improving, and timing the exit from portfolio companies for optimal results while actively shaping industry standards in governance and risk management.
- He is also known for his philanthropy, having signed The Giving Pledge and contributed significantly to education, arts, and culture.
- His autobiography, What It Takes: Lessons in the Pursuit of Excellence, distils the philosophy underpinning his business and personal success.
- Schwarzman’s role as a public intellectual and advisor has seen him listed among the “World’s Most Powerful People” and “Time 100 Most Influential People”.
Leading Theorists and Intellectual Currents Related to the Quote
The themes embodied in Schwarzman’s philosophy—singular effort, practice to breaking point, coping with setbacks, and building institutional culture—draw on and intersect with several influential theorists and schools of thought in management and the psychology of high achievement:
- Anders Ericsson (Deliberate Practice): Ericsson’s research underscores that deliberate practice—extended, focused effort with ongoing feedback—is critical to acquiring expert performance in any field. Schwarzman’s stress on “enormous amount of practice” parallels Ericsson’s findings that natural talent is far less important than methodical, sustained effort.
- Angela Duckworth (Grit): Duckworth’s work on “grit” emphasises passion and perseverance for long-term goals as key predictors of success. Her research supports Schwarzman’s belief that breaking through obstacles—and continuing after setbacks—is fundamental for consequential achievement.
- Carol Dweck (Growth Mindset): Dweck demonstrated that embracing a “growth mindset”—seeing failures as opportunities to learn rather than as endpoints—fosters resilience and continuous improvement. Schwarzman’s approach to institutionalising learning from failure at Blackstone reflects this theoretical foundation.
- Peter Drucker (Management by Objectives and Institutional Culture): Drucker highlighted the importance of clear organisational goals, continuous learning, and leadership by values for building enduring institutions. Schwarzman’s insistence on codifying culture, open debate, and aligning every decision with the brand reflects Drucker’s emphasis on the importance of system and culture in organisational performance.
- Jim Collins (Built to Last, Good to Great): Collins’ research into successful companies found a common thread of fanatical discipline, a culture of humility and rigorous debate, all driven by a sense of purpose. These elements are present throughout Blackstone’s governance model and leadership ethos as steered by Schwarzman.
- Michael Porter (Competitive Strategy): Porter’s concept of sustained competitive advantage through unique positioning and strategic differentiation is echoed in Blackstone’s approach—actively improving operations rather than simply relying on market exposure, and committing to ‘winning’ through operational and structural edge.
Summary
Schwarzman’s quote is not only a personal reflection but also a distillation of enduring principles in high achievement and institutional leadership. It is the lived experience of building Blackstone—a case study in dedication, resilience, and the institutionalisation of excellence. His story, and the theoretical underpinnings echoed in his approach, provide a template for excellence and consequence in any field marked by complexity, competition, and the need for sustained, high-conviction effort.

“If you upgrade that assistant to see video at 1 FPS - think Meta’s glasses... you'd need to roughly 10× the grid to accommodate that for everyone. If you upgrade the text assistant to reason at the level of models working on the ARC AGI benchmark... even just the text assistant would require around a 10× of today’s grid.” - Trevor McCourt - Extropic CTO
The quoted remark by Trevor McCourt, CTO of Extropic, underscores a crucial bottleneck in artificial intelligence scaling: energy demand is outpacing progress in compute efficiency, threatening the viability of universal, always-on AI. The quote translates hard technical extrapolation into plain language: if every person had a vision-capable assistant running at just 1 video frame per second, or if text models reasoned at the level demanded by the ARC AGI benchmark, global energy infrastructure would need to multiply several times over, into the terawatt range—figures that quickly become economically and physically absurd.
Backstory and Context of the Quote & Trevor McCourt
Trevor McCourt is the co-founder and Chief Technology Officer of Extropic, a pioneering company targeting the energy barrier limiting mass-market AI deployment. With multidisciplinary roots—a blend of mechanical engineering and quantum programming, honed at the University of Waterloo and Massachusetts Institute of Technology—McCourt contributed to projects at Google before moving to the hardware-software frontier. His leadership at Extropic is defined by a willingness to challenge orthodoxy and champion a first-principles, physics-driven approach to AI compute architecture.
The quote arises from a keynote on how present-day large language models and diffusion AI models are fundamentally energy-bound. McCourt’s analysis is rooted in practical engineering, economic realism, and deep technical awareness: the computational demands of state-of-the-art assistants vastly outstrip what today’s grid can provide if deployed at population scale. This is not merely an engineering or machine learning problem, but a macroeconomic and geopolitical dilemma.
Extropic proposes to address this impasse with Thermodynamic Sampling Units (TSUs)—a new silicon compute primitive designed to natively perform probabilistic inference, consuming orders of magnitude less power than GPU-based digital logic. Here, McCourt follows the direction set by energy-based probabilistic models and advances it both in hardware and algorithm.
McCourt’s career has been defined by innovation at the technical edge: microservices in cloud environments, patented improvements to dynamic caching in distributed systems, and research in scalable backend infrastructure. This breadth, from academic research to commercial deployment, enables his holistic critique of the GPU-centred AI paradigm, as well as his leadership at Extropic’s deep technology startup.
Leading Theorists & Influencers in the Subject
Several waves of theory and practice converge in McCourt’s and Extropic’s work:
1. Geoffrey Hinton (Energy-Based and Probabilistic Models):
Long before deep learning’s mainstream embrace, Hinton’s foundational work on Boltzmann machines and energy-based models explored the idea of learning and inference as sampling from complex probability distributions. These early probabilistic paradigms anticipated both the difficulties of scaling and the algorithmic challenges that underlie today’s generative models. Hinton’s recognition—including the 2024 Nobel Prize in Physics for foundational work on neural networks, Boltzmann machines among them—cements his stature as a theorist whose footprints underpin Extropic’s approach.
2. Michael Frank (Reversible Computing):
Frank is a prominent physicist in reversible and adiabatic computing, having led major advances at MIT, Sandia National Laboratories, and others. His research investigates how the physics of computation can reduce the fundamental energy cost—directly relevant to Extropic’s mission. Frank’s focus on low-energy information processing provides a conceptual environment for approaches like TSUs to flourish.
3. Chris Bishop & Yoshua Bengio (Probabilistic Machine Learning):
Leaders like Bishop and Bengio have shaped the field’s probabilistic foundations, advocating both for deep generative models and for the practical co-design of hardware and algorithms. Their research has stressed the need to reconcile statistical efficiency with computational tractability—a tension at the core of Extropic’s narrative.
4. Alan Turing & John von Neumann (Foundations of Computing):
While not direct contributors to modern machine learning, the legacies of Turing and von Neumann persist in every conversation about alternative architectures and the physical limits of computation. The post-von Neumann and post-Turing trajectory, with a return to analogue, stochastic, or sampling-based circuitry, is directly echoed in Extropic’s work.
5. Recent Industry Visionaries (e.g., Sam Altman, Jensen Huang):
Contemporary leaders in the AI infrastructure space—such as Altman of OpenAI and Huang of Nvidia—have articulated the scale required for AGI and the daunting reality of terawatt-scale compute. Their business strategies rely on the assumption that improved digital hardware will be sufficient, a view McCourt contests with data and physical models.
Strategic & Scientific Context for the Field
- Core problem: The energy that powers AI is reaching non-linear scaling—mass-market AI could consume a significant fraction or even multiples of the entire global grid if naively scaled with today’s architectures.
- Physics bottlenecks: Improvements in digital logic are limited by physical constants: capacitance, voltage, and the energy required for irreversible computation. Digital logic has plateaued at the 10nm node.
- Algorithmic evolution: Traditional deep learning is rooted in deterministic matrix computations, but the true statistical nature of intelligence calls for sampling from complex distributions—as foregrounded in Hinton’s work and now implemented in Extropic’s TSUs.
- Paradigm shift: McCourt and contemporaries argue for a transition to native hardware–software co-design where the core computational primitive is no longer the multiply–accumulate (MAC) operation, but energy-efficient probabilistic sampling.
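As a purely conceptual illustration of the distinction between those two primitives, the sketch below contrasts a deterministic multiply-accumulate with a probabilistic “sampling cell” whose random outputs encode the same quantity in their statistics. It borrows textbook Gibbs/Ising-style sampling and makes no claim about how Extropic’s TSU circuits are actually implemented.

```python
import numpy as np

rng = np.random.default_rng(42)

def mac(x, w):
    """Deterministic multiply-accumulate: the core primitive of GPU-era AI."""
    return float(np.dot(x, w))

def sampling_cell(x, w, beta=1.0):
    """Probabilistic primitive: emit a +/-1 sample biased by the local field."""
    field = np.dot(x, w)
    p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * field))
    return 1.0 if rng.random() < p_up else -1.0

x = rng.choice([-1.0, 1.0], size=8)
w = rng.normal(size=8)

print("Exact MAC output:", round(mac(x, w), 3))
draws = [sampling_cell(x, w) for _ in range(20_000)]
# The sample mean converges to tanh(beta * field), a smooth function of the MAC result.
print("Sampling-cell mean:", round(float(np.mean(draws)), 3),
      "  tanh(field):", round(float(np.tanh(mac(x, w))), 3))
```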
Summary Insight
Trevor McCourt anchors his cautionary prognosis for AI’s future on rigorous cross-disciplinary insights—from physical hardware limits to probabilistic learning theory. By combining his own engineering prowess with the legacy of foundational theorists and contemporary thinkers, McCourt’s perspective is not simply one of warning but also one of opportunity: a new generation of probabilistic, thermodynamically-inspired computers could rewrite the energy economics of artificial intelligence, making “AI for everyone” plausible—without grid-scale insanity.

“The idea that chips and ontology is what you want to short is batsh*t crazy.” - Alex Karp - Palantir CEO
Alex Karp, co-founder and CEO of Palantir Technologies, delivered the now widely-circulated statement, “The idea that chips and ontology is what you want to short is batsh*t crazy,” in response to famed investor Michael Burry’s high-profile short positions against both Palantir and Nvidia. This sharp retort came at a time when Palantir, an enterprise software and artificial intelligence (AI) powerhouse, had just reported record earnings and was under intense media scrutiny for its meteoric stock rise and valuation.
Context of the Quote
The remark was made in early November 2025 during a CNBC interview, following public disclosures that Michael Burry—of “The Big Short” fame—had taken massive short positions in Palantir and Nvidia, two companies at the heart of the AI revolution. Burry’s move, reminiscent of his contrarian bets during the 2008 financial crisis, was interpreted by the market as both a challenge to the soaring “AI trade” and a critique of the underlying economics fueling the sector’s explosive growth.
Karp’s frustration was palpable: not only was Palantir producing what he described as "anomalous" financial results—outpacing virtually all competitors in growth, cash flow, and customer retention—but it was also emerging as the backbone of data-driven operations across government and industry. For Karp, Burry’s short bet went beyond traditional market scepticism; it targeted firms, products (“chips” and “ontology”—the foundational hardware for AI and the architecture for structuring knowledge), and business models proven to be both technically indispensable and commercially robust. Karp’s rejection of the “short chips and ontology” thesis underscores his belief in the enduring centrality of the technologies underpinning the modern AI stack.
Backstory and Profile: Alex Karp
Alex Karp stands out as one of Silicon Valley’s true iconoclasts:
- Background and Education: Born in New York City in 1967, Karp holds a philosophy degree from Haverford College, a JD from Stanford, and a PhD in social theory from Goethe University Frankfurt, where his doctoral work engaged closely with the ideas of the influential philosopher Jürgen Habermas. This rare academic pedigree—blending law, philosophy, and critical theory—deeply informs both his contrarian mindset and his focus on the societal impact of technology.
- Professional Arc: Before founding Palantir in 2004 with Peter Thiel and others, Karp had forged a career in finance, running the London-based Caedmon Group. At Palantir, he crafted a unique culture and business model, combining a wellness-oriented, sometimes spiritual corporate environment with the hard-nosed delivery of mission-critical systems for Western security, defence, and industry.
- Leadership and Philosophy: Karp is known for his outspoken, unconventional leadership. Unafraid to challenge both Silicon Valley’s libertarian ethos and what he views as the groupthink of academic and financial “expert” classes, he publicly identifies as progressive—yet separates himself from establishment politics, remaining both a supporter of the US military and a critic of mainstream left and right ideologies. His style is at once brash and philosophical, combining deep skepticism of market orthodoxy with a strong belief in the capacity of technology to deliver real-world, not just notional, value.
- Palantir’s Rise: Under Karp, Palantir grew from a niche contractor to one of the world’s most important data analytics and AI companies. Palantir’s products are deeply embedded in national security, commercial analytics, and industrial operations, making the company essential infrastructure in the rapidly evolving AI economy.
Theoretical Background: ‘Chips’ and ‘Ontology’
Karp’s phrase pairs two of the foundational concepts in modern AI and data-driven enterprise:
- Chips: Here, “chips” refers specifically to advanced semiconductors (such as Nvidia’s GPUs) that provide the computational horsepower essential for training and deploying cutting-edge machine learning models. The AI revolution is inseparable from advances in chip design, leading to historic demand for high-performance hardware.
- Ontology: In computer and information science, “ontology” describes the formal structuring and categorising of knowledge—making data comprehensible, searchable, and actionable by algorithms. Robust ontologies enable organisations to unify disparate data sources, automate analytical reasoning, and achieve the “second order” efficiencies of AI at scale.
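For readers new to the term, a minimal sketch of what an ontology looks like in code appears below: typed classes, instances, and relations over which simple reasoning can run. It is a generic toy for illustration, not Palantir’s Ontology product and not a standard such as OWL.

```python
# A toy ontology: classes, instances, and typed relations (illustrative only).
ontology = {
    "classes": {
        "Company":  {"subclass_of": "Organisation"},
        "Supplier": {"subclass_of": "Company"},
    },
    "instances": {
        "acme_corp":  {"type": "Supplier", "country": "DE"},
        "globex_ltd": {"type": "Company",  "country": "US"},
    },
    "relations": [
        ("acme_corp", "supplies", "globex_ltd"),
    ],
}

def is_a(instance, cls):
    """Walk the subclass chain so queries work at any level of abstraction."""
    current = ontology["instances"][instance]["type"]
    while current is not None:
        if current == cls:
            return True
        current = ontology["classes"].get(current, {}).get("subclass_of")
    return False

print(is_a("acme_corp", "Company"))   # True: a Supplier is a Company
print([r for r in ontology["relations"] if r[1] == "supplies"])
```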
Leading theorists in the domain of ontology and AI include:
- John McCarthy: A founder of artificial intelligence, McCarthy’s foundational work on formal logic and semantics laid groundwork for modern ontological structures in AI.
- Tim Berners-Lee: Creator of the World Wide Web, Berners-Lee championed the Semantic Web and the structuring of knowledge via ontologies—making data machine-readable, a capability widely regarded as indispensable for AI’s next leap.
- Thomas Gruber: Known for his widely cited definition of an ontology in AI as “an explicit specification of a conceptualisation,” Gruber’s research shaped the field’s approach to standardising knowledge representations for complex applications.
In the chip space, pioneering figures include:
- Jensen Huang: As CEO and co-founder of Nvidia, Huang drove the company’s transformation from graphics to AI acceleration, cementing the centrality of chips as the hardware substrate for everything from generative AI to advanced analytics.
- Gordon Moore and Robert Noyce: Their early explorations in semiconductor fabrication set the stage for the exponential hardware progress that enabled the modern AI era.
Insightful Context for the Modern Market Debate
The “chips and ontology” remark reflects a deep divide in contemporary technology investing:
- On one side, sceptics like Burry see signs of speculative excess, reminiscent of prior bubbles, and bet against companies with high valuations—even when those companies dominate core technologies fundamental to AI.
- On the other, leaders like Karp argue that while the broad “AI trade” risks pockets of overvaluation, the engine—the computational hardware (chips) and the data-structuring logic (ontology)—is not just durable but irreplaceable in the digital economy.
With Palantir and Nvidia at the centre of the current AI-driven transformation, Karp’s comment captures not just a rebuttal to market short-termism, but a broader endorsement of the foundational technologies that define the coming decade. The value of “chips and ontology” is, in Karp’s eyes, anchored not in market narrative but in empirical results and business necessity—a perspective rooted in a unique synthesis of philosophy, technology, and radical pragmatism.

“Generally speaking people hate change. It’s human nature. But change is super important. It’s inevitable. In fact, on my desk in my office I have a little plaque that says 'Change or die.' As a business leader, one of the perspectives you have to have is that you’ve got to constantly evolve and change.” - David Solomon - Goldman Sachs CEO
The quoted insight comes from David M. Solomon, Chief Executive Officer and Chairman of Goldman Sachs, a role he has held since 2018. It was delivered during a high-profile interview at The Economic Club of Washington, D.C., 30 October 2025, as Solomon reflected on the necessity of adaptability both personally and as a leader within a globally significant financial institution.
“We have very smart people, and we can put these [AI] tools in their hands to make them more productive... By using AI to reimagine processes, we can create operating efficiencies that give us a scaled opportunity to reinvest in growth.” - David Solomon - Goldman Sachs CEO
David Solomon, Chairman and CEO of Goldman Sachs, delivered the quoted remarks during an interview at the HKMA Global Financial Leaders’ Investment Summit on 4 November 2025, articulating Goldman’s strategic approach to integrating artificial intelligence across its global franchise. His comments reflect both personal experience and institutional direction: leveraging new technology to drive productivity, reimagine workflows, and reinvest operational gains in sustainable growth, rather than pursuing simplistic headcount reductions or technological novelty for its own sake.
Backstory and Context of the Quote
David Solomon’s statement arises from Goldman Sachs' current transformation—“Goldman Sachs 3.0”—centred on AI-driven process re-engineering. Rather than employing AI simply as a cost-cutting device, Solomon underscores its strategic role as an enabler for “very smart people” to magnify their productivity and impact. This perspective draws on his forty-year career in finance, where successive waves of technological disruption (from Lotus 1-2-3 spreadsheets to cloud computing) have consistently shifted how talent is leveraged, but have not diminished its central value.
The immediate business context is one of intense change: regulatory uncertainty in cross-border transactions, rebounding capital flows into China post-geopolitical tension, and a high backlog of M&A activity, particularly for large-cap US transactions. In this environment, efficiency gains from AI allow frontline teams to refocus on advisory, origination, and growth while adjusting operational models at a rapid pace. Solomon’s leadership style—pragmatic, unsentimental, and data-driven—favours process optimisation, open collaboration, and the breakdown of legacy silos.
About David Solomon
Background:
- Born in Hartsdale, New York, in 1962; educated at Hamilton College with a BA in political science, then entered banking.
- Career progression: Held senior roles at Irving Trust, Drexel Burnham, Bear Stearns; joined Goldman Sachs in 1999 as partner, eventually leading the Financing Group and serving as co-head of the Investment Banking Division for a decade.
- Appointed President and COO in 2017, then CEO in October 2018 and Chairman in January 2019, succeeding Lloyd Blankfein.
- Brought a reputation for transformative leadership, advocating modernisation, flattening hierarchies, and integrating technology across every aspect of the firm’s operations.
Leadership and Culture:
- Solomon is credited with pushing through “One Goldman Sachs,” breaking down internal silos and incentivising cross-disciplinary collaboration.
- He has modernised core HR and management practices: implemented real-time performance reviews, loosened dress codes, and raised compensation for programmers.
- Personal interests—such as his sideline as DJ D-Sol—underscore his willingness to defy convention and challenge the insularity of Wall Street leadership.
Institutional Impact:
- Under his stewardship, Goldman has accelerated its pivot to technology—automating trading operations, consolidating platforms, and committing substantial resources to digital transformation.
- Notably, the current “GS 3.0” agenda focuses on automating six major workflows to direct freed capacity into growth, consistent with a multi-decade productivity trend.
Leading Theorists and Intellectual Lineage of AI-Driven Productivity in Business
Solomon’s vision is shaped and echoed by several foundational theorists in economics, management science, and artificial intelligence:
1. Clayton Christensen
- Theory: Disruptive Innovation—frames how technological change transforms industries not through substitution but by enabling new business models and process efficiencies.
- Relevance: Goldman Sachs’ approach to using AI to reimagine workflows and create new capabilities closely mirrors Christensen’s insights on sustaining versus disruptive innovation.
2. Erik Brynjolfsson & Andrew McAfee
- Theory: Race Against the Machine, The Second Machine Age—chronicled how digital automation augments human productivity and reconfigures the labour market, not just replacing jobs but reshaping roles and enhancing output.
- Relevance: Solomon’s argument for enabling smart people with better tools directly draws on Brynjolfsson’s proposition that the best organisational outcomes occur when firms successfully combine human and machine intelligence.
3. Michael Porter
- Theory: Competitive Advantage—emphasised how operational efficiency and information advantage underpin sustained industry leadership.
- Relevance: Porter’s ideas connect to Goldman’s agenda by showing that AI integration is not just about cost, but about improving information processing, strategic agility, and client service.
4. Herbert Simon
- Theory: Bounded Rationality and Decision Support Systems—pioneered the concept that decision-making can be dramatically improved by systems that extend the cognitive capabilities of professionals.
- Relevance: Solomon’s claim that AI puts better tools in the hands of talented staff traces its lineage to Simon’s vision of computers as skilled assistants, vital to complex modern organisations.
5. Geoffrey Hinton, Yann LeCun, Yoshua Bengio
- Theory: Deep Learning—established the contemporary AI revolution underpinning business process automation, language models, and data analysis at enterprise scale.
- Relevance: Without the breakthroughs made by these theorists, AI’s current generation—capable of augmenting financial analysis, risk modelling, and operational management—could not be applied as Solomon describes.
Synthesis and Strategic Implications
Solomon’s quote epitomises the intersection of pragmatic executive leadership and theoretical insight. His advocacy for AI-integrated productivity reinforces a management consensus: sustainable competitive advantage hinges not just on technology, but on empowering skilled individuals to unlock new modes of value creation. This approach is echoed by leading researchers who situate automation as a catalyst for role evolution, scalable efficiency, and the ability to redeploy resources into higher-value growth opportunities.
Goldman Sachs’ specific AI play is therefore neither a defensive move against headcount nor a speculative technological bet, but a calculated strategy rooted in both practical business history and contemporary academic theory—a paradigm for how large organisations can adapt, thrive, and lead in the face of continual disruption.

“At scale, nothing is a commodity. We have to have our cost structure, supply-chain efficiency, and software efficiencies continue to compound to ensure margins. Scale - and one of the things I love about the OpenAI partnership - is it’s gotten us to scale. This is a scale game.” - Satya Nadella - Microsoft CEO
Satya Nadella has been at the helm of Microsoft since 2014, overseeing its transformation into one of the world’s most valuable technology companies. Born in Hyderabad, India, and educated in electrical engineering and computer science, Nadella joined Microsoft in 1992, quickly rising through the ranks in technical and business leadership roles. Prior to becoming CEO, he was best known for driving the rapid growth of Microsoft Azure, the company’s cloud infrastructure platform—a business now central to Microsoft’s global strategy.
Nadella’s leadership style is marked by systemic change—he has shifted Microsoft away from legacy, siloed software businesses and repositioned it as a cloud-first, AI-driven, and highly collaborative tech company. He is recognised for his ability to anticipate secular shifts—most notably, the move to hyperscale cloud computing and, more recently, the integration of advanced AI into core products such as GitHub Copilot and Microsoft 365 Copilot. His background—combining deep technical expertise with rigorous business training (MBA, University of Chicago)—enables him to bridge both the strategic and operational dimensions of global technology.
This quote was delivered in the context of Nadella’s public discussion on the scale economics of AI, hyperscale cloud, and the transformative partnership between Microsoft and OpenAI (the company behind ChatGPT, Sora, and the GPT series of frontier models) on the BG2 podcast on 1 November 2025. In this conversation, Nadella outlines why, at the extreme end of global tech infrastructure, nothing remains a “commodity”: system costs, supply chain and manufacturing agility, and relentless software optimisation all become decisive sources of competitive advantage. He argues that scale—meaning not just size, but the compounding organisational learning and cost improvement unlocked by operating at frontier levels—determines who captures sustainable margins and market leadership.
The OpenAI partnership is, from Nadella’s perspective, a practical illustration of this thesis. By integrating OpenAI’s frontier models deeply (and at exclusive scale) within Azure, Microsoft has driven exponential increases in compute utilisation, data flows, and the learning rate of its software infrastructure. This allowed Microsoft to amortise fixed investments, rapidly reduce unit costs, and create a loop of innovation not accessible to smaller or less integrated competitors. In Nadella’s framing, scale is not a static achievement, but a perpetual game—one where the winners are those who compound advantages across the entire stack: from chip supply chains through to application software and business model design.
Theoretical Foundations and Key Thinkers
The quote’s themes intersect with multiple domains: economics of platforms, organisational learning, network effects, and innovation theory. Key theoretical underpinnings and thinkers include:
Scale Economics and Competitive Advantage
- Alfred Chandler (1918–2007): Chandler’s work on the “visible hand” and the scale and scope of modern industrial firms remains foundational. He showed how scale, when coupled with managerial coordination, allows firms to achieve durable cost advantages and vertical integration.
- Bruce Greenwald & Judd Kahn: In Competition Demystified (2005), they argue sustainable competitive advantage stems from barriers to entry—often reinforced by scale, especially via learning curves, supply chains, and distribution.
Network Effects and Platform Strategy
- Jean Tirole & Jean-Charles Rochet: Their foundational work on two-sided markets and platform economics shows how scale-dependent markets (like cloud and AI) naturally concentrate; network effects reinforce the value of leading platforms, and marginal cost advantage compounds alongside user and data scale.
- Geoffrey Parker, Marshall Van Alstyne, Sangeet Paul Choudary: In Platform Revolution (2016) and related research, these thinkers show how value in digital markets accrues disproportionately to platforms that achieve scale, because transaction flows, learning, and innovation all reinforce one another.
Learning Curves and Experience Effects
- The Boston Consulting Group (BCG): In the 1960s, Bruce Henderson’s concept of the “experience curve” formalised the insight that unit costs fall as cumulative output grows, the canonical explanation for why scale delivers persistent cost advantage (see the sketch after this list).
- Clayton Christensen: In The Innovator’s Dilemma, Christensen illustrates how technological discontinuities and learning rates enable new entrants to upend incumbent advantage—unless those incumbents achieve scale in the new paradigm.
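A minimal sketch of the experience-curve relationship referenced above, assuming the textbook form in which unit cost falls by a constant percentage with each doubling of cumulative output; the 20% learning rate and starting cost are illustrative assumptions, not claims about any particular firm.

```python
# Illustrative sketch of Henderson's experience curve: unit cost falls by a
# fixed percentage each time cumulative output doubles. The 20% rate is a
# commonly quoted textbook figure, used here purely for illustration.
import math

def experience_curve_cost(first_unit_cost: float, cumulative_units: float,
                          drop_per_doubling: float = 0.20) -> float:
    """Cost of the n-th unit under a constant-percentage learning rate."""
    exponent = math.log2(1.0 - drop_per_doubling)   # about -0.32 for a 20% drop
    return first_unit_cost * cumulative_units ** exponent

for n in (1, 2, 4, 8, 1_000, 1_000_000):
    print(f"cumulative output {n:>9,}: unit cost {experience_curve_cost(100.0, n):7.2f}")
```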
Supply Chain and Operations
- Taiichi Ohno and Eiji Toyoda (Toyota Production System): Their system embodies the industrial logic that relentless supply-chain optimisation and compounding process improvements, rather than static cost reduction, underpin long-run advantage, especially during periods of rapid demand growth or supply constraint.
Economics of Cloud and AI
- Hal Varian (Google, UC Berkeley): Varian’s analyses of cloud economics demonstrate the massive fixed-cost base and “public utility” logic of hyperscalers. He has argued that AI and cloud converge when scale enables learning (data/usage) to drive further cost and performance improvements.
- Andrew Ng, Yann LeCun, Geoffrey Hinton: Pioneers of deep learning whose work laid the foundations for today’s large models. The empirical “scaling laws” now driving the AI infrastructure build-out, established in later work by researchers at OpenAI and DeepMind, show that model performance improves predictably, roughly as a power law, with the scale of data, compute, and parameter count (see the sketch below).
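The scaling laws mentioned above are empirical power-law fits; the sketch below shows only the functional form, with illustrative constants rather than values from any published study.

```python
# Illustrative power-law scaling: loss falls smoothly as model size grows.
# n_c and alpha are illustrative constants, not fitted values from a paper.

def loss_from_params(n_params: float, n_c: float = 1e14, alpha: float = 0.076) -> float:
    """Hypothetical scaling-law fit: loss = (n_c / n_params) ** alpha."""
    return (n_c / n_params) ** alpha

for n in (1e8, 1e9, 1e10, 1e11, 1e12):
    print(f"{n:.0e} parameters -> loss {loss_from_params(n):.3f}")
```

The shape of this curve is what links capability gains so tightly to infrastructure build-out: improvement keeps coming with scale, but each increment demands roughly an order of magnitude more compute and data.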
Why This Matters Now
Organisations at the digital frontier—notably Microsoft and OpenAI—are now locked in a scale game that is reshaping both industry structure and the global economy. The cost, complexity, and learning rate needed to operate at hyperscale mean that “commodities” (compute, storage, even software itself) cease to be generic. Instead, they become deeply differentiated by embedded knowledge, utilisation efficiency, supply-chain integration, and the ability to orchestrate investments across cycles of innovation.
Nadella’s observation underscores a reality that now applies well beyond technology: the compounding of competitive advantage at scale has become the critical determinant of sector leadership and value capture. This logic is transforming industries as diverse as finance, logistics, pharmaceuticals, and manufacturing—where the ability to build, learn, and optimise at scale fundamentally redefines what was once considered “commodity” business.
In summary: Satya Nadella’s words reflect not only Microsoft’s strategy but a broader economic and technological transformation, deeply rooted in the theory and practice of scale, network effects, and organisational learning. Theorists and practitioners—from Chandler and BCG to Christensen and Varian—have analysed these effects for decades, but the age of AI and cloud has made their insights more decisive than ever. At the heart of it: scale—properly understood and operationalised—remains the ultimate competitive lever.

|
| |
| |
“Generally speaking people hate change. It’s human nature. But change is super important. It’s inevitable. In fact, on my desk in my office I have a little plaque that says 'Change or die.' As a business leader, one of the perspectives you have to have is that you’ve got to constantly evolve and change.” - David Solomon - Goldman Sachs CEO
The quoted insight comes from David M. Solomon, Chief Executive Officer and Chairman of Goldman Sachs, a role he has held since 2018. It was delivered during a high-profile interview at The Economic Club of Washington, D.C., 30 October 2025, as Solomon reflected on the necessity of adaptability both personally and as a leader within a globally significant financial institution.
His statement is emblematic of the strategic philosophy that has defined Solomon’s executive tenure. He uses the ‘Change or die’ principle to highlight the existential imperative for renewal in business, particularly in the context of technological transformation, competitive dynamics, and economic disruption.
Solomon’s leadership at Goldman Sachs has been characterised by deliberate modernisation. He has overseen the integration of advanced technology, notably in artificial intelligence and fintech, implemented culture and process reforms, adapted workforce practices, and expanded strategic initiatives in sustainable finance. His approach blends operational rigour with entrepreneurial responsiveness – a mindset shaped both by his formative years in high-yield credit markets at Drexel Burnham and Bear Stearns, and by his rise through leadership roles at Goldman Sachs.
His remark on change was prompted by questions of business resilience and the need for constant adaptation amidst macroeconomic uncertainty, regulatory flux, and the competitive imperatives of Wall Street. For Solomon, resisting change is an instinct, but enabling it is a necessity for long-term health and relevance — especially for institutions in rapidly converging markets.
About David M. Solomon
- Born 1962, Hartsdale, New York.
- Hamilton College graduate (BA Political Science).
- Early career: Irving Trust, Drexel Burnham, Bear Stearns.
- Joined Goldman Sachs as a partner in 1999, advancing through financing and investment banking leadership.
- CEO from October 2018, Chairman from January 2019.
- Known for a modernisation agenda, openness to innovation and talent, commitment to client service and culture reform.
- Outside finance: Philanthropy, board service, and a second career as electronic dance music DJ “DJ D-Sol”, underscoring a multifaceted approach to leadership and personal renewal.
Theoretical Backstory: Leading Thinkers on Change and Organisational Adaptation
Solomon’s philosophy echoes decades of foundational theory in business strategy and organisational behaviour:
Charles Darwin (1809–1882): While not a business theorist, Darwin’s account of adaptation through natural selection, popularly summarised as “survival of the fittest” (a phrase coined by Herbert Spencer and later adopted by Darwin), is often cited in strategic literature to emphasise the adaptive imperative: those best equipped to change, survive.
Peter Drucker (1909–2005): Drucker, regarded as the father of modern management, wrote extensively on innovation, entrepreneurial management and the need for “planned abandonment.” He argued, “The greatest danger in times of turbulence is not the turbulence; it is to act with yesterday’s logic.” Drucker’s legacy forms a pillar of contemporary change management, advising leaders not only to anticipate change but to institutionalise it.
John Kotter (b. 1947): Kotter’s model for Leading Change remains a classic in change management. His eight-step framework starts with establishing a sense of urgency and is grounded in the idea that successful transformation is both necessary and achievable only with decisive leadership, clear vision, and broad engagement. Kotter demonstrated that people’s resistance to change is natural, but can be overcome through structured actions and emotionally resonant leadership.
Clayton Christensen (1952–2020): Christensen’s work on disruptive innovation clarified how incumbents often fail by ignoring, dismissing, or underinvesting in change, even when it is inevitable. His concept of the “Innovator’s Dilemma” remains seminal, showing that leaders must embrace change not as an abstract imperative but as a strategic necessity, lest they be replaced or rendered obsolete.
Rosabeth Moss Kanter: Kanter’s work focuses on the human dynamics of change, the importance of culture, empowerment, and the “innovation habit” in organisations. She holds that the secret to business success is “constant, relentless innovation” and that resistance to change is deeply psychological, calling for leaders to engineer positive environments for innovation.
Integration: The Leadership Challenge
Solomon’s ethos channels these frameworks into practical executive guidance. For business leaders, particularly in financial services and Fortune 500 firms, the lesson is clear: inertia is lethal; organisational health depends on reimagining processes, culture, and client engagement for tomorrow’s challenges. The psychological aversion to change must be managed actively at all levels — from the boardroom to the front line.
In summary, the context of Solomon’s quote reflects not only a personal credo but also the consensus of generations of theoretical and practical leadership: only those prepared to “change or die” can expect to thrive and endure in an era defined by speed, disruption, and relentless unpredictability.

|
| |
| |
“[With AI] we're not building animals. We're building ghosts or spirits.” - Andrej Karpathy - Ex-OpenAI, Ex-Tesla AI
Andrej Karpathy, renowned for his leadership roles at OpenAI and Tesla’s Autopilot programme, has been at the centre of advances in deep learning, neural networks, and applied artificial intelligence. His work traverses both academic research and industrial deployment, granting him a panoramic perspective on the state and direction of AI.
When Karpathy refers to building “ghosts or spirits,” he is drawing a conceptual line between biological intelligence—the product of millions of years of evolution—and artificial intelligence as developed through data-driven, digital systems. In his view, animals are “baked in” with instincts, embodiment, and innate learning capacities shaped by evolution, a process unfolding over geological timeframes. By contrast, today’s AI models are “ghosts” in the sense that they are ethereal, fully digital artefacts, trained to imitate human-generated data rather than to evolve or learn through direct interaction with the physical world. They lack bodily instincts and the evolutionary substrate that endows animals with survival strategies and adaptation mechanisms.
Karpathy describes the pre-training process that underpins large language models as a form of “crappy evolution”—a shortcut that builds digital entities by absorbing the statistical patterns of internet-scale data without the iterative adaptation of embodied beings. Consequently, these models are not “born” into the world like animals with built-in survival machinery; instead, they are bootstrapped as “ghosts,” imitating but not experiencing life.
The Cognitive Core—Karpathy’s Vision for AI Intelligence
Karpathy’s thinking has advanced towards the critical notion of the “cognitive core”: the kernel of intelligence responsible for reasoning, abstraction, and problem-solving, abstracted away from encyclopaedic factual knowledge. He argues that the true magic of intelligence is not in the passive recall of data, but in the flexible, generalisable ability to manipulate ideas, solve problems, and intuit patterns—capabilities that a system exhibits even when deprived of pre-programmed facts or exhaustive memory.
He warns against confusing memorisation (the stockpiling of internet facts within a model) with general intelligence, which arises from this cognitive core. The most promising path, in his view, is to isolate and refine this core, stripping away the accretions of memorised data, thereby developing something akin to a “ghost” of reasoning and abstraction rather than an “animal” shaped by instinct and inheritance.
This approach entails significant trade-offs: a cognitive core lacks the encyclopaedic reach of today’s massive models, but gains in adaptability, transparency, and the capacity for compositional, creative thought. By foregrounding reasoning machinery, Karpathy posits that AI can begin to mirror not the inflexibility of animals, but the open-ended, reflective qualities that characterise high-level problem-solving.
Karpathy’s Journey and Influence
Karpathy’s influence is rooted in a career spent on the frontier of AI research and deployment. His early proximity to Geoffrey Hinton at the University of Toronto placed him at the launch-point of the convolutional neural networks revolution, which fundamentally reshaped computer vision and pattern recognition.
At OpenAI, Karpathy contributed to an early focus on training agents to master digital environments such as Atari games, a direction he now considers, in retrospect, to have been premature. He found greater promise in systems that interact with the digital world through knowledge work (precursors to today’s agentic models), a vision he is now helping to realise through ongoing work in educational technology and AI deployment.
Later, at Tesla, he directed the transformation of autonomous vehicles from demonstration to product, gaining hard-won appreciation for the “march of nines”—the reality that progressing from system prototypes that work 90% of the time to those that work 99.999% of the time requires exponentially more effort. This experience informs his scepticism towards aggressive timelines for “AGI” and his insistence on the qualitative differences between robust system deployment and controlled demonstrations.
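A back-of-the-envelope illustration of why the “march of nines” is so punishing, assuming (purely for illustration) that each additional nine of reliability requires a comparable block of engineering work while cutting the tolerated failure rate by another factor of ten:

```python
# Illustrative arithmetic behind the "march of nines": each added nine of
# reliability cuts the permitted failure rate by 10x. Treating each nine as a
# comparable block of engineering effort is a deliberately crude assumption.

for nines in range(1, 6):
    failure_rate = 10 ** (-nines)
    reliability = 1 - failure_rate
    print(f"{nines} nine(s): {reliability:.5%} reliable -> "
          f"{failure_rate * 1_000_000:>9,.0f} failures per million runs")
```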
The Leading Theorists Shaping the Debate
Karpathy’s conceptual framework emerges amid vibrant discourse within the AI community, shaped by several seminal thinkers:
Richard Sutton’s “bitter lesson” posits that scale and general-purpose learning algorithms, rather than domain-specific tricks, ultimately win, a view that points towards building animal-like intelligence that learns from direct experience. Karpathy, however, notes that current development practices, with their reliance on dataset imitation, sidestep the deep embodiment and evolutionary learning that define animal cognition. Instead, AI today creates digital ghosts: entities whose minds are grounded not in physical reality but in the manifold of internet text and data.
Hinton and LeCun supply the neural and architectural foundations (the “cortex” and its reasoning traces), while Karpathy and other commentators note the absence of rich, consolidated memory (a hippocampus analogue), of instincts (an amygdala analogue), and of the capacity for continual, self-motivated interaction with the world.
Why “Ghosts,” Not “Animals”?
The distinction is not simply philosophical. It carries direct consequences for:
- Capabilities: AI “ghosts” excel at pattern reproduction, simulation, and surface reasoning but lack the embodied, instinctual grounding (spatial navigation, sensorimotor learning) of animals.
- Limitations: They are subject to model collapse, producing uniform, repetitive outputs, lacking the spontaneous creativity and entropy seen in human (particularly child) cognition.
- Future Directions: The field is now oriented towards distilling this cognitive core, seeking a scalable, adaptable reasoning engine—compact, efficient, and resilient to overfitting—rather than continuing to bloat models with ever more static memory.
This lens sharpens expectations: the way forward is not to mimic biology in its totality, but to pursue the unique strengths and affordances of a digital, disembodied intelligence—a spirit of the datasphere, not a beast evolved in the forest.
Broader Significance
Karpathy’s “ghosts” metaphor crystallises a critical moment in the evolution of AI as a discipline. It signals a turning point: the shift from brute-force memorisation of the internet to intelligent, creative algorithms capable of abstraction, reasoning, and adaptation.
This reframing is shaping not only the strategic priorities of the most advanced labs, but also the philosophical and practical questions underpinning the next decade of AI research and deployment. As AI becomes increasingly present in society, understanding its nature—not as an artificial animal, but as a digital ghost—will be essential to harnessing its strengths and mitigating its limitations.

|
| |
|