
News and Tools

Breaking Business News


Our selection of the top business news sources on the web.

Quote: Dr. Fei-Fei Li – Stanford Professor – world-renowned authority in artificial intelligence

“That ability that humans have, it’s the combination of creativity and abstraction. I do not see today’s AI or tomorrow’s AI being able to do that yet.” – Dr. Fei-Fei Li – Stanford Professor – world-renowned authority in artificial intelligence

Dr. Li’s statement came amid wide speculation about the near-term prospects for artificial general intelligence (AGI) and superintelligence. While current AI already exceeds human capacity in specific domains (such as language translation, memory recall, and vast-scale data analysis), Dr. Li draws a line at creative abstraction—the human ability to form new concepts and theories that radically change our understanding of the world. She underscores that, despite immense data and computational resources, AI does not demonstrate the generative leap that allowed Newton to discover classical mechanics or Einstein to reshape physics with relativity. Dr. Li insists that, absent fundamental conceptual breakthroughs, neither today’s nor tomorrow’s AI can replicate this synthesis of creativity and abstract reasoning.

About Dr. Fei-Fei Li

Dr. Fei-Fei Li holds the title of Sequoia Capital Professor of Computer Science at Stanford University and is a world-renowned authority in artificial intelligence, particularly in computer vision and human-centric AI. She is best known for creating ImageNet, the dataset that triggered the deep learning revolution in computer vision—a cornerstone of modern AI systems. As the founding co-director of Stanford’s Institute for Human-Centered Artificial Intelligence (HAI), Dr. Li has consistently championed the need for AI that advances, rather than diminishes, human dignity and agency. Her research, with over 400 scientific publications, has pioneered new frontiers in machine learning, neuroscience, and their intersection.

Her leadership extends beyond academia: she served as chief scientist of AI/ML at Google Cloud, sits on international boards, and is deeply engaged in policy, notably as a special adviser to the UN. Dr. Li is acclaimed for her advocacy in AI ethics and diversity, notably co-founding AI4ALL, a non-profit enabling broader participation in the AI field. Often described as the “godmother of AI,” she is an elected member of the US National Academy of Engineering and the National Academy of Medicine. Her personal journey—from emigrating from Chengdu, China, to supporting her parents’ small business in New Jersey, to her trailblazing career—is detailed in her acclaimed 2023 memoir, The Worlds I See.

Remarks on Creativity, Abstraction, and AI: Theoretical Roots

The distinction Li draws—between algorithmic pattern-matching and genuine creative abstraction—addresses a foundational question in AI: What constitutes intelligence, and is it replicable in machines? This theme resonates through the works of several canonical theorists:

  • Alan Turing (1912–1954): Regarded as the father of computer science, Turing posed the question of machine intelligence in his pivotal 1950 paper, “Computing Machinery and Intelligence”. He proposed what came to be known as the Turing Test: if a machine could converse indistinguishably from a human, could it be deemed intelligent? Turing hinted at the limits, but also the theoretical possibility, of machine abstraction.
  • Herbert Simon and Allen Newell: Pioneers of early “symbolic AI”, Simon and Newell framed intelligence as symbol manipulation; their experiments (the Logic Theorist and General Problem Solver) made some progress in abstract reasoning but found creative leaps elusive.
  • Marvin Minsky (1927–2016): Co-founder of the MIT AI Lab, Minsky believed creativity could in principle be mechanised, but anticipated it would require complex architectures that integrate many types of knowledge. His work, especially The Society of Mind, remained vital but speculative.
  • John McCarthy (1927–2011): While he named the field “artificial intelligence” and developed the LISP programming language, McCarthy was cautious about claims of broad machine creativity, viewing abstraction as an open challenge.
  • Geoffrey Hinton, Yann LeCun, Yoshua Bengio: Often called the godfathers of deep learning, these researchers demonstrated that neural networks can match or surpass humans in perception and narrow problem-solving, but they have themselves highlighted the gap between statistical learning and the ingenuity seen in human discovery.
  • Nick Bostrom: In Superintelligence (2014), Bostrom analysed risks and trajectories for machine intelligence exceeding humans, but acknowledged that qualitative leaps in creativity—paradigm shifts, theory building—remain a core uncertainty.
  • Gary Marcus: An outspoken critic of current AI, Marcus argues that without genuine causal reasoning and abstract knowledge, current models (including the most advanced deep learning systems) are far from truly creative intelligence.

Synthesis and Current Debates

Across these traditions, a consistent theme emerges: while AI has achieved superhuman accuracy, speed, and recall in structured domains, genuine creativity—the ability to abstract from prior knowledge to new paradigms—is still uniquely human. Dr. Fei-Fei Li, by foregrounding this distinction, not only situates herself within this lineage but also aligns her ongoing research on “large world models” with an explicit goal: to design AI tools that augment—but do not seek to supplant—human creative reasoning and abstract thought.

Her caution, rooted in both technical expertise and a broader philosophical perspective, stands as a rare check on techno-optimism. It articulates the stakes: as machine intelligence accelerates, the need to centre human capabilities, dignity, and judgement—especially in creativity and abstraction—becomes not just prudent but essential for responsibly shaping our shared future.

Quote: Dr Eric Schmidt – Ex-Google CEO

“I worry a lot about … Africa. And the reason is: how does Africa benefit from [AI]? There’s obviously some benefit of globalisation, better crop yields, and so forth. But without stable governments, strong universities, major industrial structures – which Africa, with some exceptions, lacks – it’s going to lag.” – Dr Eric Schmidt – Former Google CEO

Dr Eric Schmidt’s observation stems from his experience at the highest levels of the global technology sector and his acute awareness of both the promise and the precariousness of the coming AI age. His warning about Africa’s risk of lagging in AI adoption and benefit is rooted in today’s uneven technological landscape and long-standing structural challenges facing the continent.

About Dr Eric Schmidt

Dr Eric Schmidt is one of the most influential technology executives of the 21st century. As CEO of Google from 2001 to 2011, he oversaw Google’s transformation from a Silicon Valley start-up into a global technology leader. Schmidt provided the managerial and strategic backbone that enabled Google’s explosive growth, product diversification, and a culture of robust innovation. After Google, he continued as Executive Chairman and Technical Advisor through Google’s restructuring into Alphabet, before transitioning to philanthropic and strategic advisory work. Notably, Schmidt has played significant roles in US national technology strategy, chairing the US National Security Commission on Artificial Intelligence and founding the bipartisan Special Competitive Studies Project, which advises on the intersections of AI, security, and economic competitiveness.

With a background encompassing leading roles at Sun Microsystems, Novell, and advisory positions at Xerox PARC and Bell Labs, Schmidt’s career reflects deep immersion in technology and innovation. He is widely regarded as a strategic thinker on the global opportunities and risks of technology, regularly offering perspective on how AI, digital infrastructure, and national competitiveness are shaping the future economic order.

Context of the Quotation

Schmidt’s remark appeared during a high-level panel at the Future Investment Initiative (FII9), in conversation with Dr Fei-Fei Li of Stanford and Peter Diamandis. The discussion centred on “What Happens When Digital Superintelligence Arrives?” and explored the likely economic, social, and geopolitical consequences of rapid AI advancement.

In this context, Schmidt identified a core risk: that AI’s benefits will accrue unevenly across borders, amplifying existing inequalities. He emphasised that while powerful AI tools may drive exceptional economic value and efficiencies—potentially in the trillions of dollars—these gains are concentrated by network effects, investment, and infrastructure. Schmidt singled out Africa as particularly vulnerable: absent stable governance, strong research universities, or robust industrial platforms—critical prerequisites for technology absorption—Africa faces the prospect of deepening relative underdevelopment as the AI era accelerates. The comment reflects a broader worry in technology and policy circles: global digitisation is likely to amplify rather than repair structural divides unless deliberate action is taken.

Leading Theorists and Thinking on the Subject

The dynamics Schmidt describes are at the heart of an emerging literature on the “AI divide,” digital colonialism, and the geopolitics of AI. Prominent thinkers in these debates include:

  • Professor Fei-Fei Li
    A leading AI scientist, Dr Li has consistently framed AI’s potential as contingent on human-centred design and equitable access. She highlights the distinction between the democratisation of access (e.g., cheaper healthcare or education via AI) and actual shared prosperity—which hinges on local capacity, policy, and governance. Her work underlines that technical progress does not automatically result in inclusive benefit, validating Schmidt’s concerns.
  • Kate Crawford and Timnit Gebru
    Both have written extensively on the risks of algorithmic exclusion, surveillance, and the concentration of AI expertise within a handful of countries and firms. In particular, Crawford’s Atlas of AI and Gebru’s leadership in AI ethics foreground how global AI development mirrors deeper resource and power imbalances.
  • Nick Bostrom and Stuart Russell
    Their theoretical contributions address the broader existential and ethical challenges of artificial superintelligence, but they also underscore risks of centralised AI power—technically and economically.
  • Ndubuisi Ekekwe, Bitange Ndemo, and Nanjira Sambuli
    These African thought leaders and scholars examine how Africa can leapfrog in digital adoption but caution that profound barriers—structural, institutional, and educational—must be addressed for the continent to benefit from AI at scale.
  • Eric Schmidt
    Schmidt himself has become a touchstone in policy/tech strategy circles, having chaired the US National Security Commission on Artificial Intelligence. The Commission’s reports warned of a bifurcated world where AI capabilities—and thus economic and security advantages—are ever more concentrated.

Structural Elements Behind the Quote

Schmidt’s remark draws attention to a convergence of factors:

  • Institutional robustness
    Long-term AI prosperity requires stable governments, responsive regulatory environments, and a track record of supporting investment and innovation. This is lacking in many, though not all, of Africa’s economies.
  • Strong universities and research ecosystems
    AI innovation is talent- and research-intensive. Weak university networks limit both the creation and absorption of advanced technologies.
  • Industrial and technological infrastructure
    A mature industrial base enables countries and companies to adapt AI for local benefit. The absence of such infrastructure often results in passive consumption of foreign technology, forgoing participation in value creation.
  • Network effects and tech realpolitik
    Advanced AI tools, data centres, and large-scale compute power are disproportionately located in a few advanced economies. The ability to partner with these “hyperscalers”—primarily in the US—shapes national advantage. Schmidt argues that regions which fail to make strategic investments or partnerships risk being left further behind.

Summary

Schmidt’s statement is not simply a technical observation but an acute geopolitical and developmental warning. It reflects current global realities where AI’s arrival promises vast rewards, but only for those with the foundational economic, political, and intellectual capital in place. For policy makers, investors, and researchers, the implication is clear: bridging the digital-structural gap requires not only technology transfer but also building resilient, adaptive institutions and talent pipelines that are locally grounded.

Quote: Trevor McCourt – Extropic CTO

“We need something like 10 terawatts in the next 20 years to make LLM systems truly useful to everyone… Nvidia would need to 100× output… You basically need to fill Nevada with solar panels to provide 10 terawatts of power, at a cost around the world’s GDP. Totally crazy.” – Trevor McCourt – Extropic CTO

Trevor McCourt, Chief Technology Officer and co-founder of Extropic, has emerged as a leading voice articulating a paradox at the heart of artificial intelligence advancement: the technology that promises to democratise intelligence across the planet may, in fact, be fundamentally unscalable using conventional infrastructure. His observation about the terawatt imperative captures this tension with stark clarity—a reality increasingly difficult to dismiss as speculative.

Who Trevor McCourt Is

McCourt brings a rare convergence of disciplinary expertise to his role. Trained in mechanical engineering at the University of Waterloo (graduating 2015) and holding advanced credentials from the Massachusetts Institute of Technology (2020), he combines rigorous physical intuition with deep software systems architecture. Prior to co-founding Extropic, McCourt worked as a Principal Software Engineer, establishing a track record of delivering infrastructure at scale: he designed microservices-based cloud platforms that improved deployment speed by 40% whilst reducing operational costs by 30%, co-invented a patented dynamic caching algorithm for distributed systems, and led open-source initiatives that garnered over 500 GitHub contributors.

This background—spanning mechanical systems, quantum computation, backend infrastructure, and data engineering—positions McCourt uniquely to diagnose what others in the AI space have overlooked: that energy is not merely a cost line item but a binding physical constraint on AI’s future deployment model.

Extropic, which McCourt co-founded alongside Guillaume Verdon (formerly a quantum technology lead at Alphabet’s X division), closed a $14.1 million Series Seed funding round in 2023, led by Kindred Ventures and backed by institutional investors including Buckley Ventures, HOF Capital, and OSS Capital. The company now stands at approximately 15 people distributed across integrated circuit design, statistical physics research, and machine learning—a lean team assembled to pursue what McCourt characterises as a paradigm shift in compute architecture.

The Quote in Strategic Context

McCourt’s assertion that “10 terawatts in the next 20 years” is required for universal LLM deployment, coupled with his observation that this would demand filling Nevada with solar panels at a cost approaching global GDP, represents far more than rhetorical flourish. It is the product of methodical back-of-the-envelope engineering calculation.

His reasoning unfolds as follows:

From Today’s Baseline to Mass Deployment:
A text-based assistant operating at today’s reasoning capability (approximating GPT-5-Pro performance) deployed to every person globally would consume roughly 20% of the current US electrical grid—approximately 100 gigawatts. This is not theoretical; McCourt derives this from first principles: transformer models consume roughly 2 × (parameters × tokens) floating-point operations; modern accelerators like Nvidia’s H100 operate at approximately 0.7 picojoules per FLOP; population-scale deployment implies continuous, always-on inference at scale.
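
A minimal sketch of that back-of-the-envelope estimate follows. The 2 × (parameters × tokens) rule of thumb and the 0.7 picojoule-per-FLOP figure come from the passage above; the model size and per-person token rate are illustrative assumptions chosen to land near the stated ~100 gigawatt baseline, not McCourt’s actual inputs:

```python
# Back-of-the-envelope check of the ~100 GW baseline, using the rule of
# thumb quoted above: FLOPs per token ~= 2 * parameters, at ~0.7 pJ/FLOP.
PARAMS = 1.0e12            # assumed active parameters (hypothetical)
ENERGY_PER_FLOP = 0.7e-12  # joules per FLOP (H100-class figure from the text)
POPULATION = 8.0e9         # people served
TOKENS_PER_SEC = 9         # assumed always-on tokens/second per person

flops_per_token = 2 * PARAMS                          # ~2e12 FLOPs/token
joules_per_token = flops_per_token * ENERGY_PER_FLOP  # ~1.4 J/token
power_watts = joules_per_token * TOKENS_PER_SEC * POPULATION

print(f"Energy per token: {joules_per_token:.2f} J")
print(f"Continuous draw:  {power_watts / 1e9:.0f} GW")  # ~100 GW
```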

Adding Modalities and Reasoning:
Upgrade that assistant to include video capability at just 1 frame per second (envisioning Meta-style augmented-reality glasses worn by billions), and the grid requirement multiplies by approximately 10×. Enhance the reasoning capability to match models working on the ARC AGI benchmark—problems of human-level reasoning difficulty—and the text assistant alone requires a 10× expansion: 5 terawatts. Push further to expert-level systems capable of solving International Mathematical Olympiad problems, and the requirement reaches 100× the current grid.
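
Expressed the same way, the scenario ladder maps onto grid multiples. This sketch takes the US grid at roughly 500 gigawatts of average load, which is what the “100 gigawatts is about 20% of the grid” baseline implies, and applies the multipliers quoted in the passage:

```python
# Scenario ladder in multiples of today's US grid (~0.5 TW average load,
# implied by "100 GW ~= 20% of the grid" above).
US_GRID_TW = 0.5

scenarios_in_grid_multiples = {
    "text assistant for everyone": 0.2,   # ~100 GW baseline
    "+ video at 1 FPS (AR glasses)": 10,  # "roughly 10x the grid"
    "ARC-AGI-level text reasoning": 10,   # "a 10x of today's grid": ~5 TW
    "IMO-level expert systems": 100,      # "100x the current grid"
}
for name, mult in scenarios_in_grid_multiples.items():
    print(f"{name:32s} {mult * US_GRID_TW:7.1f} TW  ({mult:g}x grid)")
```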

Economic Impossibility:
A single gigawatt data centre costs approximately $10 billion to construct. The infrastructure required for mass-market AI deployment rapidly enters the hundreds of trillions of dollars—approaching or exceeding global GDP. Nvidia’s current manufacturing capacity would itself require a 100-fold increase to support even McCourt’s more modest scenarios.
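
A quick sanity check of the cost arithmetic, assuming the $10 billion per gigawatt figure above and taking world GDP at roughly $110 trillion (an outside assumption used only for scale):

```python
# Cost sanity check: "$10 billion per gigawatt" of data-centre capacity.
COST_PER_GW_USD = 10e9
WORLD_GDP_USD = 110e12   # rough recent figure, assumed for scale only

for tw in (5, 10, 50):   # terawatt scenarios from the ladder above
    cost = tw * 1000 * COST_PER_GW_USD
    print(f"{tw:3d} TW -> ${cost / 1e12:,.0f} trillion "
          f"({cost / WORLD_GDP_USD:.1f}x world GDP)")
```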

Physical Reality Check:
Over the past 75 years, US grid capacity has grown remarkably consistently—a nearly linear expansion. Sam Altman’s public commitment to building one gigawatt of data centre capacity per week alone would require 3–5× the historical rate of grid growth. Credible plans for mass-market AI acceleration push this requirement into the terawatt range over two decades—a rate of infrastructure expansion that is not merely economically daunting but potentially physically impossible given resource constraints, construction timelines, and raw materials availability.
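
The rate comparison is also simple arithmetic. The historical figure below is an assumption, roughly linear growth of US capacity since 1950 on the order of 15 gigawatts per year; the point is the multiple rather than the precise number:

```python
# Rate comparison: "one gigawatt per week" vs assumed historical US growth.
HISTORICAL_GW_PER_YEAR = 15          # assumed long-run average (~linear)
altman_gw_per_year = 1 * 52          # 1 GW/week

print(f"Altman pace: {altman_gw_per_year} GW/yr "
      f"= {altman_gw_per_year / HISTORICAL_GW_PER_YEAR:.1f}x historical")

# Reaching 10 TW of new capacity within 20 years:
required = 10_000 / 20               # GW per year
print(f"10 TW in 20 yrs: {required:.0f} GW/yr "
      f"= {required / HISTORICAL_GW_PER_YEAR:.0f}x historical")
```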

McCourt’s conclusion: the energy path is not simply expensive; it is economically and physically untenable. The paradigm must change.

Intellectual Foundations: Leading Theorists in Energy-Efficient Computing and Probabilistic AI

Understanding McCourt’s position requires engagement with the broader intellectual landscape that has shaped thinking about computing’s physical limits and probabilistic approaches to machine learning.

Geoffrey Hinton—Pioneering Energy-Based Models and Probabilistic Foundations:
Few figures loom larger in the theoretical background to Extropic’s work than Geoffrey Hinton. Decades before the deep learning boom, Hinton developed foundational theory around Boltzmann machines and energy-based models (EBMs)—the conceptual framework that treats learning as the discovery and inference of complex probability distributions. His work posits that machine learning, at its essence, is about fitting a probability distribution to observed data and then sampling from it to generate new instances consistent with that distribution. Hinton’s recognition with the 2024 Nobel Prize in Physics (shared with John Hopfield) for “foundational discoveries and inventions that enable machine learning with artificial neural networks” reflects the deep prescience of this probabilistic worldview. More than theoretical elegance, this framework points toward an alternative computational paradigm: rather than spending vast resources on deterministic matrix operations (the GPU model), a system optimised for efficient sampling from complex distributions would align computation with the statistical nature of intelligence itself.
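
A toy sketch may make the paradigm concrete. The code below Gibbs-samples from a three-unit Boltzmann-style energy-based model with p(x) proportional to exp(-E(x)); inference is framed as sampling from an energy-defined distribution rather than as deterministic matrix arithmetic. The weights are arbitrary illustrative values, not drawn from Hinton’s or Extropic’s actual systems:

```python
import numpy as np

# Tiny energy-based model over binary units x in {-1, +1}^n with
# E(x) = -0.5 * x^T W x (symmetric weights, zero diagonal).
rng = np.random.default_rng(0)
W = np.array([[0.0, 1.2, -0.5],
              [1.2, 0.0,  0.8],
              [-0.5, 0.8, 0.0]])

def gibbs_sample(W, steps=5000):
    n = W.shape[0]
    x = rng.choice([-1.0, 1.0], size=n)
    samples = []
    for _ in range(steps):
        for i in range(n):
            # Conditional: p(x_i = +1 | rest) = sigmoid(2 * sum_j W_ij x_j)
            field = W[i] @ x
            p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))
            x[i] = 1.0 if rng.random() < p_plus else -1.0
        samples.append(x.copy())
    return np.array(samples)

samples = gibbs_sample(W)
print("Mean unit activations:", samples[1000:].mean(axis=0))  # after burn-in
```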

Michael Frank—Physics of Reversible and Adiabatic Computing:
Michael Frank, a senior scientist now at Vaire (a near-zero-energy chip company), has spent decades at the intersection of physics and computing. His research programme, initiated at MIT in the 1990s and continued at the University of Florida, Florida State, and Sandia National Laboratories, focuses on reversible computing and adiabatic CMOS—techniques aimed at reducing the fundamental energy cost of information processing. Frank’s work addresses a deep truth: in conventional digital logic, information erasure is thermodynamically irreversible and expensive, dissipating energy as heat. By contrast, reversible computing minimises such erasure, thereby approaching theoretical energy limits set by physics rather than by engineering convention. Whilst Frank’s trajectory and Extropic’s diverge in architectural detail, both share the conviction that energy efficiency must be rooted in physical first principles, not merely in engineering optimisation of existing paradigms.

Yoshua Bengio and Chris Bishop—Probabilistic Learning Theory:
Leading researchers in deep generative modelling—including Bengio, Bishop, and others—have consistently advocated for probabilistic frameworks as foundational to machine learning. Their work on diffusion models, variational inference, and sampling-based approaches has legitimised the view that efficient inference is not about raw compute speed but about statistical appropriateness. This theoretical lineage underpins the algorithmic choices at Extropic: energy-based models and denoising thermodynamic models are not novel inventions but rather a return to first principles, informed by decades of probabilistic ML research.

Richard Feynman—Foundational Physics of Computing:
Though less directly cited in contemporary AI discourse, Feynman’s 1982 lectures on the physics of computation remain conceptually foundational. Feynman observed that computation’s energy cost is ultimately governed by physical law, not engineering ingenuity alone. His observations on reversibility and the thermodynamic cost of irreversible operations informed the entire reversible-computing movement and, by extension, contemporary efforts to align computation with physics rather than against it.

Contemporary Systems Thinkers (Sam Altman, Jensen Huang):
Counterintuitively, McCourt’s critique is sharpened by engagement with the visionary statements of industry leaders who have perhaps underestimated energy constraints. Altman’s commitment to building one gigawatt of data centre capacity per week, and Huang’s roadmaps for continued GPU scaling, have inadvertently validated McCourt’s concern: even the most optimistic industrial plans require infrastructure expansion at rates that collide with physical reality. McCourt uses their own projections as evidence for the necessity of paradigm change.

The Broader Strategic Narrative

McCourt’s remarks must be understood within a convergence of intellectual and practical pressures:

The Efficiency Plateau:
Digital logic efficiency, measured as energy per operation, has stalled. Transistor capacitance plateaued around the 10-nanometre node; operating voltage is thermodynamically bounded near 300 millivolts. Architectural optimisations (quantisation, sparsity, tensor cores) improve throughput but do not overcome these physical barriers. The era of “free lunch” efficiency gains from Moore’s Law miniaturisation has ended.

Model Complexity Trajectory:
Whilst small models have improved at fixed benchmarks, frontier AI systems—those solving novel, difficult problems—continue to demand exponentially more compute. AlphaGo required ~1 exaFLOP per game; AlphaCode required ~100 exaFLOPs per coding problem; the system solving International Mathematical Olympiad problems required ~100,000 exaFLOPs. Model miniaturisation is not offsetting capability ambitions.
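
Combining those compute figures with the ~0.7 picojoules per FLOP accelerator figure quoted earlier gives a rough sense of the energy per task. This counts chip energy only, ignoring memory, networking, and cooling overheads, so real numbers would be higher:

```python
# Energy implied by the compute-per-task trajectory above, at ~0.7 pJ/FLOP.
PJ_PER_FLOP = 0.7e-12  # joules per FLOP
EXA = 1e18

tasks = {
    "AlphaGo (per game)": 1 * EXA,
    "AlphaCode (per problem)": 100 * EXA,
    "IMO-level system (per problem)": 100_000 * EXA,
}
for name, flops in tasks.items():
    kwh = flops * PJ_PER_FLOP / 3.6e6   # joules -> kilowatt-hours
    print(f"{name:32s} {kwh:>12,.2f} kWh")
```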

Market Economics:
The AI market has attracted trillions in capital precisely because the economic potential is genuine and vast. Yet this same vastness creates the energy paradox: truly universal AI deployment would consume resources incompatible with global infrastructure and economics. The contradiction is not marginal; it is structural.

Extropic’s Alternative:
Extropic proposes to escape this local minimum through radical architectural redesign. Thermodynamic Sampling Units (TSUs)—circuits architected as arrays of probabilistic sampling cells rather than multiply-accumulate units—would natively perform the statistical operations that diffusion and generative AI models require. Early simulations suggest energy efficiency improvements of 10,000× on simple benchmarks compared to GPU-based approaches. Hybrid algorithms combining TSUs with compact neural networks on conventional hardware could deliver intermediate gains whilst establishing a pathway toward a fundamentally different compute paradigm.

Why This Matters Now

The quote’s urgency reflects a dawning recognition across technical and policy circles that energy is not a peripheral constraint but the central bottleneck determining AI’s future trajectory. The choice, as McCourt frames it, is stark: either invest in a radically new architecture, or accept that mass-market AI remains perpetually out of reach—a luxury good confined to the wealthy and powerful rather than a technology accessible to humanity.

This is not mere speculation or provocation. It is engineering analysis grounded in physics, economics, and historical precedent, articulated by someone with the technical depth to understand both the problem and the extraordinary difficulty of solving it.

Quote: Trevor McCourt – Extropic CTO

“If you upgrade that assistant to see video at 1 FPS – think Meta’s glasses… you’d need to roughly 10× the grid to accommodate that for everyone. If you upgrade the text assistant to reason at the level of models working on the ARC AGI benchmark… even just the text assistant would require around a 10× of today’s grid.” – Trevor McCourt – Extropic CTO

The quoted remark by Trevor McCourt, CTO of Extropic, underscores a crucial bottleneck in artificial intelligence scaling: energy consumption outpaces technological progress in compute efficiency, threatening the viability of universal, always-on AI. The quote translates hard technical extrapolation into plain language—projecting that if every person were to have a vision-capable assistant running at just 1 video frame per second, or if text models achieved a level of reasoning comparable to ARC AGI benchmarks, global energy infrastructure would need to multiply several times over, amounting to many terawatts—figures that quickly reach into economic and physical absurdity.

Backstory and Context of the Quote & Trevor McCourt

Trevor McCourt is the co-founder and Chief Technology Officer of Extropic, a pioneering company targeting the energy barrier limiting mass-market AI deployment. With multidisciplinary roots—a blend of mechanical engineering and quantum computing honed at the University of Waterloo and the Massachusetts Institute of Technology—McCourt contributed to projects at Google before moving to the hardware-software frontier. His leadership at Extropic is defined by a willingness to challenge orthodoxy and champion a first-principles, physics-driven approach to AI compute architecture.

The quote arises from a keynote on how present-day large language models and diffusion AI models are fundamentally energy-bound. McCourt’s analysis is rooted in practical engineering, economic realism, and deep technical awareness: the computational demands of state-of-the-art assistants vastly outstrip what today’s grid can provide if deployed at population scale. This is not merely an engineering or machine learning problem, but a macroeconomic and geopolitical dilemma.

Extropic proposes to address this impasse with Thermodynamic Sampling Units (TSUs)—a new silicon compute primitive designed to natively perform probabilistic inference, consuming orders of magnitude less power than GPU-based digital logic. Here, McCourt follows the direction set by energy-based probabilistic models and advances it both in hardware and algorithm.

McCourt’s career has been defined by innovation at the technical edge: microservices in cloud environments, patented improvements to dynamic caching in distributed systems, and research in scalable backend infrastructure. This breadth, from academic research to commercial deployment, enables his holistic critique of the GPU-centred AI paradigm, as well as his leadership at Extropic’s deep technology startup.

Leading Theorists & Influencers in the Subject

Several waves of theory and practice converge in McCourt’s and Extropic’s work:

1. Geoffrey Hinton (Energy-Based and Probabilistic Models):
Long before deep learning’s mainstream embrace, Hinton’s foundational work on Boltzmann machines and energy-based models explored the idea of learning and inference as sampling from complex probability distributions. These early probabilistic paradigms anticipated both the difficulties of scaling and the algorithmic challenges that underlie today’s generative models. Hinton’s recognition—including the 2024 Nobel Prize in Physics, shared with John Hopfield, for foundational work on neural networks—cements his stature as a theorist whose footprints underpin Extropic’s approach.

2. Michael Frank (Reversible Computing):
Frank is a prominent physicist in reversible and adiabatic computing, having led major advances at MIT, Sandia National Laboratories, and other institutions. His research investigates how the physics of computation can reduce the fundamental energy cost of information processing—directly relevant to Extropic’s mission. Frank’s focus on low-energy computing provides a conceptual environment for approaches like TSUs to flourish.

3. Chris Bishop & Yoshua Bengio (Probabilistic Machine Learning):
Leaders like Bishop and Bengio have shaped the field’s probabilistic foundations, advocating both for deep generative models and for the practical co-design of hardware and algorithms. Their research has stressed the need to reconcile statistical efficiency with computational tractability—a tension at the core of Extropic’s narrative.

4. Alan Turing & John von Neumann (Foundations of Computing):
While not direct contributors to modern machine learning, the legacies of Turing and von Neumann persist in every conversation about alternative architectures and the physical limits of computation. The post-von Neumann and post-Turing trajectory, with a return to analogue, stochastic, or sampling-based circuitry, is directly echoed in Extropic’s work.

5. Recent Industry Visionaries (e.g., Sam Altman, Jensen Huang):
Contemporary leaders in the AI infrastructure space—such as Altman of OpenAI and Huang of Nvidia—have articulated the scale required for AGI and the daunting reality of terawatt-scale compute. Their business strategies rely on the assumption that improved digital hardware will be sufficient, a view McCourt contests with data and physical models.

Strategic & Scientific Context for the Field

  • Core problem: AI’s energy demand scales non-linearly—mass-market AI could consume a significant fraction, or even multiples, of the entire global grid if naively scaled with today’s architectures.
  • Physics bottlenecks: Improvements in digital logic are limited by physical constants: capacitance, voltage, and the energy required for irreversible computation. Digital logic has plateaued at the 10nm node.
  • Algorithmic evolution: Traditional deep learning is rooted in deterministic matrix computations, but the true statistical nature of intelligence calls for sampling from complex distributions—as foregrounded in Hinton’s work and now implemented in Extropic’s TSUs.
  • Paradigm shift: McCourt and contemporaries argue for a transition to native hardware–software co-design where the core computational primitive is no longer the multiply–accumulate (MAC) operation, but energy-efficient probabilistic sampling.

Summary Insight

Trevor McCourt anchors his cautionary prognosis for AI’s future on rigorous cross-disciplinary insights—from physical hardware limits to probabilistic learning theory. By combining his own engineering prowess with the legacy of foundational theorists and contemporary thinkers, McCourt’s perspective is not simply one of warning but also one of opportunity: a new generation of probabilistic, thermodynamically inspired computers could rewrite the energy economics of artificial intelligence, making “AI for everyone” plausible—without grid-scale insanity.

Quote: Alex Karp – Palantir CEO

“The idea that chips and ontology is what you want to short is batsh*t crazy.” – Alex Karp – Palantir CEO

Alex Karp, co-founder and CEO of Palantir Technologies, delivered the now widely circulated statement, “The idea that chips and ontology is what you want to short is batsh*t crazy,” in response to famed investor Michael Burry’s high-profile short positions against both Palantir and Nvidia. This sharp retort came at a time when Palantir, an enterprise software and artificial intelligence (AI) powerhouse, had just reported record earnings and was under intense media scrutiny for its meteoric stock rise and valuation.

Context of the Quote

The remark was made in early November 2025 during a CNBC interview, following public disclosures that Michael Burry—of “The Big Short” fame—had taken massive short positions in Palantir and Nvidia, two companies at the heart of the AI revolution. Burry’s move, reminiscent of his contrarian bets during the 2008 financial crisis, was interpreted by the market as both a challenge to the soaring “AI trade” and a critique of the underlying economics fueling the sector’s explosive growth.

Karp’s frustration was palpable: not only was Palantir producing what he described as “anomalous” financial results—outpacing virtually all competitors in growth, cash flow, and customer retention—but it was also emerging as the backbone of data-driven operations across government and industry. For Karp, Burry’s short bet went beyond traditional market scepticism; it targeted firms, products (“chips” and “ontology”—the foundational hardware for AI and the architecture for structuring knowledge), and business models proven to be both technically indispensable and commercially robust. Karp’s rejection of the “short chips and ontology” thesis underscores his belief in the enduring centrality of the technologies underpinning the modern AI stack.

Backstory and Profile: Alex Karp

Alex Karp stands out as one of Silicon Valley’s true iconoclasts:

  • Background and Education: Born in New York City in 1967, Karp holds a philosophy degree from Haverford College, a JD from Stanford, and a PhD in social theory from Goethe University Frankfurt, where he studied under and wrote about the influential philosopher Jürgen Habermas. This rare academic pedigree—blending law, philosophy, and critical theory—deeply informs both his contrarian mindset and his focus on the societal impact of technology.
  • Professional Arc: Before founding Palantir in 2004 with Peter Thiel and others, Karp had forged a career in finance, running the London-based Caedmon Group. At Palantir, he crafted a unique culture and business model, combining a wellness-oriented, sometimes spiritual corporate environment with the hard-nosed delivery of mission-critical systems for Western security, defence, and industry.
  • Leadership and Philosophy: Karp is known for his outspoken, unconventional leadership. Unafraid to challenge both Silicon Valley’s libertarian ethos and what he views as the groupthink of academic and financial “expert” classes, he publicly identifies as progressive—yet separates himself from establishment politics, remaining both a supporter of the US military and a critic of mainstream left and right ideologies. His style is at once brash and philosophical, combining deep skepticism of market orthodoxy with a strong belief in the capacity of technology to deliver real-world, not just notional, value.
  • Palantir’s Rise: Under Karp, Palantir grew from a niche contractor to one of the world’s most important data analytics and AI companies. Palantir’s products are deeply embedded in national security, commercial analytics, and industrial operations, making the company essential infrastructure in the rapidly evolving AI economy.

Theoretical Background: ‘Chips’ and ‘Ontology’

Karp’s phrase pairs two of the foundational concepts in modern AI and data-driven enterprise:

  • Chips: Here, “chips” refers specifically to advanced semiconductors (such as Nvidia’s GPUs) that provide the computational horsepower essential for training and deploying cutting-edge machine learning models. The AI revolution is inseparable from advances in chip design, leading to historic demand for high-performance hardware.
  • Ontology: In computer and information science, “ontology” describes the formal structuring and categorising of knowledge—making data comprehensible, searchable, and actionable by algorithms. Robust ontologies enable organisations to unify disparate data sources, automate analytical reasoning, and achieve the “second order” efficiencies of AI at scale, as the toy sketch below illustrates.
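
As a minimal, hypothetical illustration of the idea: an ontology declares entity types and typed relations so that heterogeneous records can be queried uniformly. The class, relation, and entity names below are invented for the example and bear no relation to Palantir’s actual ontology product:

```python
# A toy ontology: declared classes and typed relations over plain facts.
ontology = {
    "classes": {"Supplier": "Organisation", "Shipment": "Event"},
    "relations": {"ships": ("Supplier", "Shipment")},  # domain, range
}

facts = [
    ("AcmeCorp", "is_a", "Supplier"),
    ("Shipment42", "is_a", "Shipment"),
    ("AcmeCorp", "ships", "Shipment42"),
]

def instances_of(cls):
    """All entities asserted to belong to a class."""
    return [s for s, p, o in facts if p == "is_a" and o == cls]

print(instances_of("Supplier"))  # ['AcmeCorp']
```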

Leading theorists in the domain of ontology and AI include:

  • John McCarthy: A founder of artificial intelligence, McCarthy’s foundational work on formal logic and semantics laid groundwork for modern ontological structures in AI.
  • Tim Berners-Lee: Creator of the World Wide Web, Berners-Lee developed the Semantic Web vision, championing knowledge structuring via ontologies so that data becomes machine-readable—a foundation widely viewed as indispensable for AI’s next leap.
  • Thomas Gruber: Known for his widely cited definition of ontology in AI as “a specification of a conceptualisation,” Gruber’s research shaped the field’s approach to standardising knowledge representations for complex applications.

In the chip space, pioneering figures include:

  • Jensen Huang: As CEO and co-founder of Nvidia, Huang drove the company’s transformation from graphics to AI acceleration, cementing the centrality of chips as the hardware substrate for everything from generative AI to advanced analytics.
  • Gordon Moore and Robert Noyce: Their early explorations in semiconductor fabrication set the stage for the exponential hardware progress that enabled the modern AI era.

Insightful Context for the Modern Market Debate

The “chips and ontology” remark reflects a deep divide in contemporary technology investing:

  • On one side, sceptics like Burry see signs of speculative excess, reminiscent of prior bubbles, and bet against companies with high valuations—even when those companies dominate core technologies fundamental to AI.
  • On the other, leaders like Karp argue that while the broad “AI trade” risks pockets of overvaluation, the engines—the computational hardware (chips) and the data-structuring logic (ontology)—are not just durable but irreplaceable in the digital economy.

With Palantir and Nvidia at the centre of the current AI-driven transformation, Karp’s comment captures not just a rebuttal to market short-termism, but a broader endorsement of the foundational technologies that define the coming decade. The value of “chips and ontology” is, in Karp’s eyes, anchored not in market narrative but in empirical results and business necessity—a perspective rooted in a unique synthesis of philosophy, technology, and radical pragmatism.

Stay up to date with the Global Advisors Whatsapp channel!

Follow our Global Advisors Whatsapp channel to get updates and news!

https://globaladvisors.biz/whatsapp

Quote: Fyodor Dostoevsky – Russian novelist, essayist and journalist

“A man who lies to himself, and believes his own lies becomes unable to recognize truth, either in himself or in anyone else, and he ends up losing respect for himself and for others. When he has no respect for anyone, he can no longer love, and, in order to divert himself, having no love in him, he yields to his impulses, indulges in the lowest forms of pleasure, and behaves in the end like an animal. And it all comes from lying – lying to others and to yourself.” – Fyodor Dostoevsky – Russian novelist, essayist and journalist

Fyodor Mikhailovich Dostoevsky (November 11, 1821 – February 9, 1881) was a Russian novelist, essayist, and journalist who explored the depths of the human psyche with unflinching honesty. Born in Moscow to a family of modest means, Dostoevsky had an early life marked by the emotional distance of his parents and, later, by the death of his father, who was reportedly murdered. He trained as a military engineer but pursued literature with relentless ambition, achieving early success with novels such as Poor Folk and The Double.

Dostoevsky’s life took a dramatic turn in 1849 when he was arrested for participating in a radical intellectual group. Sentenced to death, he faced a mock execution before his sentence was commuted to four years of hard labor in Siberia followed by military service. This harrowing experience, combined with his life among Russia’s poor, profoundly shaped his worldview and writing. His later years were marked by personal loss—the deaths of his first wife and his brother—and financial hardship, yet he produced some of literature’s greatest works during this time, including Crime and Punishment, The Idiot, Devils, and The Brothers Karamazov.

Dostoevsky’s writings are celebrated for their psychological insight and existential depth. He scrutinized themes of morality, free will, faith, and the consequences of self-deception—topics that continue to resonate in philosophy, theology, and modern psychology. His funeral drew thousands, reflecting his status as a national hero and one of Russia’s most influential thinkers.

Context of the Quote

The quoted passage is widely attributed to Dostoevsky, most notably appearing in The Brothers Karamazov, his final and perhaps most philosophically ambitious novel. The novel, published in serial form shortly before his death, wrestles with questions of faith, doubt, and the consequences of living a lie.

The quote is spoken by the Elder Zosima, a wise and compassionate monk in the novel. Zosima’s teachings in The Brothers Karamazov frequently address the dangers of self-deception and the importance of spiritual and moral honesty. In this passage, Dostoevsky is warning that lying to oneself is not merely a moral failing, but a fundamental corruption of perception and being. The progression—from dishonesty to self-deception, to the loss of respect for oneself and others, and ultimately to the decay of love and humanity—paints a stark picture of spiritual decline.

This theme is central to Dostoevsky’s work: characters who deceive themselves often spiral into psychological and moral crises. Dostoevsky saw truth—even when painful—as a prerequisite for authentic living. His novels repeatedly show how lies, whether to oneself or others, lead to alienation, suffering, and a loss of authentic connection.

Leading Theorists on Self-Deception

While Dostoevsky is renowned in literature for his treatment of self-deception, the theme has also been explored by philosophers, psychologists, and sociologists. Below is a brief overview of leading theorists and their contributions:

Philosophers

  • Søren Kierkegaard (1813–1855): The Danish philosopher explored existential self-deception, particularly in The Sickness Unto Death, where he describes how humans avoid the despair of being true to themselves by living inauthentic lives—what he calls the “despair of weakness.”
  • Jean-Paul Sartre (1905–1980): In Being and Nothingness, Sartre popularized the concept of “bad faith” (mauvaise foi), the act of deceiving oneself to avoid the anxiety of freedom and responsibility. Sartre’s ideas are often seen as a philosophical counterpart to Dostoevsky’s literary explorations.
  • Friedrich Nietzsche (1844–1900): Nietzsche’s concepts of “ressentiment” and the “will to power” also touch on self-deception, particularly how individuals and societies construct false narratives to justify their weaknesses or desires.

Psychologists

  • Sigmund Freud (1856–1939): Freud introduced the idea of defence mechanisms, such as denial and rationalization, as ways the psyche protects itself from uncomfortable truths—essentially systematizing the process of self-deception.
  • Donald Winnicott (1896–1971): The psychoanalyst discussed the “false self,” a persona developed to comply with external demands, often leading to inner conflict and emotional distress.
  • Erich Fromm (1900–1980): Fromm, like Dostoevsky, examined how modern society encourages escape from freedom and the development of “automaton conformity,” where individuals conform to avoid anxiety and uncertainty.

Modern Thinkers

  • Dan Ariely (b. 1967): The behavioural economist has shown experimentally how dishonesty often begins with small, self-serving lies that gradually erode ethical boundaries.
  • Robert Trivers (b. 1943): The evolutionary biologist proposed that self-deception evolved as a strategy to better deceive others, which ironically can make personal delusions more convincing.

Legacy and Insight

Dostoevsky’s insights into the dangers of self-deception remain remarkably relevant today. His work, together with that of philosophers and psychologists, invites reflection on the necessity of honesty—not just to others, but to oneself—for psychological health and authentic living. The consequences of failing this honesty, as Dostoevsky depicts, are not merely moral, but existential: they impact our ability to respect, love, and ultimately, to live fully human lives.

By placing this quote in context, we see not only the literary brilliance of Dostoevsky but also the enduring wisdom of his diagnosis of the human condition—a call to self-awareness that echoes through generations and disciplines.

Quote: Dee Hock

“An organisation, no matter how well designed, is only as good as the people who live and work in it.” – Dee Hock

Quote: James Cash Penney

“The keystone of successful business is cooperation. Friction retards progress.” – James Cash Penney

Quote: Paul J Meyer

“Communication – the human connection – is the key to personal and career success.” – Paul J. Meyer

Quote: Beverly Sills

“You may be disappointed if you fail, but you are doomed if you don’t try.” – Beverly Sills

Quote: Marc Benioff

“Innovation is not a destination; it’s a journey.” – Marc Benioff

Quote: Kristen Hadeed

“Lessons in leadership: Own your mistakes, celebrate your successes, and know your strengths.” – Kristen Hadeed

Quote: Guy Kawasaki

“The goal is to provide inspiring information that moves people to action.” – Guy Kawasaki

Quote: Daisy Gallagher

“Leadership is not only a title, it is a conviction to do the right thing and lead by example to those we serve.” – Daisy Gallagher

Quote: Brian Tracy

“The only real limitation on your abilities is the level of your desires. If you want it badly enough, there are no limits on what you can achieve.” – Brian Tracy

Quote: Marc Benioff

“There is no finish line when it comes to system reliability and availability, and our efforts to improve performance never cease.” – Marc Benioff

Quote: Jack Ma

“If you don’t give up, you still have a chance. Giving up is the greatest failure.” – Jack Ma

Quote: Ralph Nader

“The more you talk, the less you’ll have to say. The more you listen, the more sensible will be what you say.” – Ralph Nader

Quote: Warren Bennis

“More leaders have been made by accident, circumstance, sheer grit, or will than have been made by all the leadership courses put together.” – Warren Bennis
