
Global Advisors | Quantified Strategy Consulting

Quote: Alex Karp – Palantir CEO

“The idea that chips and ontology is what you want to short is batsh*t crazy.” – Alex Karp – Palantir CEO

Alex Karp, co-founder and CEO of Palantir Technologies, delivered the now widely circulated statement, “The idea that chips and ontology is what you want to short is batsh*t crazy,” in response to famed investor Michael Burry’s high-profile short positions against both Palantir and Nvidia. This sharp retort came at a time when Palantir, an enterprise software and artificial intelligence (AI) powerhouse, had just reported record earnings and was under intense media scrutiny for its meteoric stock rise and valuation.

Context of the Quote

The remark was made in early November 2025 during a CNBC interview, following public disclosures that Michael Burry—of “The Big Short” fame—had taken massive short positions in Palantir and Nvidia, two companies at the heart of the AI revolution. Burry’s move, reminiscent of his contrarian bets during the 2008 financial crisis, was interpreted by the market as both a challenge to the soaring “AI trade” and a critique of the underlying economics fueling the sector’s explosive growth.

Karp’s frustration was palpable: not only was Palantir producing what he described as “anomalous” financial results—outpacing virtually all competitors in growth, cash flow, and customer retention—but it was also emerging as the backbone of data-driven operations across government and industry. For Karp, Burry’s short bet went beyond traditional market scepticism; it targeted firms, products (“chips” and “ontology”—the foundational hardware for AI and the architecture for structuring knowledge), and business models proven to be both technically indispensable and commercially robust. Karp’s rejection of the “short chips and ontology” thesis underscores his belief in the enduring centrality of the technologies underpinning the modern AI stack.

Backstory and Profile: Alex Karp

Alex Karp stands out as one of Silicon Valley’s true iconoclasts:

  • Background and Education: Born in New York City in 1967, Karp holds a philosophy degree from Haverford College, a JD from Stanford, and a PhD in social theory from Goethe University Frankfurt, where he studied under and wrote about the influential philosopher Jürgen Habermas. This rare academic pedigree—blending law, philosophy, and critical theory—deeply informs both his contrarian mindset and his focus on the societal impact of technology.
  • Professional Arc: Before founding Palantir in 2004 with Peter Thiel and others, Karp had forged a career in finance, running the London-based Caedmon Group. At Palantir, he crafted a unique culture and business model, combining a wellness-oriented, sometimes spiritual corporate environment with the hard-nosed delivery of mission-critical systems for Western security, defence, and industry.
  • Leadership and Philosophy: Karp is known for his outspoken, unconventional leadership. Unafraid to challenge both Silicon Valley’s libertarian ethos and what he views as the groupthink of academic and financial “expert” classes, he publicly identifies as progressive—yet separates himself from establishment politics, remaining both a supporter of the US military and a critic of mainstream left and right ideologies. His style is at once brash and philosophical, combining deep skepticism of market orthodoxy with a strong belief in the capacity of technology to deliver real-world, not just notional, value.
  • Palantir’s Rise: Under Karp, Palantir grew from a niche contractor to one of the world’s most important data analytics and AI companies. Palantir’s products are deeply embedded in national security, commercial analytics, and industrial operations, making the company essential infrastructure in the rapidly evolving AI economy.

Theoretical Background: ‘Chips’ and ‘Ontology’

Karp’s phrase pairs two of the foundational concepts in modern AI and data-driven enterprise:

  • Chips: Here, “chips” refers specifically to advanced semiconductors (such as Nvidia’s GPUs) that provide the computational horsepower essential for training and deploying cutting-edge machine learning models. The AI revolution is inseparable from advances in chip design, leading to historic demand for high-performance hardware.
  • Ontology: In computer and information science, “ontology” describes the formal structuring and categorising of knowledge—making data comprehensible, searchable, and actionable by algorithms. Robust ontologies enable organisations to unify disparate data sources, automate analytical reasoning, and achieve the “second order” efficiencies of AI at scale.
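To make the ontology concept concrete, here is a hypothetical toy sketch (illustrative only, not any vendor’s actual schema or Palantir’s Ontology product) showing how typed entities and relations let records from disparate sources be unified and queried by software:

```python
# Toy ontology: typed entities plus relations, unifying two "data sources".
# All names here are illustrative, not any vendor's actual schema.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Entity:
    id: str
    type: str          # e.g. "Supplier", "Shipment"

@dataclass
class Ontology:
    entities: dict = field(default_factory=dict)
    relations: list = field(default_factory=list)   # (subject, predicate, object)

    def add_entity(self, id, type):
        self.entities[id] = Entity(id, type)

    def relate(self, subj, predicate, obj):
        self.relations.append((subj, predicate, obj))

    def query(self, predicate, obj_type):
        """Find subjects linked by `predicate` to any object of `obj_type`."""
        return [s for (s, p, o) in self.relations
                if p == predicate and self.entities[o].type == obj_type]

# Source A (e.g. an ERP record) and source B (e.g. a logistics feed)
# land under one shared schema, so one query spans both.
ont = Ontology()
ont.add_entity("sup-1", "Supplier")
ont.add_entity("ship-9", "Shipment")
ont.relate("sup-1", "fulfils", "ship-9")

print(ont.query("fulfils", "Shipment"))   # ['sup-1']
```

Once both sources share the same entity types and predicates, the same query works regardless of where each record originated, which is the “unify disparate data sources” property described above.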

Leading theorists in the domain of ontology and AI include:

  • John McCarthy: A founder of artificial intelligence, McCarthy’s foundational work on formal logic and semantics laid groundwork for modern ontological structures in AI.
  • Tim Berners-Lee: Creator of the World Wide Web, Berners-Lee developed the Semantic Web, championing knowledge structuring via ontologies so that data becomes machine-readable, a capability widely seen as indispensable for AI’s next leap.
  • Thomas Gruber: Known for his widely cited definition of ontology in AI as “a specification of a conceptualisation,” Gruber’s research shaped the field’s approach to standardising knowledge representations for complex applications.

In the chip space, pioneering figures include:

  • Jensen Huang: As CEO and co-founder of Nvidia, Huang drove the company’s transformation from graphics to AI acceleration, cementing the centrality of chips as the hardware substrate for everything from generative AI to advanced analytics.
  • Gordon Moore and Robert Noyce: Their early work in semiconductor fabrication set the stage for the exponential hardware progress that enabled the modern AI era.

Insightful Context for the Modern Market Debate

The “chips and ontology” remark reflects a deep divide in contemporary technology investing:

  • On one side, sceptics like Burry see signs of speculative excess, reminiscent of prior bubbles, and bet against companies with high valuations—even when those companies dominate core technologies fundamental to AI.
  • On the other, leaders like Karp argue that while the broad “AI trade” risks pockets of overvaluation, its engines—the computational hardware (chips) and data-structuring logic (ontology)—are not just durable, but irreplaceable in the digital economy.

With Palantir and Nvidia at the centre of the current AI-driven transformation, Karp’s comment captures not just a rebuttal to market short-termism, but a broader endorsement of the foundational technologies that define the coming decade. The value of “chips and ontology” is, in Karp’s eyes, anchored not in market narrative but in empirical results and business necessity—a perspective rooted in a unique synthesis of philosophy, technology, and radical pragmatism.

Quote: Sholto Douglas – Anthropic

“People have said we’re hitting a plateau every month for three years… I look at how models are produced and every part could be improved. The training pipeline is primitive, held together by duct tape, best efforts, and late nights. There’s so much room to grow everywhere.” – Sholto Douglas – Anthropic

Sholto Douglas made the statement during a major public podcast interview in October 2025, coinciding with Anthropic’s release of Claude Sonnet 4.5—at the time, the world’s strongest and most “agentic” AI coding model. The comment specifically rebuts repeated industry and media assertions that large AI models have reached a ceiling or are slowing in progress. Douglas argues the opposite: that the field is in a phase of accelerating advancement, driven by transformative hardware investment (a “compute super-cycle”), new algorithmic techniques (particularly reinforcement learning and test-time compute), and the persistent “primitive” state of today’s AI engineering infrastructure.

He draws an analogy with early-stage, improvisational systems: the models are held together “by duct tape, best efforts, and late nights,” making clear that immense headroom for improvement remains at every level, from training data pipelines and distributed infrastructure to model architecture and reward design. As a result, every new benchmark and capability reveals further unrealised opportunity, with measurable progress charted month after month.

Douglas’s deeper implication is that claims of a plateau often arise from surface-level analysis or the “saturation” of public benchmarks, not from a rigorous understanding of what is technically possible or how much scale remains untapped across the technical stack.

Sholto Douglas: Career Trajectory and Perspective

Sholto Douglas is a leading member of Anthropic’s technical staff, focused on scaling reinforcement learning and agentic AI. His unconventional journey illustrates both the new talent paradigm and the nature of breakthrough AI research today:

  • Early Life and Mentorship: Douglas grew up in Australia, where he benefited from unusually strong academic and athletic mentorship. His mother, an accomplished physician frustrated by systemic barriers, instilled discipline and a systematic approach; his Olympic-level fencing coach provided first-hand experience of how repeated, directed effort leads to world-class performance.
  • Academic Formation: He studied computer science and robotics as an undergraduate, with a focus on practical experimentation and a global mindset. A turning point was reading the “scaling hypothesis” for AGI, convincing him that progress on artificial general intelligence was feasible within a decade—and worth devoting his career to.
  • Independent Innovation: As a student, Douglas built “bedroom-scale” foundation models for robotics, working independently on large-scale data collection, simulation, and early adoption of transformer-based methods. This entrepreneurial approach—demonstrating initiative and technical depth without formal institutional backing—proved decisive.
  • Google (Gemini and DeepMind): His independent work brought him to Google, where he joined just before the release of ChatGPT, in time to witness and help drive the rapid unification and acceleration of Google’s AI efforts (Gemini, Brain, DeepMind). He co-designed new inference infrastructure that reduced costs and worked at the intersection of large-scale learning, reinforcement learning, and applied reasoning.
  • Anthropic (from 2025): Drawn by Anthropic’s focus on measurable, near-term economic impact and deep alignment work, Douglas joined to lead and scale reinforcement learning research—helping push the capability frontier for agentic models. He values a culture where every contributor understands and can articulate how their work advances both capability and safety in AI.

Douglas is distinctive for his advocacy of “taste” in AI research, favouring mechanistic understanding and simplicity over clever domain-specific tricks—a direct homage to Richard Sutton’s “bitter lesson.” This perspective shapes his belief that the greatest advances will come not from hiding complexity with hand-crafted heuristics, but from scaling general algorithms and rigorous feedback loops.

 

Intellectual and Scientific Context: The ‘Plateau’ Debate and Leading Theorists

The debate around the so-called “AI plateau” is best understood against the backdrop of core advances and recurring philosophical arguments in machine learning.

The “Bitter Lesson” and Richard Sutton

  • Richard Sutton (University of Alberta, DeepMind), one of the founding figures in reinforcement learning, crystallised the field’s “bitter lesson”: that general, scalable methods powered by increased compute will eventually outperform more elegant, hand-crafted, domain-specific approaches.
  • In practical terms, this means that the field’s recent leaps—from vision to language to coding—are powered less by clever new inductive biases, and more by architectural simplicity plus massive compute and data. Sutton has also maintained that real progress in AI will come from reinforcement learning with minimal task-specific assumptions and maximal data, computation, and feedback.

Yann LeCun and Alternative Paradigms

  • Yann LeCun (Meta, NYU), a pioneer of deep learning, has maintained that the transformer paradigm is limited and that fundamentally novel architectures are necessary for human-like reasoning and autonomy. He argues that unsupervised/self-supervised learning and new world-modelling approaches will be required.
  • LeCun’s disagreement with Sutton’s “bitter lesson” centres on the claim that scaling is not the final answer: new representation learning, memory, and planning mechanisms will be needed to reach AGI.

Shane Legg, Demis Hassabis, and DeepMind

  • DeepMind’s approach has historically been “science-first,” tackling a broad swathe of human intelligence challenges (AlphaGo, AlphaFold, science AI), promoting a research culture that takes long-horizon bets on new architectures (memory-augmented neural networks, world models, differentiable reasoning).
  • Demis Hassabis and Shane Legg (DeepMind co-founders) have advocated for testing a diversity of approaches, believing that the path to AGI is not yet clear—though they too acknowledge the value of massive scale and reinforcement learning.

The Scaling Hypothesis: Gwern’s Essay and the Modern Era

  • The so-called “scaling hypothesis”—the idea that simply making models larger and providing more compute and data will continue yielding improvements—has become the default “bet” for Anthropic, OpenAI, and others. Douglas refers directly to this intellectual lineage as the critical “hinge” moment that set his trajectory.
  • This hypothesis is now being extended into new areas, including agentic systems where long context, verification, memory, and reinforcement learning allow models to reliably pursue complex, multi-step goals semi-autonomously.
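The smooth, predictable improvement that the scaling hypothesis describes is usually modelled as a power law in compute. The sketch below illustrates the shape of such a curve; the constants are made up for illustration and are not the published fits:

```python
# Illustrative power-law scaling: loss falls smoothly as compute grows.
# L(C) = a * C**(-b); the constants a and b here are invented for illustration.

def loss(compute, a=10.0, b=0.05):
    """Hypothetical training loss as a power law in training compute."""
    return a * compute ** (-b)

# Each 10x increase in compute gives the same *multiplicative* loss
# reduction, which is why progress looks like a straight line on
# log-log axes -- the empirical signature of the scaling hypothesis.
for c in (1e18, 1e19, 1e20):
    print(f"compute={c:.0e}  loss={loss(c):.3f}")

# The ratio between successive decades of compute is constant: 10**(-b)
ratio = loss(1e19) / loss(1e18)
print(round(ratio, 4))   # 10**-0.05, roughly 0.8913
```

The constant per-decade ratio is the key property: as long as the power law holds, more compute buys a predictable improvement, which is why labs treat scaling as a “bet” rather than a gamble.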
 

Summing Up: The Current Frontier

Today, researchers like Douglas are moving beyond the original transformer pre-training paradigm, leveraging multi-axis scaling (pre-training, RL, test-time compute), richer reward systems, and continuous experimentation to drive model capabilities in coding, digital productivity, and emerging physical domains (robotics and manipulation).

Douglas’s quote epitomises the view that not only has performance not plateaued—every “limitation” encountered is a signpost for further exponential improvement. The modest, “patchwork” nature of current AI infrastructure is a competitive advantage: it means there is vast room for optimisation, iteration, and compounding gains in capability.

As the field races into a new era of agentic AI and economic impact, his perspective serves as a grounded, inside-out refutation of technological pessimism and a call to action grounded in both technical understanding and relentless ambition.

Quote: Julian Schrittwieser – Anthropic

“The talk about AI bubbles seemed very divorced from what was happening in frontier labs and what we were seeing. We are not seeing any slowdown of progress.” – Julian Schrittwieser – Anthropic

Those closest to technical breakthroughs are witnessing a pattern of sustained, compounding advancement that is often underestimated by commentators and investors. This perspective underscores both the power and limitations of conventional intuitions regarding exponential technological progress.

 

Context of the Quote

Schrittwieser delivered these remarks in a 2025 interview on the MAD Podcast, prompted by widespread discourse on the so-called ‘AI bubble’. His key contention is that debate around an AI investment or hype “bubble” feels disconnected from the lived reality inside the world’s top research labs, where the practical pace of innovation remains brisk and outwardly undiminished. He outlines that, according to direct observation and internal benchmarks at labs such as Anthropic, progress remains on a highly consistent exponential curve: “every three to four months, the model is able to do a task that is twice as long as before completely on its own”.

He draws an analogy to the early days of COVID-19, where exponential growth was invisible until it became overwhelming; the same mathematical processes, Schrittwieser contends, apply to AI system capabilities. While public narratives about bubbles often reference the dot-com era, he highlights a bifurcation: frontier labs sustain robust, revenue-generating trajectories, while the wider AI ecosystem may experience bubble-like effects in valuation. At the core, however, the technology itself continues to improve at a predictably exponential rate, well supported by both qualitative experience and benchmark data.

Schrittwieser’s view, rooted in immediate, operational knowledge, is that the default expectation of a linear future is mistaken: advances in autonomy, reasoning, and productivity are compounding. This means genuinely transformative impacts—such as AI agents that function at expert level or beyond for extended, unsupervised tasks—are poised to arrive sooner than many anticipate.
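Schrittwieser’s doubling claim compounds quickly. A short calculation (assuming a steady 3.5-month doubling period, a figure chosen between the quoted three and four months) shows why linear intuition underestimates it:

```python
# Compound growth of autonomous task length under a fixed doubling period.
# The 3.5-month period is an assumption between the quoted 3-4 months.

DOUBLING_MONTHS = 3.5

def task_length_multiple(months):
    """How many times longer a task the model can complete after `months`."""
    return 2 ** (months / DOUBLING_MONTHS)

# A linear forecast would predict steady additive gains; the compounding
# curve instead multiplies: roughly an order of magnitude per year.
for months in (12, 24, 36):
    print(f"after {months} months: ~{task_length_multiple(months):.0f}x longer tasks")
```

On this assumption, a model that can work unsupervised for an hour today would be handling multi-day tasks within two years, which is the gap between “linear future” expectations and the exponential Schrittwieser describes.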

 

Profile: Julian Schrittwieser

Julian Schrittwieser is one of the world’s leading artificial intelligence researchers, currently based at Anthropic, following a decade as a core scientist at Google DeepMind. Raised in rural Austria, Schrittwieser’s journey from an adolescent fascination with game programming to the vanguard of AI research exemplifies the discipline’s blend of curiosity, mathematical rigour, and engineering prowess. He studied computer science at the Vienna University of Technology, before interning at Google.

Schrittwieser was a central contributor to several historic machine learning milestones, most notably:

 
  • AlphaGo, the first program to defeat a world champion at Go, combining deep neural networks with Monte Carlo Tree Search.
  • AlphaGo Zero and AlphaZero, which generalised the approach to achieve superhuman performance without human examples, through self-play—demonstrating true generality in reinforcement learning.
  • MuZero (as lead author), solving the challenge of mastering environments without even knowing the rules in advance, by enabling the system to learn its own internal, predictive world models—an innovation bringing RL closer to complex, real-world domains.
  • Later work includes AlphaCode (code synthesis), AlphaTensor (algorithmic discovery), and applied advances in Gemini and AlphaProof.

At Anthropic, Schrittwieser is at the frontier of research into scaling laws, reinforcement learning, autonomous agents, and novel techniques for alignment and safety in next-generation AI. True to his pragmatic ethos, he prioritises what directly raises capability and reliability, and advocates for careful, data-led extrapolation rather than speculation.

 

Theoretical Backstory: Exponential AI Progress and Key Thinkers

Schrittwieser’s remarks situate him within a tradition of AI theorists and builders focused on scaling laws, reinforcement learning (RL), and emergent capabilities:

Leading Theorists and Historical Perspective

  • Demis Hassabis: Founder of DeepMind and architect of the AlphaGo programme; emphasised general intelligence and the power of RL plus planning. Relevance: Schrittwieser’s mentor and DeepMind leader, who pioneered RL paradigms beyond games.
  • David Silver: Developed many of the breakthroughs underlying AlphaGo, AlphaZero, and MuZero; advanced RL and model-based search methods. Relevance: Collaborator with Schrittwieser; together they demonstrated practical scaling of RL.
  • Richard Sutton: Articulated reinforcement learning’s centrality in “The Bitter Lesson” (general methods and scalable computation, not handcrafting); advanced temporal difference methods and RL theory. Relevance: Cited by Schrittwieser as a thought leader shaping the RL paradigm at scale.
  • Alex Ray, Jared Kaplan, Sam McCandlish, and the OpenAI scaling team: Quantified AI’s “scaling laws”—the empirical tendency for model performance to improve smoothly with compute, data, and parameter scaling. Relevance: Schrittwieser echoes this data-driven, incrementalist philosophy.
  • Ilya Sutskever: Co-founder of OpenAI; central to deep learning breakthroughs, scaling, and forecasting emergent capabilities. Relevance: OpenAI’s work on benchmarks (GDPval) and scaling echoes these insights.

These thinkers converge on several key observations directly reflected in Schrittwieser’s view:

  • Exponential Capability Curves: Consistent advances in performance often surprise those outside the labs due to our poor intuitive grasp of exponentiality—what Schrittwieser terms a repeated “failure to understand the exponential”.
  • Scaling Laws and Reinforcement Learning: Improvements are not just about larger models, but ever-better training, more reliable reinforcement learning, agentic architecture, and robust reward systems—developments Schrittwieser’s work epitomises.
  • Novelty and Emergence: Historically, theorists doubted whether neural models could go beyond sophisticated mimicry; the “Move 37” moment (AlphaGo’s unprecedented move in Go) was a touchstone for true machine creativity, a theme Schrittwieser stresses remains highly relevant today.
  • Bubbles, Productivity, and Market Cycles: Mainstream financial and social narratives may oscillate dramatically, but real capability growth—observable in benchmarks and direct use—has historically marched on undeterred by speculative excesses.
 

Synthesis: Why the Perspective Matters

The quote foregrounds a gap between external perceptions and insider realities. Pioneers like Schrittwieser and his cohort stress that transformative change will not follow a smooth, linear or hype-driven curve, but an exponential, data-backed progression—one that may defy conventional intuition, but is already reshaping productivity and the structure of work.

This moment is not about “irrational exuberance”, but rather the compounding product of theoretical insight, algorithmic audacity, and relentless engineering: the engine behind the next wave of economic and social transformation.
