
Global Advisors | Quantified Strategy Consulting

Nvidia
Quote: Alex Karp – Palantir CEO

“The idea that chips and ontology is what you want to short is batsh*t crazy.” – Alex Karp, Palantir CEO

Alex Karp, co-founder and CEO of Palantir Technologies, delivered the now widely circulated statement, “The idea that chips and ontology is what you want to short is batsh*t crazy,” in response to famed investor Michael Burry’s high-profile short positions against both Palantir and Nvidia. This sharp retort came at a time when Palantir, an enterprise software and artificial intelligence (AI) powerhouse, had just reported record earnings and was under intense media scrutiny for its meteoric stock rise and valuation.

Context of the Quote

The remark was made in early November 2025 during a CNBC interview, following public disclosures that Michael Burry—of “The Big Short” fame—had taken massive short positions in Palantir and Nvidia, two companies at the heart of the AI revolution. Burry’s move, reminiscent of his contrarian bets ahead of the 2008 financial crisis, was interpreted by the market as both a challenge to the soaring “AI trade” and a critique of the underlying economics fueling the sector’s explosive growth.

Karp’s frustration was palpable: not only was Palantir producing what he described as “anomalous” financial results—outpacing virtually all competitors in growth, cash flow, and customer retention—but it was also emerging as the backbone of data-driven operations across government and industry. For Karp, Burry’s short bet went beyond traditional market scepticism; it targeted firms, products (“chips” and “ontology”—the foundational hardware for AI and the architecture for structuring knowledge), and business models proven to be both technically indispensable and commercially robust. Karp’s rejection of the “short chips and ontology” thesis underscores his belief in the enduring centrality of the technologies underpinning the modern AI stack.

Backstory and Profile: Alex Karp

Alex Karp stands out as one of Silicon Valley’s true iconoclasts:

  • Background and Education: Born in New York City in 1967, Karp holds a philosophy degree from Haverford College, a JD from Stanford, and a PhD in social theory from Goethe University Frankfurt, where he studied under and wrote about the influential philosopher Jürgen Habermas. This rare academic pedigree—blending law, philosophy, and critical theory—deeply informs both his contrarian mindset and his focus on the societal impact of technology.
  • Professional Arc: Before founding Palantir in 2004 with Peter Thiel and others, Karp had forged a career in finance, running the London-based Caedmon Group. At Palantir, he crafted a unique culture and business model, combining a wellness-oriented, sometimes spiritual corporate environment with the hard-nosed delivery of mission-critical systems for Western security, defence, and industry.
  • Leadership and Philosophy: Karp is known for his outspoken, unconventional leadership. Unafraid to challenge both Silicon Valley’s libertarian ethos and what he views as the groupthink of academic and financial “expert” classes, he publicly identifies as progressive—yet separates himself from establishment politics, remaining both a supporter of the US military and a critic of mainstream left and right ideologies. His style is at once brash and philosophical, combining deep skepticism of market orthodoxy with a strong belief in the capacity of technology to deliver real-world, not just notional, value.
  • Palantir’s Rise: Under Karp, Palantir grew from a niche contractor to one of the world’s most important data analytics and AI companies. Palantir’s products are deeply embedded in national security, commercial analytics, and industrial operations, making the company essential infrastructure in the rapidly evolving AI economy.

Theoretical Background: ‘Chips’ and ‘Ontology’

Karp’s phrase pairs two of the foundational concepts in modern AI and data-driven enterprise:

  • Chips: Here, “chips” refers specifically to advanced semiconductors (such as Nvidia’s GPUs) that provide the computational horsepower essential for training and deploying cutting-edge machine learning models. The AI revolution is inseparable from advances in chip design, leading to historic demand for high-performance hardware.
  • Ontology: In computer and information science, “ontology” describes the formal structuring and categorising of knowledge—making data comprehensible, searchable, and actionable by algorithms. Robust ontologies enable organisations to unify disparate data sources, automate analytical reasoning, and achieve the “second order” efficiencies of AI at scale (a minimal illustrative sketch follows below).
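
To make the idea concrete, the sketch below shows, in plain Python, how an ontology-style model of entity types, relations, and facts can unify records that originally lived in separate systems. It is a generic, minimal illustration using invented names; it is not Palantir’s data model or API.

```python
# Minimal illustrative ontology: entity types, relations, and facts as triples.
# All classes, names, and records here are hypothetical examples, not a real schema.
from dataclasses import dataclass

@dataclass(frozen=True)
class Entity:
    name: str
    entity_type: str  # e.g. "Supplier", "Shipment", "Factory"

# Facts are (subject, relation, object) triples -- the basic unit many
# ontology and knowledge-graph systems build on.
facts = set()

def assert_fact(subject: Entity, relation: str, obj: Entity) -> None:
    facts.add((subject, relation, obj))

def query(relation: str, obj: Entity) -> list[Entity]:
    """Return every subject linked to `obj` by `relation`."""
    return [s for (s, r, o) in facts if r == relation and o == obj]

# Unify records that originally sat in separate systems (ERP, logistics, MES).
acme = Entity("Acme Alloys", "Supplier")
plant_7 = Entity("Plant 7", "Factory")
shipment = Entity("PO-1042", "Shipment")

assert_fact(shipment, "supplied_by", acme)
assert_fact(shipment, "delivered_to", plant_7)

# An algorithm (or an AI agent) can now reason over structured relations
# instead of raw, disconnected tables.
print(query("delivered_to", plant_7))  # -> [Entity(name='PO-1042', entity_type='Shipment')]
```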

Leading theorists in the domain of ontology and AI include:

  • John McCarthy: A founder of artificial intelligence, McCarthy’s foundational work on formal logic and semantics laid the groundwork for modern ontological structures in AI.
  • Tim Berners-Lee: Creator of the World Wide Web, Berners-Lee championed the Semantic Web, which structures knowledge via ontologies so that data becomes machine-readable—a capability widely regarded as indispensable for AI’s next leap.
  • Thomas Gruber: Known for his widely cited definition of an ontology in AI as “an explicit specification of a conceptualisation”, Gruber’s research shaped the field’s approach to standardising knowledge representations for complex applications.

In the chip space, pioneering figures include:

  • Jensen Huang: As CEO and co-founder of Nvidia, Huang drove the company’s transformation from graphics to AI acceleration, cementing the centrality of chips as the hardware substrate for everything from generative AI to advanced analytics.
  • Gordon Moore and Robert Noyce: Their early explorations in semiconductor fabrication set the stage for the exponential hardware progress that enabled the modern AI era.

Insightful Context for the Modern Market Debate

The “chips and ontology” remark reflects a deep divide in contemporary technology investing:

  • On one side, sceptics like Burry see signs of speculative excess, reminiscent of prior bubbles, and bet against companies with high valuations—even when those companies dominate core technologies fundamental to AI.
  • On the other, leaders like Karp argue that while the broad “AI trade” risks pockets of overvaluation, its engines—the computational hardware (chips) and the data-structuring logic (ontology)—are not just durable but irreplaceable in the digital economy.

With Palantir and Nvidia at the centre of the current AI-driven transformation, Karp’s comment captures not just a rebuttal to market short-termism, but a broader endorsement of the foundational technologies that define the coming decade. The value of “chips and ontology” is, in Karp’s eyes, anchored not in market narrative but in empirical results and business necessity—a perspective rooted in a unique synthesis of philosophy, technology, and radical pragmatism.

Quote: Jensen Huang – CEO Nvidia

“Oftentimes, if you reason about things from first principles, what’s working today incredibly well — if you could reason about it from first principles and ask yourself on what foundation that first principle is built and how that would change over time — it allows you to hopefully see around corners.” – Jensen Huang – CEO Nvidia

Jensen Huang’s quote was delivered in the context of an in-depth dialogue with institutional investors on the trajectory of Nvidia, the evolution of artificial intelligence, and strategies for anticipating and shaping the technological future.

Context of the Quote

The quote was made during an interview at a Citadel Securities event in October 2025, hosted by Konstantine Buhler, a partner at Sequoia Capital. The dialogue’s audience consisted of leading institutional investors, all seeking avenues for sustainable advantage or ‘edge’. The conversation explored the founding moments of Nvidia in the early 1990s, through the reinvention of the graphics processing unit (GPU), the creation of new computing markets, and the subsequent rise of Nvidia as the platform underpinning the global AI boom. The question of how to ‘see around corners’ — to anticipate technology and industry shifts before they crystallise for others — was at the core of the discussion. Huang’s answer, invoking first-principles reasoning, linked Nvidia’s success to its ability to continually revisit and challenge foundational assumptions, and to methodically project how they will be redefined by progress in science and technology.

Jensen Huang: Profile and Approach

Jensen Huang, born in Tainan, Taiwan in 1963, immigrated to the United States as a child, experiencing the formative challenges of cultural dislocation, financial hardship, and adversity. He obtained his undergraduate degree in electrical engineering from Oregon State University and a master’s from Stanford University. After working at AMD and LSI Logic, he co-founded Nvidia in 1993 at the age of 30, reportedly at a Denny’s restaurant. From the outset, the company faced daunting odds: it had neither an established market nor assured funding, and it ran frequent existential risk in its initial years.

Huang is distinguished not only by technical fluency — he is deeply involved in hardware and software architecture — but also by an ability to translate complexity for diverse audiences. He eschews corporate formality in favour of trademark leather jackets and a focus on product. His leadership style is marked by humility, a willingness to bet on emerging ideas, and what he describes as “urgent innovation” born of early near-failure. This disposition has been integral to Nvidia’s progress, especially as the company repeatedly “invented markets” and defined entirely new categories, such as accelerated computing and AI infrastructure.

In 2024, Nvidia became the world’s most valuable public company, with its GPUs foundational to gaming, scientific computing, and, critically, the rise of AI. Huang’s awards — from the IEEE Founder’s Medal to inclusion in Time magazine’s list of the 100 most influential people — underscore his reputation as a technologist and strategic thinker. He is widely recognised for being able to establish technical direction well before it becomes market consensus, an approach reflected in the quote.

First-Principles Thinking: Theoretical Foundations

Huang’s endorsement of “first principles” echoes a method of problem-solving and innovation associated with thinkers as diverse as Aristotle, Isaac Newton, and, in the modern era, entrepreneurs and strategists such as Elon Musk. The essence of first-principles thinking is to break down complex systems to their most fundamental truths — concepts that cannot be deduced from anything simpler — and to reason forward from those axioms, unconstrained by traditional assumptions, analogies, or received wisdom.

  • Aristotle coined the term “first principles”, distinguishing knowledge derived from irreducible foundational truths from knowledge obtained through analogy or precedent.
  • René Descartes advocated for systematic doubt and logical rebuilding of knowledge from foundational elements.
  • Richard Feynman, the physicist, was famous for urging students to “understand from first principles”, encouraging deep understanding and avoidance of rote memorisation or mere pattern recognition.
  • Elon Musk is often cited as a contemporary example, applying first-principles thinking to industries as varied as automotive (Tesla), space (SpaceX), and energy. Musk has described the technique as “boiling things down to the most fundamental truths and then reasoning up from there,” directly influencing not just product architectures but also cost models and operational methods (a toy cost decomposition follows below).
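
The cost-model angle can be illustrated with a toy calculation: strip a product’s market price down to its irreducible input costs, then treat everything above that floor as convention, process, or margin that can potentially be re-engineered. Every figure below is an invented assumption for illustration only.

```python
# Toy first-principles cost decomposition. All numbers are invented assumptions.
raw_materials_usd = {        # assumed commodity cost per finished unit
    "metal": 35.0,
    "electronics": 22.0,
    "plastics": 8.0,
}
market_price_usd = 240.0     # assumed prevailing price of the finished product

floor = sum(raw_materials_usd.values())   # the physics/commodity floor
gap = market_price_usd - floor            # convention, process, and margin

print(f"First-principles cost floor: ${floor:.2f}")
print(f"Price above that floor:      ${gap:.2f} "
      f"({gap / market_price_usd:.0%} of the price is not raw material)")
```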

Application in Technology and AI

First-principles thinking is particularly powerful in periods of technological transition:

  • In computing, first principles were invoked by Carver Mead and Lynn Conway, who reimagined the semiconductor industry in the 1970s by establishing the foundational laws for microchip design, known as Mead-Conway methodology. This approach was cited by Huang as influential for predicting the physical limitations of transistor miniaturisation and motivating Nvidia’s focus on accelerated computing.
  • Clayton Christensen, cited by Huang as an influence, introduced the idea of disruptive innovation, arguing that market leaders must question incumbent logic and anticipate non-linear shifts in technology. His books on disruption and innovation strategy have shaped how leaders approach structural shifts and avoid the “innovator’s dilemma”.
  • The leap from von Neumann architectures to parallel, heterogeneous, and ultimately AI-accelerated computing frameworks — as pioneered by Nvidia’s CUDA platform and deep learning libraries — was possible because leaders at Nvidia systematically revisited underlying assumptions about how computation should be structured for new workloads, rather than simply iterating on the status quo (a serial-versus-parallel sketch follows this list).
  • The AI revolution itself was catalysed by the “deep learning” paradigm, championed by Geoffrey Hinton, Yann LeCun, and Andrew Ng. Each demonstrated that previous architectures, which had reached plateaus, could be superseded by entirely new approaches, provided there was willingness to reinterpret the problem from mathematical and computational fundamentals.
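
The serial-versus-parallel distinction can be shown, very loosely, in plain Python and NumPy rather than CUDA itself: the explicit loop fixes an element-at-a-time execution order, while the whole-array formulation expresses the same work as one bulk operation that parallel hardware can spread across many execution lanes. The function names and array sizes are arbitrary.

```python
# Toy illustration of restructuring a computation from serial to data-parallel form.
import numpy as np

def saxpy_serial(a: float, x, y):
    # One element at a time: the execution order is baked into the loop.
    out = [0.0] * len(x)
    for i in range(len(x)):
        out[i] = a * x[i] + y[i]
    return out

def saxpy_bulk(a: float, x: np.ndarray, y: np.ndarray) -> np.ndarray:
    # Whole-array formulation: the same arithmetic expressed as one bulk
    # operation that maps naturally onto many parallel execution lanes.
    return a * x + y

x = np.arange(100_000, dtype=np.float32)
y = np.ones_like(x)
assert np.allclose(saxpy_serial(2.0, x, y)[:5], saxpy_bulk(2.0, x, y)[:5])
```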

Backstory of the Leading Theorists

The ecosystem that enabled Nvidia’s transformation is shaped by a series of foundational theorists:

  • Mead and Conway: Their 1979 textbook and methodologies codified the “first-principles” approach in chip design, allowing for the explosive growth of Silicon Valley’s fabless innovation model.
  • Gordon Moore: Moore’s Law, while originally an empirical observation, inspired decades of innovation, but its eventual slow-down prompted leaders such as Huang to look for new “first principles” to govern progress, beyond mere transistor scaling.
  • Clayton Christensen: His disruption theory is foundational in understanding why entire industries fail to see the next shift — and how those who challenge orthodoxy from first principles are able to “see around corners”.
  • Geoffrey Hinton, Yann LeCun, Andrew Ng: These pioneers directly enabled the deep learning revolution by returning to first principles on how learning — both human and artificial — could function at scale. Their work with neural networks, widely doubted after earlier “AI winters”, was vindicated with landmark results like AlexNet (2012), enabled by Nvidia GPUs.

Implications

Jensen Huang’s quote is neither idle philosophy nor abstract advice — it is a methodology proven repeatedly by his own journey and by the history of technology. It is a call to scrutinise assumptions, break complex structures down to their most elemental truths, and reconstruct strategy consciously from the bedrock of what is not likely to change, while also asking on what foundation those principles rest and how those foundations themselves will evolve.

Organisations and individuals who internalise this approach are equipped not only to compete in current markets, but to invent new ones — to anticipate and shape the next paradigm, rather than reacting to it.

Quote: Yann LeCun

“Most of the infrastructure cost for AI is for inference: serving AI assistants to billions of people.”
— Yann LeCun, VP & Chief AI Scientist at Meta

Yann LeCun made this comment in response to the sharp drop in Nvidia’s share price on January 27, 2025, following the launch of DeepSeek-R1, a new AI reasoning model developed by DeepSeek. The model was reportedly trained at a fraction of the cost incurred by leading AI labs such as OpenAI, Anthropic, and Google DeepMind, raising questions about whether Nvidia’s dominance in AI compute was at risk.

The market reaction stemmed from speculation that the training costs of cutting-edge AI models—previously seen as a key driver of Nvidia’s GPU demand—could decrease significantly with more efficient methods. However, LeCun pointed out that most AI infrastructure costs come not from training but from inference, the process of running AI models at scale to serve billions of users. This suggests that Nvidia’s long-term demand may remain strong, as inference still relies heavily on high-performance GPUs.
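
A rough back-of-envelope calculation makes the point concrete. Every figure below is a hypothetical assumption chosen only to illustrate the scaling argument; none is a reported number from Meta, Nvidia, or any AI lab.

```python
# Back-of-envelope: one-off training cost vs ongoing inference cost at scale.
# All numbers are illustrative assumptions, not reported figures.

train_cost_usd = 50_000_000        # assumed one-time cost to train a frontier model

users = 1_000_000_000              # assumed people served by an AI assistant
queries_per_user_per_day = 5       # assumed average usage
cost_per_query_usd = 0.0002        # assumed serving (GPU) cost per query

daily_inference_usd = users * queries_per_user_per_day * cost_per_query_usd
annual_inference_usd = daily_inference_usd * 365

print(f"Daily inference spend:  ${daily_inference_usd:,.0f}")
print(f"Annual inference spend: ${annual_inference_usd:,.0f}")
print(f"Training cost equals {train_cost_usd / daily_inference_usd:.0f} days of inference spend")
```

Under these assumed figures the one-off training bill is matched by serving costs within weeks; the exact ratio is sensitive to the assumptions, but the direction of the argument is what LeCun is pointing at.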

LeCun’s view aligned with analyses from key AI investors and industry leaders. He supported the argument made by Antoine Blondeau, co-founder of Alpha Intelligence Capital, who described Nvidia’s stock drop as “vastly overblown” and “NOT a ‘Sputnik moment’”, pushing back on the idea that Nvidia’s market position was insecure. Additionally, Jonathan Ross, founder of Groq, shared a video titled “Why $500B isn’t enough for AI,” explaining why AI compute demand remains insatiable despite efficiency gains.

This discussion underscores a critical aspect of AI economics: while training costs may drop with better algorithms and hardware, the sheer scale of inference workloads—powering AI assistants, chatbots, and generative models for billions of users—remains a dominant and growing expense. This supports the case for sustained investment in AI infrastructure, particularly in Nvidia’s GPUs, which continue to be the gold standard for inference at scale.

Quote: Marc Andreessen

“DeepSeek-R1 is AI’s Sputnik moment.” – Marc Andreessen, Andreessen Horowitz

In a 27th January 2025 X statement that sent shockwaves through the tech community, venture capitalist Marc Andreessen declared that DeepSeek’s R1 AI reasoning model is “AI’s Sputnik moment.” This analogy draws parallels between China’s breakthrough in artificial intelligence and the Soviet Union’s historic achievement of launching the first satellite into orbit in 1957.

The Rise of DeepSeek-R1

DeepSeek, a Chinese AI lab, has made headlines with its open-source release of R1, a reasoning model that is not only markedly more cost-efficient but also poses a significant threat to the dominance of Western tech giants. The model’s reported ability to halve compute requirements without sacrificing accuracy has rattled the industry.

A New Era in AI

The release of DeepSeek-R1 marks a turning point in the AI arms race, as it challenges the long-held assumption that only a select few companies can compete in this space. By making its research open-source, DeepSeek is empowering anyone to build their own version of R1 and tailor it to their needs.

Implications for Megacap Stocks

The success of DeepSeek-R1 has significant implications for megacap stocks like Microsoft, Alphabet, and Amazon, which have long relied on proprietary AI models to maintain their technological advantage. The open-source nature of R1 threatens to wipe out this advantage, potentially disrupting the business models of these tech giants.

Nvidia’s Nightmare

The news comes as a blow to Nvidia CEO Jensen Huang, who is ramping up production of the Blackwell chip, a more advanced successor to the company’s industry-leading Hopper-series H100s. Nvidia controls roughly 90% of the AI semiconductor market, but R1’s ability to reduce compute requirements may render these chips less essential.

A New Era of Innovation

Perplexity AI founder Aravind Srinivas praised DeepSeek’s team for catching up to the West through clever engineering, including the use of 8-bit floating-point (FP8) arithmetic in place of higher-precision number formats. This innovation not only reduces costs but also demonstrates that China is no longer just a copycat, but a leader in AI innovation.
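
Lower-precision formats matter because every parameter stored and moved at lower precision costs proportionally less memory and bandwidth. The sketch below compares the weight-storage footprint of a hypothetical 100-billion-parameter model at three precisions; the parameter count is an assumption for illustration, not DeepSeek’s actual figure.

```python
# Memory footprint of model weights at different numeric precisions.
# The 100-billion-parameter count is an illustrative assumption.
params = 100_000_000_000

bytes_per_value = {"FP32": 4, "FP16/BF16": 2, "FP8": 1}

for fmt, nbytes in bytes_per_value.items():
    gib = params * nbytes / 2**30
    print(f"{fmt:>9}: {gib:,.0f} GiB of weight storage")
```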

Quote: Jeffrey Emanuel

“With R1, DeepSeek essentially cracked one of the holy grails of AI: getting models to reason step-by-step without relying on massive supervised datasets.” – Jeffrey Emanuel

Jeffrey Emanuel’s statement (“The Short Case for Nvidia Stock” – 25th January 2025) highlights a groundbreaking achievement in AI with DeepSeek’s R1 model, which has made significant strides in enabling step-by-step reasoning without the traditional reliance on vast supervised datasets:

  1. Innovation Through Reinforcement Learning (RL):
    • The R1 model employs reinforcement learning, a method where models learn through trial and error with feedback. This approach reduces the dependency on large labeled datasets typically required for training, making it more efficient and accessible.
  2. Advanced Reasoning Capabilities:
    • R1 excels in tasks requiring logical inference and mathematical problem-solving. Its ability to demonstrate step-by-step reasoning is crucial for complex decision-making processes, applicable across various industries from autonomous systems to intricate problem-solving tasks.
  3. Efficiency and Accessibility:
    • By utilizing RL and knowledge distillation techniques, R1 efficiently transfers learning to smaller models. This democratizes AI technology, allowing global researchers and developers to innovate without proprietary barriers, thus expanding the reach of advanced AI solutions (a minimal distillation sketch follows this list).
  4. Impact on Data-Scarce Industries:
    • The model’s capability to function with limited data is particularly beneficial in sectors like medicine and finance, where labeled data is scarce due to privacy concerns or high costs. This opens doors for more ethical and feasible AI applications in these fields.
  5. Competitive Landscape and Innovation:
    • R1 positions itself as a competitor to models like OpenAI’s o1, signaling a shift towards accessible AI technology. This fosters competition and encourages other companies to innovate similarly, driving advancements across the AI landscape.
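
To illustrate the distillation idea in point 3, here is a minimal sketch of teacher-to-student knowledge distillation on a synthetic classification task: a small “student” model is trained to match the softened output distribution of a “teacher”. This is the generic textbook formulation, not DeepSeek’s training code, and all shapes and hyperparameters are arbitrary.

```python
# Minimal knowledge-distillation sketch: a student model learns to reproduce a
# teacher's softened predictions. Generic formulation with synthetic data.
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, temperature=1.0):
    z = z / temperature
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

n, d, k = 512, 20, 5                      # samples, input dim, classes
X = rng.normal(size=(n, d))
W_teacher = rng.normal(size=(d, k))       # the fixed "teacher" (a random linear classifier)
T = 3.0                                   # distillation temperature softens the targets
teacher_probs = softmax(X @ W_teacher, temperature=T)

# Student: trained by gradient descent to minimise cross-entropy against the
# teacher's soft targets rather than against scarce human-labelled data.
W_student = np.zeros((d, k))
lr = 0.5
for step in range(300):
    student_probs = softmax(X @ W_student, temperature=T)
    grad = X.T @ (student_probs - teacher_probs) / n   # cross-entropy gradient
    W_student -= lr * grad

agree = (teacher_probs.argmax(1) == softmax(X @ W_student).argmax(1)).mean()
print(f"Student matches the teacher's top prediction on {agree:.0%} of examples")
```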

In essence, DeepSeek’s R1 model represents a significant leap in AI efficiency and accessibility, offering profound implications for various industries by reducing data dependency and enhancing reasoning capabilities.

Quote: Jensen Huang

“Software is eating the world, but AI is going to eat software.”

Jensen Huang
CEO, Nvidia

Quote: Jensen Huang

“The most powerful technologies are the ones that empower others.”

Jensen Huang
CEO, Nvidia

Quote: Jensen Huang

“Never stop asking questions and seeking answers. Curiosity fuels progress.”

Jensen Huang
CEO, Nvidia

Quote: Jensen Huang

“Smart people focus on the right things.”

Jensen Huang
CEO, Nvidia
