
Global Advisors | Quantified Strategy Consulting

Nvidia
Quote: Yann LeCun

“Most of the infrastructure cost for AI is for inference: serving AI assistants to billions of people.”
— Yann LeCun, VP & Chief AI Scientist at Meta

Yann LeCun made this comment in response to the sharp drop in Nvidia’s share price on January 27, 2025, following the launch of DeepSeek-R1, a new AI reasoning model developed by the Chinese lab DeepSeek. The model was reportedly trained at a fraction of the cost incurred by leading labs such as OpenAI, Anthropic, and Google DeepMind, raising questions about whether Nvidia’s dominance in AI compute was at risk.

The market reaction stemmed from speculation that the training costs of cutting-edge AI models—previously seen as a key driver of Nvidia’s GPU demand—could decrease significantly with more efficient methods. However, LeCun pointed out that most AI infrastructure costs come not from training but from inference, the process of running AI models at scale to serve billions of users. This suggests that Nvidia’s long-term demand may remain strong, as inference still relies heavily on high-performance GPUs.
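To make LeCun’s argument concrete, here is a rough back-of-envelope sketch in Python. Every figure in it is a hypothetical assumption for illustration only; none of the numbers come from Meta, Nvidia, or the quote itself.

```python
# Illustration only: all figures are hypothetical assumptions, not actual
# Meta, OpenAI, or Nvidia economics.

TRAINING_COST = 100e6          # assume a one-off $100M training run
COST_PER_QUERY = 0.0002        # assume $0.0002 of GPU time per served query
USERS = 1e9                    # "billions of people", per LeCun's framing
QUERIES_PER_USER_PER_DAY = 10  # assumed usage level

daily_inference = COST_PER_QUERY * USERS * QUERIES_PER_USER_PER_DAY
annual_inference = daily_inference * 365

print(f"One-off training cost:      ${TRAINING_COST:,.0f}")
print(f"Annual inference cost:      ${annual_inference:,.0f}")
print(f"Inference / training ratio: {annual_inference / TRAINING_COST:.1f}x")
```

Even under these assumptions, the recurring inference bill overtakes the one-off training cost within months, which is the crux of the argument that cheaper training does not translate into lower overall demand for compute.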

LeCun’s view aligned with analyses from key AI investors and industry leaders. He echoed Antoine Blondeau, co-founder of Alpha Intelligence Capital, who described Nvidia’s stock drop as “vastly overblown” and “NOT a ‘Sputnik moment’”, pushing back on the suggestion that Nvidia’s market position was under threat. Additionally, Jonathan Ross, founder of Groq, shared a video titled “Why $500B isn’t enough for AI,” explaining why AI compute demand remains insatiable despite efficiency gains.

This discussion underscores a critical aspect of AI economics: while training costs may drop with better algorithms and hardware, the sheer scale of inference workloads—powering AI assistants, chatbots, and generative models for billions of users—remains a dominant and growing expense. This supports the case for sustained investment in AI infrastructure, particularly in Nvidia’s GPUs, which continue to be the gold standard for inference at scale.

Quote: Marc Andreessen

“DeepSeek-R1 is AI’s Sputnik moment.” – Marc Andreessen, Andreessen Horowitz

In a 27th January 2025 post on X that sent shockwaves through the tech community, venture capitalist Marc Andreessen declared that DeepSeek’s R1 AI reasoning model is “AI’s Sputnik moment.” The analogy draws a parallel between China’s breakthrough in artificial intelligence and the Soviet Union’s historic launch of the first satellite into orbit in 1957.

The Rise of DeepSeek-R1

DeepSeek, a Chinese AI lab, has made headlines with its open-source release of R1, an AI reasoning model that is not only far more cost-efficient to build but also poses a significant challenge to the dominance of Western tech giants. The model’s reported ability to cut compute requirements by half without sacrificing accuracy has rattled the industry.

A New Era in AI

The release of DeepSeek-R1 marks a turning point in the AI arms race, as it challenges the long-held assumption that only a select few companies can compete in this space. By making its research open-source, DeepSeek is empowering anyone to build their own version of R1 and tailor it to their needs.

Implications for Megacap Stocks

The success of DeepSeek-R1 has significant implications for megacap stocks like Microsoft, Alphabet, and Amazon, which have long relied on proprietary AI models to maintain their technological advantage. The open-source nature of R1 threatens to wipe out this advantage, potentially disrupting the business models of these tech giants.

Nvidia’s Nightmare

The news comes as a blow to Nvidia CEO Jensen Huang, who is ramping up production of the Blackwell chip, a more advanced successor to the company’s industry-leading Hopper-series H100s. Nvidia controls roughly 90% of the AI semiconductor market, but R1’s ability to reduce compute requirements may make these chips less essential.

A New Era of Innovation

Perplexity AI founder Aravind Srinivas praised DeepSeek’s team for catching up to the West through clever engineering, including training in 8-bit floating point (FP8) precision rather than the higher-precision formats more commonly used. This innovation not only reduces costs but also demonstrates that China is no longer just a copycat, but a leader in AI innovation.
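For intuition on why lower-precision arithmetic matters, the sketch below contrasts 32-bit and 16-bit storage of a weight matrix in NumPy. NumPy has no native FP8 type, so float16 stands in to illustrate the principle, and the FP8 figure is simply extrapolated at one byte per value; this is an illustration of the general idea, not DeepSeek’s actual implementation.

```python
import numpy as np

# Illustration only: NumPy lacks a native FP8 dtype, so float16 stands in to
# show how lower-precision formats cut memory and bandwidth per value.
weights_fp32 = np.random.rand(1024, 1024).astype(np.float32)
weights_fp16 = weights_fp32.astype(np.float16)

print(f"FP32: {weights_fp32.nbytes / 1e6:.1f} MB ({weights_fp32.itemsize} bytes/value)")
print(f"FP16: {weights_fp16.nbytes / 1e6:.1f} MB ({weights_fp16.itemsize} bytes/value)")
print(f"FP8 equivalent: {weights_fp32.size / 1e6:.1f} MB (1 byte/value)")

# The trade-off is numerical resolution: training at very low precision needs
# careful scaling to preserve accuracy, which is what makes it hard to do well.
max_error = float(np.abs(weights_fp32 - weights_fp16.astype(np.float32)).max())
print(f"Largest FP32 -> FP16 rounding error in this sample: {max_error:.2e}")
```

Halving or quartering the bytes per value translates directly into less memory, less bandwidth, and cheaper training and inference, provided accuracy can be maintained.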

Quote: Jeffrey Emanuel

“With R1, DeepSeek essentially cracked one of the holy grails of AI: getting models to reason step-by-step without relying on massive supervised datasets.” – Jeffrey Emanuel

Jeffrey Emanuel’s statement (“The Short Case for Nvidia Stock” – 25th January 2025) highlights a groundbreaking achievement in AI with DeepSeek’s R1 model, which has made significant strides in enabling step-by-step reasoning without the traditional reliance on vast supervised datasets:

  1. Innovation Through Reinforcement Learning (RL):
    • The R1 model employs reinforcement learning, a method where models learn through trial and error with feedback. This approach reduces the dependency on large labeled datasets typically required for training, making it more efficient and accessible.
  2. Advanced Reasoning Capabilities:
    • R1 excels in tasks requiring logical inference and mathematical problem-solving. Its ability to demonstrate step-by-step reasoning is crucial for complex decision-making processes, applicable across various industries from autonomous systems to intricate problem-solving tasks.
  3. Efficiency and Accessibility:
    • By utilizing RL and knowledge distillation techniques, R1 efficiently transfers learning to smaller models. This democratizes AI technology, allowing global researchers and developers to innovate without proprietary barriers, thus expanding the reach of advanced AI solutions.
  4. Impact on Data-Scarce Industries:
    • The model’s capability to function with limited data is particularly beneficial in sectors like medicine and finance, where labeled data is scarce due to privacy concerns or high costs. This opens doors for more ethical and feasible AI applications in these fields.
  5. Competitive Landscape and Innovation:
    • R1 positions itself as a competitor to models like OpenAI’s o1, signaling a shift towards accessible AI technology. This fosters competition and encourages other companies to innovate similarly, driving advancements across the AI landscape.

In essence, DeepSeek’s R1 model represents a significant leap in AI efficiency and accessibility, offering profound implications for various industries by reducing data dependency and enhancing reasoning capabilities.
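The two techniques above can be made concrete with minimal sketches in PyTorch. These are toy illustrations under stated assumptions, not DeepSeek’s actual training recipe (which reportedly uses a more elaborate reinforcement-learning method, GRPO); every model, value, and name here is hypothetical. The first sketch shows the core idea from point 1: sample an answer, check it against a verifiable result, and reinforce the behaviour that produced a correct answer.

```python
import torch

# Toy policy scores 4 candidate answers; the reward is 1 only when the sampled
# answer matches a verifiably correct one (e.g. a maths answer that checks out).
policy = torch.nn.Linear(16, 4)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)

state = torch.randn(1, 16)   # stand-in for an encoded question
correct_answer = 2           # index of the verifiably correct answer

for step in range(200):
    dist = torch.distributions.Categorical(logits=policy(state))
    action = dist.sample()                           # the model "tries" an answer
    reward = 1.0 if action.item() == correct_answer else 0.0
    loss = -(dist.log_prob(action) * reward).mean()  # reinforce rewarded behaviour
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(policy(state).softmax(dim=-1))  # probability mass shifts toward answer 2
```

The second sketch illustrates the knowledge distillation mentioned in point 3: a small student model is trained to match the softened output distribution of a larger teacher on unlabelled inputs, transferring capability without labelled data.

```python
import torch
import torch.nn.functional as F

# A small "student" learns to match the softened outputs of a larger "teacher".
teacher = torch.nn.Linear(16, 4)   # stand-in for a large reasoning model
student = torch.nn.Linear(16, 4)   # the (normally much smaller) distilled model
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0                            # temperature: softens both distributions

x = torch.randn(32, 16)            # a batch of unlabelled inputs
with torch.no_grad():
    teacher_probs = F.softmax(teacher(x) / T, dim=-1)

student_log_probs = F.log_softmax(student(x) / T, dim=-1)
loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * T * T

optimizer.zero_grad()
loss.backward()
optimizer.step()
```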

Quote: Jensen Huang

“Software is eating the world, but AI is going to eat software.”

Jensen Huang
CEO, Nvidia

Quote: Jensen Huang

“The most powerful technologies are the ones that empower others.”

Jensen Huang
CEO, Nvidia

Quote: Jensen Huang

“Never stop asking questions and seeking answers. Curiosity fuels progress.”

Jensen Huang
CEO, Nvidia

Quote: Jensen Huang

“Smart people focus on the right things.”

Jensen Huang
CEO, Nvidia
