
Global Advisors | Quantified Strategy Consulting

Quote: Sholto Douglas, Anthropic researcher

“We believe coding is extremely important because coding is that first step in which you will see AI research itself being accelerated… We think it is the most important leading indicator of model capabilities.”

Sholto Douglas, Anthropic researcher

Sholto Douglas is regarded as one of the most promising new minds in artificial intelligence research. Having graduated from the University of Sydney with a degree in Mechatronic (Space) Engineering under the guidance of Ian Manchester and Stefan Williams, Douglas entered the field of AI less than two years ago, quickly earning respect for his innovative contributions. At Anthropic, one of the leading AI research labs, he specializes in scaling reinforcement learning (RL) techniques within advanced language models, focusing on pushing the boundaries of what large language models can learn and execute autonomously.

Context of the Quote

The quote, delivered by Douglas in an interview with Redpoint—a venture capital firm known for its focus on disruptive startups and technology—underscores the central thesis driving Anthropic’s recent research efforts:

“We believe coding is extremely important because coding is that first step in which you will see AI research itself being accelerated… We think [coding is] the most important leading indicator of model capabilities.”

This statement reflects both the technical philosophy and the strategic direction of Anthropic’s latest research. Douglas views coding not only as a pragmatic benchmark but as a foundational skill that unlocks model self-improvement and, by extension, accelerates progress toward artificial general intelligence (AGI).

Claude 4 Launch: Announcements and Impact

Douglas’ remarks came just ahead of the public unveiling of Anthropic’s Claude 4, the company’s most sophisticated model to date. The event highlighted several technical milestones:

  • Reinforcement Learning Breakthroughs: Douglas described how, over the past year, RL techniques in language models had evolved from experimental to demonstrably successful, especially in complex domains like competitive programming and advanced mathematics. For the first time, they achieved “proof of an algorithm that can give us expert human reliability and performance, given the right feedback loop”.
  • Long-Term Vision: The launch positioned coding proficiency as the “leading indicator” for broader model capabilities, setting the stage for future models that can meaningfully contribute to their own research and improvement.
  • Societal Implications: Alongside the technical announcements, the event and subsequent interviews addressed how rapid advances in AI—exemplified by Claude 4—will impact industries, labor markets, and global policy, urging stakeholders to prepare for a world where AI agents are not just tools but collaborative problem-solvers.

Why This Moment Matters

Douglas’ focus on coding as a metric is rooted in the idea that tasks requiring deep logic and creative problem-solving, such as programming, provide a “canary in the coal mine” for model sophistication. Success in these domains demonstrates a leap not only in computational power or data processing, but in the ability of AI models to autonomously reason, plan, and build tools that further accelerate their own learning cycles.

The Claude 4 launch, and Douglas’ role within it, marks a critical inflection point in AI research. The ability of language models to code at—or beyond—expert human levels signals the arrival of AI systems capable of iteratively improving themselves, raising both hopes for extraordinary breakthroughs and urgent questions around safety, alignment, and governance.

Sholto Douglas’ Influence

Though relatively new to the field, Douglas has emerged as a thought leader shaping Anthropic’s approach to scalable, interpretable, and safe AI. His insights bridge technical expertise and strategic foresight, providing a clear-eyed perspective on the trajectory of rapidly advancing language models and their potential to fundamentally reshape the future of research and innovation.

Quote: Jensen Huang, Nvidia CEO

“AI inference token generation has surged tenfold in just one year, and as AI agents become mainstream, the demand for AI computing will accelerate. Countries around the world are recognizing AI as essential infrastructure – just like electricity and the internet.”

Jensen Huang, Nvidia CEO

Context: The Nvidia 2026 Q1 results

On May 28, 2025, NVIDIA announced its financial results for the first quarter of fiscal year 2026, reporting a record-breaking revenue of $44.1 billion, a 69% increase from the previous year. This surge was primarily driven by robust demand for AI chips, with the data center segment contributing significantly, achieving a 73% year-over-year revenue increase to $39.1 billion.

Despite these impressive figures, NVIDIA faced challenges due to U.S. export restrictions on its H20 chips to China, resulting in a $4.5 billion charge for excess inventory and an anticipated $8 billion revenue loss in the second quarter. During the earnings call, Huang criticized these restrictions, stating they have inadvertently spurred innovation in China rather than curbing it.

In the context of these developments, Huang remarked, “AI inference token generation has surged tenfold in just one year, and as AI agents become mainstream, the demand for AI computing will accelerate. Countries around the world are recognizing AI as essential infrastructure—just like electricity and the internet.” This statement underscores the transformative impact of AI across various sectors and highlights the critical role of AI infrastructure in modern economies.

Under Huang’s leadership, NVIDIA has not only achieved remarkable financial success but has also been at the forefront of AI and computing innovations. His strategic vision continues to shape the company’s trajectory, navigating complex international dynamics while driving technological progress.

Jensen Huang: Visionary Leader Behind Nvidia

Early Life and Education

Jensen Huang, born in Tainan, Taiwan, in 1963, immigrated to the United States at a young age. He pursued his undergraduate studies in electrical engineering at Oregon State University, earning a Bachelor of Science degree, and later completed a Master of Science in Electrical Engineering at Stanford University. Before founding Nvidia, Huang gained industry experience at LSI Logic and Advanced Micro Devices (AMD), building a foundation in semiconductor technology and business leadership.

Founding Nvidia and Early Struggles

In 1993, at the age of 30, Huang co-founded Nvidia with Chris Malachowsky and Curtis Priem. The company’s inception was humble—its first meetings took place in a local Denny’s restaurant. The early years were marked by intense challenges and uncertainty. Nvidia’s initial focus on graphics accelerator chips nearly led to its demise, with the company surviving on a critical $5 million investment from Sega. By 1997, Nvidia was just a month away from running out of payroll funds before the release of the RIVA 128 chip turned its fortunes around.

Huang’s leadership style was forged in these difficult times. He often reminded his team, “Our company is thirty days from going out of business,” a mantra that underscored the urgency and resilience required to survive in Silicon Valley’s fast-paced environment. Huang has credited these hardships as essential to his growth as a leader and to Nvidia’s eventual success.

Transforming the Tech Landscape

Under Huang’s stewardship, Nvidia pioneered the invention of the Graphics Processing Unit (GPU) in 1999, revolutionizing computer graphics and catalyzing the growth of the PC gaming industry. More recently, Nvidia has become a central player in the rise of artificial intelligence (AI) and accelerated computing, with its hardware and software platforms powering breakthroughs in data centers, autonomous vehicles, and generative AI.

Huang’s vision and execution have earned him widespread recognition, including election to the National Academy of Engineering, the Semiconductor Industry Association’s Robert N. Noyce Award, the IEEE Founder’s Medal, and inclusion in TIME magazine’s list of the 100 most influential people.

Quote: Jensen Huang, Nvidia CEO

“The question is not whether China will have AI, it already does.”

Jensen Huang, Nvidia CEO

Context: The Nvidia 2026 Q1 results

On May 28, 2025, NVIDIA announced its financial results for the first quarter of fiscal year 2026, reporting a record-breaking revenue of $44.1 billion, a 69% increase from the previous year. This surge was primarily driven by robust demand for AI chips, with the data center segment contributing significantly, achieving a 73% year-over-year revenue increase to $39.1 billion.

Despite these impressive figures, NVIDIA faced challenges due to U.S. export restrictions on its H20 chips to China, resulting in a $4.5 billion charge for excess inventory and an anticipated $8 billion revenue loss in the second quarter. During the earnings call, Huang criticized these restrictions, stating they have inadvertently spurred innovation in China rather than curbing it.

Huang’s statement, “The question is not whether China will have AI, it already does,” underscores his perspective on the global AI landscape. He emphasized that export controls may not prevent technological advancements in China but could instead accelerate domestic innovation. This viewpoint reflects Huang’s broader understanding of the interconnectedness of global technology development and the challenges posed by geopolitical tensions. He followed by stating, “The question is whether one of the world’s largest AI markets will run on American platforms. Shielding Chinese chipmakers from U.S. competition only strengthens them abroad and weakens America’s position.”

Under Huang’s leadership, NVIDIA has not only achieved remarkable financial success but has also been at the forefront of AI and computing innovations. His strategic vision continues to shape the company’s trajectory, navigating complex international dynamics while driving technological progress.

Jensen Huang: Visionary Leader Behind Nvidia

Early Life and Education

Jensen Huang, born in Tainan, Taiwan, in 1963, immigrated to the United States at a young age. He pursued his undergraduate studies in electrical engineering at Oregon State University, earning a Bachelor of Science degree, and later completed a Master of Science in Electrical Engineering at Stanford University. Before founding Nvidia, Huang gained industry experience at LSI Logic and Advanced Micro Devices (AMD), building a foundation in semiconductor technology and business leadership.

Founding Nvidia and Early Struggles

In 1993, at the age of 30, Huang co-founded Nvidia with Chris Malachowsky and Curtis Priem. The company’s inception was humble—its first meetings took place in a local Denny’s restaurant. The early years were marked by intense challenges and uncertainty. Nvidia’s initial focus on graphics accelerator chips nearly led to its demise, with the company surviving on a critical $5 million investment from Sega. By 1997, Nvidia was just a month away from running out of payroll funds before the release of the RIVA 128 chip turned its fortunes around.

Huang’s leadership style was forged in these difficult times. He often reminded his team, “Our company is thirty days from going out of business,” a mantra that underscored the urgency and resilience required to survive in Silicon Valley’s fast-paced environment. Huang has credited these hardships as essential to his growth as a leader and to Nvidia’s eventual success.

Transforming the Tech Landscape

Under Huang’s stewardship, Nvidia pioneered the invention of the Graphics Processing Unit (GPU) in 1999, revolutionizing computer graphics and catalyzing the growth of the PC gaming industry. More recently, Nvidia has become a central player in the rise of artificial intelligence (AI) and accelerated computing, with its hardware and software platforms powering breakthroughs in data centers, autonomous vehicles, and generative AI.

Huang’s vision and execution have earned him widespread recognition, including election to the National Academy of Engineering, the Semiconductor Industry Association’s Robert N. Noyce Award, the IEEE Founder’s Medal, and inclusion in TIME magazine’s list of the 100 most influential people.

Quote: Yann LeCun

“Most of the infrastructure cost for AI is for inference: serving AI assistants to billions of people.”
— Yann LeCun, VP & Chief AI Scientist at Meta

Yann LeCun made this comment in response to the sharp drop in Nvidia’s share price on January 27, 2025, following the launch of DeepSeek R1, a new AI model developed by DeepSeek. The model was reportedly trained at a fraction of the cost incurred by leading AI labs such as OpenAI, Anthropic, and Google DeepMind, raising questions about whether Nvidia’s dominance in AI compute was at risk.

The market reaction stemmed from speculation that the training costs of cutting-edge AI models—previously seen as a key driver of Nvidia’s GPU demand—could decrease significantly with more efficient methods. However, LeCun pointed out that most AI infrastructure costs come not from training but from inference, the process of running AI models at scale to serve billions of users. This suggests that Nvidia’s long-term demand may remain strong, as inference still relies heavily on high-performance GPUs.
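
To see why inference can dominate, a rough back-of-envelope calculation helps. The sketch below (in Python) is illustrative only: the model size, user count, and per-token FLOP approximations are assumptions, not figures from Meta or Nvidia.

  # Back-of-envelope: one-off training compute vs ongoing inference compute.
  # Every number here is an assumption chosen for illustration.
  params = 70e9                 # assumed model size: 70B parameters
  train_tokens = 15e12          # assumed training corpus: 15T tokens

  # Common approximations: ~6 FLOPs per parameter per training token,
  # ~2 FLOPs per parameter per generated token at inference.
  train_flops = 6 * params * train_tokens

  users = 1e9                   # assumed daily active users of AI assistants
  tokens_per_user = 2_000       # assumed tokens generated per user per day
  daily_inference_flops = 2 * params * users * tokens_per_user

  print(f"Training (one-off):  {train_flops:.1e} FLOPs")
  print(f"Inference (per day): {daily_inference_flops:.1e} FLOPs")
  print(f"Inference overtakes training after ~{train_flops / daily_inference_flops:.0f} days")

Under these assumptions, cumulative inference compute overtakes the entire training run in roughly three weeks, which is the crux of LeCun’s argument.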

LeCun’s view aligned with analyses from key AI investors and industry leaders. He supported the argument made by Antoine Blondeau, co-founder of Alpha Intelligence Capital, who described Nvidia’s stock drop as “vastly overblown” and “NOT a ‘Sputnik moment’”, pushing back on the suggestion that Nvidia’s market position was insecure. Additionally, Jonathan Ross, founder of Groq, shared a video titled “Why $500B isn’t enough for AI,” explaining why AI compute demand remains insatiable despite efficiency gains.

This discussion underscores a critical aspect of AI economics: while training costs may drop with better algorithms and hardware, the sheer scale of inference workloads—powering AI assistants, chatbots, and generative models for billions of users—remains a dominant and growing expense. This supports the case for sustained investment in AI infrastructure, particularly in Nvidia’s GPUs, which continue to be the gold standard for inference at scale.

Infographic: Four critical DeepSeek enablers

The DeepSeek team has introduced several high-impact changes to Large Language Model (LLM) architecture to enhance performance and efficiency:

  1. Multi-Head Latent Attention (MLA): This mechanism enables the model to process multiple facets of input data simultaneously, improving both efficiency and performance. MLA reduces the memory required to compute a transformer’s attention by a factor of 7.5x to 20x, a breakthrough that makes large-scale AI applications more feasible. Unlike Flash Attention, which improves how data is organized in memory, MLA compresses the KV cache into a lower-dimensional space, significantly reducing memory usage—down to 5% to 13% of traditional attention mechanisms—while maintaining performance. (An illustrative sketch of this idea appears after this list.)
  2. Mixture-of-Experts (MoE) Architecture: DeepSeek employs an MoE system that activates only a subset of its total parameters during any given task. For instance, in DeepSeek-V3, only 37 billion out of 671 billion parameters are active at a time, significantly reducing computational costs. This approach enhances efficiency and aligns with the trend of making AI models more compute-light, allowing freed-up GPU resources to be allocated to multi-modal processing, spatial intelligence, or genomic analysis. MoE models, as also leveraged by Mistral and other leading AI labs, allow for scalability while keeping inference costs manageable.
  3. FP8 Floating Point Precision: To enhance computational efficiency, DeepSeek-V3 utilizes FP8 floating point precision during training, which helps in reducing memory usage and accelerating computation. This follows a broader trend in AI to optimize training methodologies, potentially influencing the approach taken by U.S.-based LLM providers. Given China’s restricted access to high-end GPUs due to U.S. export controls, optimizations like FP8 and MLA are critical in overcoming hardware limitations.
  4. DeepSeek-R1 and Test-Time Compute Capabilities: DeepSeek-R1 is a model that leverages reinforcement learning (RL) to enable test-time compute, significantly improving reasoning capabilities. The model was trained using an innovative RL strategy, incorporating fine-tuned Chain of Thought (CoT) data and supervised fine-tuning (SFT) data across multiple domains. Notably, DeepSeek demonstrated that any sufficiently powerful LLM can be transformed into a high-performance reasoning model using only 800k curated training samples. This technique allows for rapid adaptation of smaller models, such as Qwen and LLaMa-70b, into competitive reasoners.
  5. Distillation to Smaller Models: The team has developed distilled versions of their models, such as DeepSeek-R1-Distill, which are fine-tuned on synthetic data generated by larger models. These distilled models contain fewer parameters, making them more efficient while retaining significant capabilities. DeepSeek’s ability to achieve comparable reasoning performance at a fraction of the cost of OpenAI’s models (5% of the cost, according to Pelliccione) has disrupted the AI landscape.
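
To make the latent KV-cache idea in item 1 concrete, the following is a minimal PyTorch sketch of attention over a compressed latent cache, in the spirit of MLA. It is not DeepSeek’s implementation: the dimensions, projection names, and single-token decode step are assumptions chosen for clarity.

  import torch
  import torch.nn as nn

  class LatentKVCache(nn.Module):
      """Toy latent-attention block: caches one small latent vector per
      token instead of full keys/values. Illustrative dimensions only."""

      def __init__(self, d_model=4096, d_latent=512, n_heads=32):
          super().__init__()
          self.n_heads, self.d_head = n_heads, d_model // n_heads
          self.q_proj = nn.Linear(d_model, d_model, bias=False)
          self.kv_down = nn.Linear(d_model, d_latent, bias=False)  # compress
          self.k_up = nn.Linear(d_latent, d_model, bias=False)     # rebuild keys
          self.v_up = nn.Linear(d_latent, d_model, bias=False)     # rebuild values

      def forward(self, x, latent_cache):
          # x: (batch, 1, d_model), one new token at decode time.
          latent_cache = torch.cat([latent_cache, self.kv_down(x)], dim=1)
          # Only the small latent is stored; full K/V are rebuilt on the fly.
          k, v, q = self.k_up(latent_cache), self.v_up(latent_cache), self.q_proj(x)
          def split(t):  # (batch, seq, d_model) -> (batch, heads, seq, d_head)
              return t.view(t.shape[0], t.shape[1], self.n_heads, self.d_head).transpose(1, 2)
          out = nn.functional.scaled_dot_product_attention(split(q), split(k), split(v))
          return out.transpose(1, 2).reshape(x.shape[0], 1, -1), latent_cache

  # Usage: start with an empty cache and decode one token at a time.
  block = LatentKVCache()
  out, cache = block(torch.randn(1, 1, 4096), torch.zeros(1, 0, 512))

Per token, this caches 512 values instead of the 2 x 4096 a conventional KV cache would store, a 16x reduction, which sits inside the 7.5x-20x range quoted above.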

The Impact of Open-Source Models:

DeepSeek’s success highlights a fundamental shift in AI development. Traditionally, leading-edge models have been closed-source and controlled by Western AI firms like OpenAI, Google, and Anthropic. However, DeepSeek’s approach, leveraging open-source components while innovating on training efficiency, has disrupted this dynamic. Pelliccione notes that DeepSeek now offers similar performance to OpenAI at just 5% of the cost, making high-quality AI more accessible. This shift pressures proprietary AI companies to rethink their business models and embrace greater openness.

Challenges and Innovations in the Chinese AI Ecosystem:

China’s AI sector faces major constraints, particularly in access to high-performance GPUs due to U.S. export restrictions. Yet, Chinese companies like DeepSeek have turned these challenges into strengths through aggressive efficiency improvements. MLA and FP8 precision optimizations exemplify how innovation can offset hardware limitations. Furthermore, Chinese AI firms, historically focused on scaling existing tech, are now contributing to fundamental advancements in AI research, signaling a shift towards deeper innovation.
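
As a small illustration of what lower precision buys, the snippet below compares bytes per value across PyTorch dtypes (float8 support landed in PyTorch 2.1); the 671B parameter count is taken from the DeepSeek-V3 figures above, and the rest is simple arithmetic.

  import torch

  # Bytes per value for common training dtypes.
  for dtype in (torch.float32, torch.bfloat16, torch.float8_e4m3fn):
      print(dtype, torch.finfo(dtype).bits // 8, "byte(s) per value")

  # Raw weight storage for a 671B-parameter model at each precision:
  params = 671e9
  for name, nbytes in (("FP32", 4), ("BF16", 2), ("FP8", 1)):
      print(f"{name}: {params * nbytes / 1e12:.1f} TB")

Halving or quartering the bytes needed per weight and activation stretches a fixed GPU fleet correspondingly further, which is precisely the lever a hardware-constrained lab needs.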

The Future of AI Control and Adaptation:

DeepSeek-R1’s approach to training AI reasoners poses a challenge to traditional AI control mechanisms. Since reasoning capabilities can now be transferred to any capable model with fewer than a million curated samples, AI governance must extend beyond compute resources and focus on securing datasets, training methodologies, and deployment platforms. OpenAI has previously obscured Chain of Thought traces to prevent leakage, but DeepSeek’s open-weight release and published RL techniques have made such restrictions ineffective.

Broader Industry Context:

  • DeepSeek benefits from Western open-source AI developments, particularly Meta’s Llama model releases, which provided a foundation for its advancements. However, DeepSeek’s success also demonstrates that China is shifting from scaling existing technology to innovating at the frontier.
  • Open-source models like DeepSeek will see widespread adoption for enterprise and research applications, though Western businesses are unlikely to build their consumer apps on a Chinese API.
  • The AI innovation cycle is exceptionally fast, with breakthroughs assessed daily or weekly. DeepSeek’s advances are part of a rapidly evolving competitive landscape dominated by U.S. big tech players like OpenAI, Google, Microsoft, and Meta, who continue to push for productization and revenue generation. Meanwhile, Chinese AI firms, despite hardware and data limitations, are innovating at an accelerated pace and have proven capable of challenging OpenAI’s dominance.

These innovations collectively contribute to more efficient and effective LLMs, balancing performance with resource utilization while shaping the future of AI model development.

Sources: Global Advisors, Jack Clark – Anthropic, Antoine Blondeau, Alberto Pelliccione, infoq.com, medium.com, en.wikipedia.org, arxiv.org

Quote: Jack Clark

“The most surprising part of DeepSeek-R1 is that it only takes ~800k samples of ‘good’ RL reasoning to convert other models into RL-reasoners. Now that DeepSeek-R1 is available people will be able to refine samples out of it to convert any other model into an RL reasoner.” – Jack Clark, Anthropic

Jack Clark, co-founder of Anthropic, co-chair of Stanford University’s AI Index, and co-chair of the OECD working group on AI & Compute, shed light on the significance of DeepSeek-R1, a revolutionary AI reasoning model developed by China’s DeepSeek team. In his newsletter on 27th January 2025, Clark highlighted that it takes only approximately 800k samples of “good” RL (Reinforcement Learning) reasoning to convert other models into RL-reasoners.

The Power of Fine-Tuning

DeepSeek-R1 is not just a powerful AI model; it also provides a framework for fine-tuning existing models to enhance their reasoning capabilities. By leveraging the 800k samples curated with DeepSeek-R1, researchers can refine any other model into an RL reasoner. This approach has been demonstrated by fine-tuning open-source models like Qwen and Llama using the same dataset.
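
As a minimal sketch of that recipe, supervised fine-tuning an open base model on reasoning traces sampled from R1, the following uses the Hugging Face transformers Trainer. The dataset file, base-model choice, and hyperparameters are placeholders for illustration, not DeepSeek’s actual setup.

  from datasets import load_dataset
  from transformers import (AutoModelForCausalLM, AutoTokenizer,
                            DataCollatorForLanguageModeling, Trainer,
                            TrainingArguments)

  # Hypothetical JSONL file of ~800k R1-generated reasoning traces,
  # one {"prompt": ..., "response": ...} object per line.
  dataset = load_dataset("json", data_files="r1_reasoning_traces.jsonl")["train"]

  base = "Qwen/Qwen2.5-7B"  # any sufficiently strong open base model
  tokenizer = AutoTokenizer.from_pretrained(base)
  if tokenizer.pad_token is None:
      tokenizer.pad_token = tokenizer.eos_token
  model = AutoModelForCausalLM.from_pretrained(base)

  def tokenize(example):
      # Train on prompt + chain-of-thought response as a single sequence.
      text = example["prompt"] + "\n" + example["response"] + tokenizer.eos_token
      return tokenizer(text, truncation=True, max_length=4096)

  trainer = Trainer(
      model=model,
      args=TrainingArguments(output_dir="r1-distill-sft",
                             per_device_train_batch_size=1,
                             gradient_accumulation_steps=16,
                             num_train_epochs=2,
                             learning_rate=1e-5,
                             bf16=True),
      train_dataset=dataset.map(tokenize, remove_columns=dataset.column_names),
      # mlm=False yields standard next-token (causal LM) labels.
      data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
  )
  trainer.train()

Notably, DeepSeek reports that its R1-Distill variants of Qwen and Llama were produced with supervised fine-tuning of this kind alone, with no RL stage on the student model.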

Implications for AI Policy

The release of DeepSeek-R1 has significant implications for AI policy and control. As Clark notes, if you need fewer than a million samples to convert any model into a “thinker,” it becomes much harder to control AI systems. This is because the valuable data, including chains of thought from reasoning models, can be leaked or shared openly.

A New Era in AI Development

The availability of DeepSeek-R1 and its associated techniques has created a new era in AI development. With an open weight model floating around the internet, researchers can now bootstrap any other sufficiently powerful base model into being an AI reasoner. This has the potential to accelerate AI progress worldwide.

Key Takeaways:

  • Fine-tuning is key: DeepSeek-R1 demonstrates that fine-tuning existing models with a small amount of data (800k samples) can significantly enhance their reasoning capabilities.
  • Open-source and accessible: The model and its techniques are now available for anyone to use, making it easier for researchers to develop powerful AI reasoners.
  • Implications for control: The release of DeepSeek-R1 highlights the challenges of controlling AI systems, as valuable data can be leaked or shared openly.

Conclusion

DeepSeek-R1 has marked a significant milestone in AI development, showcasing the power of fine-tuning and open-source collaboration. As researchers continue to build upon this work, we can expect to see even more advanced AI models emerge, with far-reaching implications for various industries and applications.

Quote: Marc Andreessen

“DeepSeek-R1 is AI’s Sputnik moment.” – Marc Andreessen, Andreessen Horowitz

In an X post on 27th January 2025 that sent shockwaves through the tech community, venture capitalist Marc Andreessen declared that DeepSeek’s R1 AI reasoning model is “AI’s Sputnik moment.” The analogy draws parallels between China’s breakthrough in artificial intelligence and the Soviet Union’s historic achievement of launching the first satellite into orbit in 1957.

The Rise of DeepSeek-R1

DeepSeek, a Chinese AI lab, has made headlines with its open-source release of R1, a revolutionary AI reasoning model that is not only more cost-efficient but also poses a significant threat to the dominance of Western tech giants. The model’s ability to reduce compute requirements by half without sacrificing accuracy has sent shockwaves through the industry.

A New Era in AI

The release of DeepSeek-R1 marks a turning point in the AI arms race, as it challenges the long-held assumption that only a select few companies can compete in this space. By making its research open-source, DeepSeek is empowering anyone to build their own version of R1 and tailor it to their needs.

Implications for Megacap Stocks

The success of DeepSeek-R1 has significant implications for megacap stocks like Microsoft, Alphabet, and Amazon, which have long relied on proprietary AI models to maintain their technological advantage. The open-source nature of R1 threatens to wipe out this advantage, potentially disrupting the business models of these tech giants.

Nvidia’s Nightmare

The news comes as a blow to Nvidia CEO Jensen Huang, who is ramping up production of the Blackwell chip, a more advanced successor to the industry-leading Hopper-series H100s. Nvidia controls roughly 90% of the AI semiconductor market, but R1’s ability to reduce compute requirements may render these chips less essential.

A New Era of Innovation

Perplexity AI founder Aravind Srinivas praised DeepSeek’s team for catching up to the West through clever engineering choices, including the adoption of 8-bit floating point (FP8) precision. This innovation not only reduces costs but also demonstrates that China is no longer just a copycat, but a leader in AI innovation.

Quote: Jeffrey Emanuel

“With R1, DeepSeek essentially cracked one of the holy grails of AI: getting models to reason step-by-step without relying on massive supervised datasets.” – Jeffrey Emanuel

Jeffrey Emanuel’s statement (“The Short Case for Nvidia Stock” – 25th January 2025) highlights a groundbreaking achievement in AI with DeepSeek’s R1 model, which has made significant strides in enabling step-by-step reasoning without the traditional reliance on vast supervised datasets:

  1. Innovation Through Reinforcement Learning (RL):
    • The R1 model employs reinforcement learning, a method where models learn through trial and error with feedback. This approach reduces the dependency on large labeled datasets typically required for training, making it more efficient and accessible.
  2. Advanced Reasoning Capabilities:
    • R1 excels in tasks requiring logical inference and mathematical problem-solving. Its ability to demonstrate step-by-step reasoning is crucial for complex decision-making processes, applicable across various industries from autonomous systems to intricate problem-solving tasks.
  3. Efficiency and Accessibility:
    • By utilizing RL and knowledge distillation techniques, R1 efficiently transfers learning to smaller models. This democratizes AI technology, allowing global researchers and developers to innovate without proprietary barriers, thus expanding the reach of advanced AI solutions. (A generic distillation-loss sketch follows this list.)
  4. Impact on Data-Scarce Industries:
    • The model’s capability to function with limited data is particularly beneficial in sectors like medicine and finance, where labeled data is scarce due to privacy concerns or high costs. This opens doors for more ethical and feasible AI applications in these fields.
  5. Competitive Landscape and Innovation:
    • R1 positions itself as a competitor to models like OpenAI’s o1, signaling a shift towards accessible AI technology. This fosters competition and encourages other companies to innovate similarly, driving advancements across the AI landscape.
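
For item 3, the classic logit-level form of knowledge distillation fits in a few lines. The sketch below is the generic Hinton-style loss, shown for illustration; DeepSeek’s published R1 distillation instead fine-tunes student models on teacher-generated text.

  import torch
  import torch.nn.functional as F

  def distillation_loss(student_logits, teacher_logits, temperature=2.0):
      # Temperature-scaled KL divergence between teacher and student
      # output distributions (Hinton et al., 2015). Generic illustration.
      s = F.log_softmax(student_logits / temperature, dim=-1)
      t = F.softmax(teacher_logits / temperature, dim=-1)
      # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
      return F.kl_div(s, t, reduction="batchmean") * temperature ** 2

  # Toy usage with random logits over a 32k-token vocabulary:
  student = torch.randn(4, 32_000)
  teacher = torch.randn(4, 32_000)
  print(distillation_loss(student, teacher))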

In essence, DeepSeek’s R1 model represents a significant leap in AI efficiency and accessibility, offering profound implications for various industries by reducing data dependency and enhancing reasoning capabilities.
