
Global Advisors | Quantified Strategy Consulting

Quote: Jack Clark – Import AI

“Since 2020, we have seen a 600,000x increase in the computational scale of decentralized training projects, for an implied growth rate of about 20x/year.” – Jack Clark – Import AI

Jack Clark on Exponential Growth in Decentralized AI Training

The Quote and Its Context

Jack Clark’s statement about the 600,000x increase in computational scale for decentralized training projects over approximately five years (2020-2025) represents a striking observation about the democratization of frontier AI development.1,2,3,4 This 20x annual growth rate reflects one of the most significant shifts in the technological and political economy of artificial intelligence: the transition from centralized, proprietary training architectures controlled by a handful of well-capitalized labs toward distributed, federated approaches that enable loosely coordinated collectives to pool computational resources globally.
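
As a quick sanity check on the numbers (our own back-of-the-envelope arithmetic, not Clark's), the total increase and the annual rate together pin down the time window:

```python
import math

factor = 600_000   # total increase in computational scale since 2020
rate = 20          # Clark's implied annual growth rate

# Years of compounding at 20x/year needed to reach a 600,000x increase:
years = math.log(factor) / math.log(rate)
print(f"{years:.1f} years")            # ~4.4 years, i.e. roughly 2020 to mid-2024

# Conversely, the implied annual rate if the window were a full five years:
print(f"{factor ** (1/5):.1f}x/year")  # ~14.3x/year, so "about 20x" is a round figure
```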

Jack Clark: Architect of AI Governance Thinking

Jack Clark is the Head of Policy at Anthropic and one of the most influential voices shaping how we think about AI development, governance, and the distribution of technological power.1 His trajectory uniquely positions him to observe this transformation. Clark co-authored the original GPT-2 paper at OpenAI in 2019, a moment he now reflects on as pivotal—not merely for the model’s capabilities, but for what it revealed about scaling laws: the discovery that larger models trained on more data would exhibit predictably superior performance across diverse tasks, even without task-specific optimization.1

This insight proved prophetic. Clark recognized that GPT-2 was “a sketch of the future”—a partial glimpse of what would emerge through scaling. The paper’s modest performance advances on seven of eight tested benchmarks, achieved without narrow task optimization, suggested something fundamental about how neural networks could be made more generally capable.1 What followed validated his foresight: GPT-3, instruction-tuned variants, ChatGPT, Claude, and the subsequent explosion of large language models all emerged from the scaling principles Clark and colleagues had identified.

However, Clark’s thinking has evolved substantially since those early days. Reflecting in 2024, five years after GPT-2’s release, he acknowledged that while his team had anticipated many malicious uses of advanced language models, they failed to predict the most disruptive actual impact: the generation of low-grade synthetic content driven by economic incentives rather than malicious intent.1 This humility about the limits of foresight informs his current policy positions.

The Political Economy of Decentralized Training

Clark’s observation about the 600,000x scaling in decentralized training projects is not merely a technical metric—it is a statement about power distribution. Currently, the frontier of AI capability depends on the ability to concentrate vast amounts of computational resources in physically centralized clusters. AI labs like Anthropic and OpenAI and hyperscalers like Google and Meta control this concentrated compute, which has, at least in theory, enabled governments and policymakers to monitor and regulate AI development through chokepoints: controlling access to advanced semiconductors, tracking large training clusters, and licensing centralized development entities.3,4

Decentralized training disrupts this assumption entirely. If computational resources can be pooled across hundreds of loosely federated organizations and individuals globally—each contributing smaller clusters of GPUs or other accelerators—then the frontier of AI capability becomes distributed across many actors rather than concentrated in a few.3,4 This changes everything about AI policy, which has largely been built on the premise of controllable centralization.

Recent proof-of-concepts underscore this trajectory:

  • Prime Intellect’s INTELLECT-1 (10 billion parameters) demonstrated that decentralized training at scale was technically feasible, a threshold achievement because it showed loosely coordinated collectives could match capabilities that previously required single-company efforts.3,9

  • INTELLECT-2 (32 billion parameters) followed, designed to compete with modern reasoning models through distributed training, suggesting that decentralized approaches were not merely proof-of-concept but could produce competitive frontier-grade systems.4

  • DiLoCoX, an advancement on DeepMind’s DiLoCo technology, demonstrated a 357x speedup in distributed training while achieving model convergence across decentralized clusters with minimal network bandwidth (1 Gbps)—a crucial breakthrough because communication overhead had previously been the limiting factor in distributed training (a simplified sketch of the underlying local-update idea follows this list).2
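
To make the bandwidth point concrete, here is a minimal, single-process sketch of the DiLoCo-style inner/outer loop (our simplification in PyTorch, not the DiLoCoX implementation): each worker takes many local optimizer steps and communicates only a parameter delta once per round, which is what collapses the communication requirement.

```python
import copy
import torch

def diloco_round(global_model, worker_batches, inner_steps=500, outer_lr=0.7):
    """One communication round: local training on each worker, then one outer step."""
    deltas = []
    for batches in worker_batches:                 # one entry per remote cluster
        local = copy.deepcopy(global_model)        # start from the global weights
        opt = torch.optim.AdamW(local.parameters(), lr=1e-4)
        for x, y in batches[:inner_steps]:         # many local steps, zero communication
            loss = torch.nn.functional.mse_loss(local(x), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
        # Each worker sends only its parameter delta, once per round.
        deltas.append([gp.detach() - lp.detach()
                       for gp, lp in zip(global_model.parameters(), local.parameters())])
    # Outer step: treat the averaged delta as a pseudo-gradient on the global model.
    # (DiLoCo uses Nesterov-momentum SGD here; plain SGD keeps the sketch short.)
    with torch.no_grad():
        for i, p in enumerate(global_model.parameters()):
            p.sub_(outer_lr * torch.stack([d[i] for d in deltas]).mean(0))
    return global_model
```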

The implied growth rate of 20x annually suggests an acceleration curve where technical barriers to decentralized training are falling faster than regulatory frameworks or policy interventions can adapt.

Leading Theorists and Intellectual Lineages

Scaling Laws and the Foundations

The intellectual foundation for understanding exponential growth in AI capabilities rests on the work of researchers who formalized scaling laws. While Clark and colleagues at OpenAI contributed to this work through GPT-2 and subsequent research, the broader field—including contributions from Jared Kaplan, Dario Amodei, and others at Anthropic—established that model performance scales predictably with increases in parameters, data, and compute.1 These scaling laws create the mathematical logic that enables decentralized systems to be competitive: a 32-billion-parameter model trained via distributed methods can approach the capabilities of centralized training at similar scales.
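
A rough illustration of the power-law form these researchers identified (constants of the kind reported by Kaplan et al. in 2020; exact values depend on dataset and architecture):

```python
# Kaplan-style scaling law: loss falls as a smooth power law in parameter count N.
ALPHA_N = 0.076     # illustrative exponent, of the order reported by Kaplan et al.
N_C = 8.8e13        # illustrative critical parameter count

def predicted_loss(n_params: float) -> float:
    """Predicted cross-entropy loss for a model with n_params parameters."""
    return (N_C / n_params) ** ALPHA_N

# The law cares only about scale, not where the FLOPs physically live - which is
# why a 32B model trained across federated clusters can track centralized peers.
for n in (10e9, 32e9, 70e9):
    print(f"{n/1e9:.0f}B params -> predicted loss {predicted_loss(n):.2f}")
```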

Political Economy and Technological Governance

Clark’s thinking is situated within broader intellectual traditions examining how technology distributes power. His emphasis on the “political economy” of AI reflects influence from scholars and policymakers concerned with how technological architectures embed power relationships. The notion that decentralized training redistributes who can develop frontier AI systems draws on longstanding traditions in technology policy examining how architectural choices (centralized vs. distributed systems) have political consequences.

His advocacy for polycentric governance—distributing decision-making about AI behavior across multiple scales from individuals to platforms to regulatory bodies—reflects engagement with governance theory emphasizing that monocentric control is often less resilient and responsive than systems with distributed decision-making authority.5

The “Regulatory Markets” Framework

Clark has articulated the need for governments to systematically monitor the societal impact and diffusion of AI technologies, a position he advanced through the concept of “Regulatory Markets”—market-driven mechanisms for monitoring AI systems. This framework acknowledges that traditional command-and-control regulation may be poorly suited to rapidly evolving technological domains and that measurement and transparency might be more foundational than licensing or restriction.1 This connects to broader work in regulatory innovation and adaptive governance.

The Implications of Exponential Decentralization

The 600,000x growth over five years, if sustained or accelerated, implies several transformative consequences:

On AI Policy: Traditional approaches to AI governance that assume centralized training clusters and a small number of frontier labs become obsolete. Export controls on advanced semiconductors, for instance, become less effective if 100 organizations in 50 countries can collectively train competitive models using previous-generation chips.3,4

On Open-Source Development: The growth depends crucially on the availability of open-weight models (like Meta’s Llama or DeepSeek) and accessible software stacks (like Prime.cpp) that enable distributed inference and fine-tuning.4 The democratization of capability is inseparable from the proliferation of open-source infrastructure.

On Sovereignty and Concentration: Clark frames this as essential for “sovereign AI”—the ability for nations, organizations, and individuals to develop and deploy capable AI systems without dependence on centralized providers. However, this same decentralization could enable the rapid proliferation of systems with limited safety testing or alignment work.4

On Clark’s Own Policy Evolution: Notably, Clark has found himself increasingly at odds with AI safety and policy positions he previously held or was associated with. He expresses skepticism toward licensing regimes for AI development, restrictions on open-source model deployment, and calls for worldwide development pauses—positions that, he argues, would create concentrated power in the present to prevent speculative future risks.1 Instead, he remains confident in the value of systematic societal impact monitoring and measurement, which he has championed through his work at Anthropic and in policy forums like the Bletchley and Seoul AI safety summits.1

The Unresolved Tension

The exponential growth in decentralized training capacity creates a central tension in AI governance: it democratizes access to frontier capabilities but potentially distributes both beneficial and harmful applications more widely. Clark’s quote and his broader work reflect an intellectual reckoning with this tension—recognizing that attempts to maintain centralized control through policy and export restrictions may be both technically infeasible and politically counterproductive, yet that some form of measurement and transparency remains essential for democratic societies to understand and respond to AI’s societal impacts.

References

1. https://jack-clark.net/2024/06/03/import-ai-375-gpt-2-five-years-later-decentralized-training-new-ways-of-thinking-about-consciousness-and-ai/

2. https://jack-clark.net/2025/06/30/import-ai-418-100b-distributed-training-run-decentralized-robots-ai-myths/

3. https://jack-clark.net/2024/10/14/import-ai-387-overfitting-vs-reasoning-distributed-training-runs-and-facebooks-new-video-models/

4. https://jack-clark.net/2025/04/21/import-ai-409-huawei-trains-a-model-on-8000-ascend-chips-32b-decentralized-training-run-and-the-era-of-experience-and-superintelligence/

5. https://importai.substack.com/p/import-ai-413-40b-distributed-training

6. https://www.youtube.com/watch?v=uRXrP_nfTSI

7. https://importai.substack.com/p/import-ai-375-gpt-2-five-years-later/comments

8. https://jack-clark.net

9. https://jack-clark.net/2024/12/03/import-ai-393-10b-distributed-training-run-china-vs-the-chip-embargo-and-moral-hazards-of-ai-development/

10. https://www.lesswrong.com/posts/iFrefmWAct3wYG7vQ/ai-labs-statements-on-governance

Quote: Yann LeCun

“Most of the infrastructure cost for AI is for inference: serving AI assistants to billions of people.”
— Yann LeCun, VP & Chief AI Scientist at Meta

Yann LeCun made this comment in response to the sharp drop in Nvidia’s share price on 27th January 2025, following the launch of DeepSeek R1, a new AI model developed by DeepSeek AI. This model was reportedly trained at a fraction of the cost incurred by leading AI labs such as OpenAI, Anthropic, and Google DeepMind, raising questions about whether Nvidia’s dominance in AI compute was at risk.

The market reaction stemmed from speculation that the training costs of cutting-edge AI models—previously seen as a key driver of Nvidia’s GPU demand—could decrease significantly with more efficient methods. However, LeCun pointed out that most AI infrastructure costs come not from training but from inference, the process of running AI models at scale to serve billions of users. This suggests that Nvidia’s long-term demand may remain strong, as inference still relies heavily on high-performance GPUs.
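
A back-of-the-envelope illustration of LeCun's point (all figures below are hypothetical, chosen only to show the shape of the economics):

```python
# Training is a (roughly) one-off cost; inference recurs with every interaction.
training_cost = 100e6          # hypothetical: $100M to train a frontier model
users = 1e9                    # hypothetical: a billion assistant users
queries_per_user_per_day = 5   # hypothetical usage rate
cost_per_query = 0.002         # hypothetical: $0.002 of compute per query

annual_inference = users * queries_per_user_per_day * 365 * cost_per_query
print(f"Inference: ${annual_inference/1e9:.1f}B/year "
      f"vs training: ${training_cost/1e6:.0f}M one-off")
# ~ $3.7B/year of inference against $100M of training: at assistant scale,
# serving dominates, which is LeCun's argument for sustained GPU demand.
```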

LeCun’s view aligned with analyses from key AI investors and industry leaders. He supported the argument made by Antoine Blondeau, co-founder of Alpha Intelligence Capital, who described Nvidia’s stock drop as “vastly overblown” and “NOT a ‘Sputnik moment’”, pushing back on the idea that Nvidia’s market position was under threat. Additionally, Jonathan Ross, founder of Groq, shared a video titled “Why $500B isn’t enough for AI,” explaining why AI compute demand remains insatiable despite efficiency gains.

This discussion underscores a critical aspect of AI economics: while training costs may drop with better algorithms and hardware, the sheer scale of inference workloads—powering AI assistants, chatbots, and generative models for billions of users—remains a dominant and growing expense. This supports the case for sustained investment in AI infrastructure, particularly in Nvidia’s GPUs, which continue to be the gold standard for inference at scale.

Infographic: Four critical DeepSeek enablers

The DeepSeek team has introduced several high-impact changes to Large Language Model (LLM) architecture to enhance performance and efficiency:

  1. Multi-Head Latent Attention (MLA): This mechanism enables the model to process multiple facets of input data simultaneously, improving both efficiency and performance. MLA reduces the memory required to compute a transformer’s attention by a factor of 7.5x to 20x, a breakthrough that makes large-scale AI applications more feasible. Unlike Flash Attention, which improves data organization in memory, MLA compresses the KV cache into a lower-dimensional space, significantly reducing memory usage—down to 5% to 13% of traditional attention mechanisms—while maintaining performance (see the sketch after this list).
  2. Mixture-of-Experts (MoE) Architecture: DeepSeek employs an MoE system that activates only a subset of its total parameters during any given task. For instance, in DeepSeek-V3, only 37 billion out of 671 billion parameters are active at a time, significantly reducing computational costs. This approach enhances efficiency and aligns with the trend of making AI models more compute-light, allowing freed-up GPU resources to be allocated to multi-modal processing, spatial intelligence, or genomic analysis. MoE models, as also leveraged by Mistral and other leading AI labs, allow for scalability while keeping inference costs manageable.
  3. FP8 Floating Point Precision: To enhance computational efficiency, DeepSeek-V3 utilizes FP8 floating point precision during training, which helps in reducing memory usage and accelerating computation. This follows a broader trend in AI to optimize training methodologies, potentially influencing the approach taken by U.S.-based LLM providers. Given China’s restricted access to high-end GPUs due to U.S. export controls, optimizations like FP8 and MLA are critical in overcoming hardware limitations.
  4. DeepSeek-R1 and Test-Time Compute Capabilities: DeepSeek-R1 is a model that leverages reinforcement learning (RL) to enable test-time compute, significantly improving reasoning capabilities. The model was trained using an innovative RL strategy, incorporating fine-tuned Chain of Thought (CoT) data and supervised fine-tuning (SFT) data across multiple domains. Notably, DeepSeek demonstrated that any sufficiently powerful LLM can be transformed into a high-performance reasoning model using only 800k curated training samples. This technique allows for rapid adaptation of smaller models, such as Qwen and Llama-70B, into competitive reasoners.
  5. Distillation to Smaller Models: The team has developed distilled versions of their models, such as DeepSeek-R1-Distill, which are fine-tuned on synthetic data generated by larger models. These distilled models contain fewer parameters, making them more efficient while retaining significant capabilities. DeepSeek’s ability to achieve comparable reasoning performance at a fraction of the cost of OpenAI’s models (5% of the cost, according to Pelliccione) has disrupted the AI landscape.
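
The following is a minimal PyTorch sketch of the low-rank KV-cache idea behind MLA, as referenced in item 1 above. The dimensions are illustrative, not DeepSeek's actual hyperparameters, and details such as MLA's decoupled rotary position embeddings are omitted:

```python
import torch

d_model, n_heads, d_head = 4096, 32, 128   # illustrative transformer sizes
d_latent = 512                             # compressed KV latent dimension

# Standard attention caches K and V per token: 2 * n_heads * d_head = 8,192 values.
# An MLA-style cache stores one shared latent per token: 512 values, ~6% of the
# standard cache, consistent with the 5%-13% range cited above.

W_down = torch.randn(d_model, d_latent)           # compress hidden state to latent
W_up_k = torch.randn(d_latent, n_heads * d_head)  # expand latent to per-head keys
W_up_v = torch.randn(d_latent, n_heads * d_head)  # expand latent to per-head values

def cache_token(h: torch.Tensor) -> torch.Tensor:
    """Store only the low-rank latent for one token's hidden state h."""
    return h @ W_down                             # shape: (d_latent,)

def expand_kv(latent: torch.Tensor):
    """Reconstruct full per-head keys and values from the cached latent."""
    k = (latent @ W_up_k).view(n_heads, d_head)
    v = (latent @ W_up_v).view(n_heads, d_head)
    return k, v

latent = cache_token(torch.randn(d_model))        # what actually sits in the cache
k, v = expand_kv(latent)                          # materialized at attention time
```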

The Impact of Open-Source Models:

DeepSeek’s success highlights a fundamental shift in AI development. Traditionally, leading-edge models have been closed-source and controlled by Western AI firms like OpenAI, Google, and Anthropic. However, DeepSeek’s approach, leveraging open-source components while innovating on training efficiency, has disrupted this dynamic. Pelliccione notes that DeepSeek now offers similar performance to OpenAI at just 5% of the cost, making high-quality AI more accessible. This shift pressures proprietary AI companies to rethink their business models and embrace greater openness.

Challenges and Innovations in the Chinese AI Ecosystem:

China’s AI sector faces major constraints, particularly in access to high-performance GPUs due to U.S. export restrictions. Yet, Chinese companies like DeepSeek have turned these challenges into strengths through aggressive efficiency improvements. MLA and FP8 precision optimizations exemplify how innovation can offset hardware limitations. Furthermore, Chinese AI firms, historically focused on scaling existing tech, are now contributing to fundamental advancements in AI research, signaling a shift towards deeper innovation.

The Future of AI Control and Adaptation:

DeepSeek-R1’s approach to training AI reasoners poses a challenge to traditional AI control mechanisms. Since reasoning capabilities can now be transferred to any capable model with fewer than a million curated samples, AI governance must extend beyond compute resources and focus on securing datasets, training methodologies, and deployment platforms. OpenAI has previously obscured Chain of Thought traces to prevent leakage, but DeepSeek’s open-weight release and published RL techniques have made such restrictions ineffective.

Broader Industry Context:

  • DeepSeek benefits from Western open-source AI developments, particularly Meta’s Llama model disclosures, which provided a foundation for its advancements. However, DeepSeek’s success also demonstrates that China is shifting from scaling existing technology to innovating at the frontier.
  • Open-source models like DeepSeek will see widespread adoption for enterprise and research applications, though Western businesses are unlikely to build their consumer apps on a Chinese API.
  • The AI innovation cycle is exceptionally fast, with breakthroughs assessed daily or weekly. DeepSeek’s advances are part of a rapidly evolving competitive landscape dominated by U.S. big tech players like OpenAI, Google, Microsoft, and Meta, who continue to push for productization and revenue generation. Meanwhile, Chinese AI firms, despite hardware and data limitations, are innovating at an accelerated pace and have proven capable of challenging OpenAI’s dominance.

These innovations collectively contribute to more efficient and effective LLMs, balancing performance with resource utilization while shaping the future of AI model development.

Sources: Global Advisors, Jack Clark – Anthropic, Antoine Blondeau, Alberto Pelliccione, infoq.com, medium.com, en.wikipedia.org, arxiv.org

Quote: Jack Clark

“The most surprising part of DeepSeek-R1 is that it only takes ~800k samples of ‘good’ RL reasoning to convert other models into RL-reasoners. Now that DeepSeek-R1 is available people will be able to refine samples out of it to convert any other model into an RL reasoner.” – Jack Clark, Anthropic

Jack Clark, co-founder of Anthropic, co-chair of Stanford University’s AI Index, and co-chair of the OECD working group on AI & Compute, shed light on the significance of DeepSeek-R1, a revolutionary AI reasoning model developed by China’s DeepSeek team. In an article posted to his newsletter on 27th January 2025, Clark highlighted that it only takes approximately 800k samples of “good” RL (Reinforcement Learning) reasoning to convert other models into RL-reasoners.

The Power of Fine-Tuning

DeepSeek-R1 is not just a powerful AI model; it also provides a framework for fine-tuning existing models to enhance their reasoning capabilities. By leveraging the 800k samples curated with DeepSeek-R1, researchers can refine any other model into an RL reasoner. This approach has been demonstrated by fine-tuning open-source models like Qwen and Llama using the same dataset.
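
A minimal sketch of what such a conversion looks like in practice, using the Hugging Face stack. The model name and data file are placeholders, and the real recipe involves careful curation and evaluation well beyond this outline:

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base_model = "Qwen/Qwen2.5-7B"           # placeholder: any strong open base model
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# Placeholder file: ~800k curated reasoning traces sampled from a strong reasoner.
dataset = load_dataset("json", data_files="reasoning_traces.jsonl", split="train")

def tokenize(row):
    # One training sequence = question, chain of thought, then final answer.
    text = row["question"] + row["chain_of_thought"] + row["answer"]
    return tokenizer(text, truncation=True, max_length=4096)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="distilled-reasoner", num_train_epochs=2,
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=16, bf16=True),
    train_dataset=dataset.map(tokenize, remove_columns=dataset.column_names),
    # Causal-LM collator: labels are the input tokens, no masking objective.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```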

Implications for AI Policy

The release of DeepSeek-R1 has significant implications for AI policy and control. As Clark notes, if you need fewer than a million samples to convert any model into a “thinker,” it becomes much harder to control AI systems. This is because the valuable data, including chains of thought from reasoning models, can be leaked or shared openly.

A New Era in AI Development

The availability of DeepSeek-R1 and its associated techniques has created a new era in AI development. With an open weight model floating around the internet, researchers can now bootstrap any other sufficiently powerful base model into being an AI reasoner. This has the potential to accelerate AI progress worldwide.

Key Takeaways:

  • Fine-tuning is key: DeepSeek-R1 demonstrates that fine-tuning existing models with a small amount of data (800k samples) can significantly enhance their reasoning capabilities.
  • Open-source and accessible: The model and its techniques are now available for anyone to use, making it easier for researchers to develop powerful AI reasoners.
  • Implications for control: The release of DeepSeek-R1 highlights the challenges of controlling AI systems, as valuable data can be leaked or shared openly.

Conclusion

DeepSeek-R1 has marked a significant milestone in AI development, showcasing the power of fine-tuning and open-source collaboration. As researchers continue to build upon this work, we can expect to see even more advanced AI models emerge, with far-reaching implications for various industries and applications.

Quote: Marc Andreessen

“DeepSeek-R1 is AI’s Sputnik moment.” – Marc Andreessen, Andreessen Horowitz

In a 27th January 2025 X statement that sent shockwaves through the tech community, venture capitalist Marc Andreessen declared that DeepSeek’s R1 AI reasoning model is “AI’s Sputnik moment.” This analogy draws parallels between China’s breakthrough in artificial intelligence and the Soviet Union’s historic achievement of launching the first satellite into orbit in 1957.

The Rise of DeepSeek-R1

DeepSeek, a Chinese AI lab, has made headlines with its open-source release of R1, a revolutionary AI reasoning model that is not only more cost-efficient but also poses a significant threat to the dominance of Western tech giants. The model’s ability to reduce compute requirements by half without sacrificing accuracy has sent shockwaves through the industry.

A New Era in AI

The release of DeepSeek-R1 marks a turning point in the AI arms race, as it challenges the long-held assumption that only a select few companies can compete in this space. By making its research open-source, DeepSeek is empowering anyone to build their own version of R1 and tailor it to their needs.

Implications for Megacap Stocks

The success of DeepSeek-R1 has significant implications for megacap stocks like Microsoft, Alphabet, and Amazon, which have long relied on proprietary AI models to maintain their technological advantage. The open-source nature of R1 threatens to wipe out this advantage, potentially disrupting the business models of these tech giants.

Nvidia’s Nightmare

The news comes as a blow to Nvidia CEO Jensen Huang, who is ramping up production of his Blackwell microchip, a more advanced version of his industry-leading Hopper-series H100s. Nvidia controls roughly 90% of the AI semiconductor market, but R1’s ability to reduce compute requirements may render these chips less essential.

A New Era of Innovation

Perplexity AI founder Aravind Srinivas praised DeepSeek’s team for catching up to the West by employing clever solutions, including the adoption of 8-bit floating-point (FP8) precision. This innovation not only reduces costs but also demonstrates that China is no longer just a copycat, but a leader in AI innovation.

Quote: Jeffrey Emanuel

“With R1, DeepSeek essentially cracked one of the holy grails of AI: getting models to reason step-by-step without relying on massive supervised datasets.” – Jeffrey Emanuel

Jeffrey Emanuel’s statement (“The Short Case for Nvidia Stock” – 25th January 2025) highlights a groundbreaking achievement in AI with DeepSeek’s R1 model, which has made significant strides in enabling step-by-step reasoning without the traditional reliance on vast supervised datasets:

  1. Innovation Through Reinforcement Learning (RL):
    • The R1 model employs reinforcement learning, a method where models learn through trial and error with feedback. This approach reduces the dependency on large labeled datasets typically required for training, making it more efficient and accessible (a simplified sketch of such a feedback signal follows this list).
  2. Advanced Reasoning Capabilities:
    • R1 excels in tasks requiring logical inference and mathematical problem-solving. Its ability to demonstrate step-by-step reasoning is crucial for complex decision-making processes, applicable across various industries from autonomous systems to intricate problem-solving tasks.
  3. Efficiency and Accessibility:
    • By utilizing RL and knowledge distillation techniques, R1 efficiently transfers learning to smaller models. This democratizes AI technology, allowing global researchers and developers to innovate without proprietary barriers, thus expanding the reach of advanced AI solutions.
  4. Impact on Data-Scarce Industries:
    • The model’s capability to function with limited data is particularly beneficial in sectors like medicine and finance, where labeled data is scarce due to privacy concerns or high costs. This opens doors for more ethical and feasible AI applications in these fields.
  5. Competitive Landscape and Innovation:
    • R1 positions itself as a competitor to models like OpenAI’s o1, signaling a shift towards accessible AI technology. This fosters competition and encourages other companies to innovate similarly, driving advancements across the AI landscape.
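
To illustrate the kind of feedback signal reinforcement learning can use here, as noted in item 1 above, here is a toy simplification in the spirit of R1-style rule-based rewards (the details are ours, not DeepSeek's):

```python
import re

def reward(model_output: str, ground_truth: str) -> float:
    """Score a sampled answer with rules alone - no human-labeled rationale needed."""
    # Small bonus for showing work inside <think> tags (a format reward).
    format_bonus = 0.1 if re.search(r"<think>.+?</think>", model_output, re.S) else 0.0
    # Correctness checked programmatically against the verifiable final answer.
    match = re.search(r"\\boxed\{([^}]*)\}", model_output)
    answer = match.group(1).strip() if match else ""
    correctness = 1.0 if answer == ground_truth.strip() else 0.0
    return correctness + format_bonus

print(reward("<think>2 + 2 = 4</think> \\boxed{4}", "4"))   # 1.1
# An RL loop (e.g. a PPO/GRPO-style update) then raises the probability of
# high-reward outputs: trial and error with feedback, not supervised labels.
```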

In essence, DeepSeek’s R1 model represents a significant leap in AI efficiency and accessibility, offering profound implications for various industries by reducing data dependency and enhancing reasoning capabilities.
