
Global Advisors | Quantified Strategy Consulting

Quote: Kristalina Georgieva – Managing Director, IMF

“What is being eliminated [by AI] are often tasks done by new entries into the labor force – young people. Conversely, people with higher skills get better pay, spend more locally, and that ironically increases demand for low-skill jobs. This is bad news for recent … graduates.” – Kristalina Georgieva – Managing Director, IMF

Kristalina Georgieva, Managing Director of the International Monetary Fund (IMF), delivered this stark observation during a World Economic Forum Town Hall in Davos on 23 January 2026, amid discussions on ‘Dilemmas around Growth’. Speaking as AI’s rapid adoption accelerates, she highlighted a dual dynamic: the elimination of routine entry-level tasks traditionally filled by young graduates, coupled with productivity gains for higher-skilled workers that paradoxically boost demand for low-skill service roles.1,2,5

Context of the Quote

Georgieva’s remarks form part of the IMF’s latest research, which estimates that AI will impact 40% of global jobs and 60% in advanced economies through enhancement, elimination, or transformation.1,3 She described AI as a ‘tsunami hitting the labour market’, emphasising its immediate effects: one in ten jobs in advanced economies already demands new skills, often IT-related, creating wage pressures on the middle class while entry-level positions vanish.1,2,5 This ‘accordion of opportunities’ sees high-skill workers earning more, spending locally, and sustaining low-skill jobs like hospitality, but leaves recent graduates struggling to enter the workforce.5

Backstory on Kristalina Georgieva

Born in 1953 in Sofia, Bulgaria, Kristalina Georgieva rose from communist-era academia to global economic leadership. She earned a PhD in economic modelling and worked as an economist before Bulgaria’s democratic transition. Joining the World Bank in 1993, she climbed through senior roles, later serving as Commissioner for International Cooperation, Humanitarian Aid, and Crisis Response at the European Commission (2010-2014). Appointed IMF Managing Director in 2019, she navigated the COVID-19 crisis, mobilising the Fund’s roughly USD 1 trillion lending capacity and advocating fiscal resilience. Georgieva’s tenure has focused on inequality, climate finance, and digital transformation, making her an authoritative voice on AI’s socioeconomic implications.3,5

Leading Theorists on AI and Labour Markets

The theoretical foundations of Georgieva’s analysis trace to pioneering economists dissecting technology’s job impacts.

  • David Autor: MIT economist whose ‘task-based framework’ (with Frank Levy) posits jobs as bundles of tasks, some automatable. Autor’s research shows AI targets routine cognitive tasks, polarising labour markets by hollowing out middle-skill roles while boosting high- and low-skill demand, a ‘polarisation’ mirroring Georgieva’s entry-level concerns.3
  • Erik Brynjolfsson and Andrew McAfee: MIT scholars and authors of The Second Machine Age, they argue AI enables ‘recombinant innovation’, automating cognitive work unlike prior mechanisation. Their work warns of ‘winner-takes-all’ dynamics exacerbating inequality without policy interventions like reskilling, aligning with IMF calls for adaptability training.3
  • Daron Acemoglu: MIT Nobel laureate (2024) who, with Pascual Restrepo, models automation’s ‘displacement vs productivity effects’. Their framework predicts AI displaces routine tasks but creates complementary roles; however, without incentives for human-AI collaboration, net job losses loom for low-skill youth.5

These theorists underpin IMF models, stressing that AI’s net employment effect hinges on policy: Northern Europe’s success in ‘learning how to learn’ exemplifies adaptive education over rigid skills training.5

Broader Implications

Georgieva urges proactive measures, including reskilling youth, bolstering social safety nets, and regulating AI for inclusivity, to avert deepened inequality. Emerging markets face steeper skills gaps, risking divergence from advanced economies.1,3,5 Her personal embrace of tools like Microsoft Copilot underscores individual agency, yet systemic reform remains essential for equitable growth.

References

1. https://www.businesstoday.in/wef-2026/story/wef-summit-davos-2026-ai-jobs-workers-middle-class-labour-market-imf-kristalina-georgieva-512774-2026-01-24

2. https://fortune.com/2026/01/23/imf-chief-warns-ai-tsunami-entry-level-jobs-gen-z-middle-class/

3. https://globaladvisors.biz/2026/01/23/quote-kristalina-georgieva-managing-director-imf/

4. https://www.youtube.com/watch?v=4ANV7yuaTuA

5. https://www.weforum.org/podcasts/meet-the-leader/episodes/ai-skills-global-economy-imf-kristalina-georgieva/

Quote: Kristalina Georgieva – Managing Director, IMF

“Is the labour market ready [for AI]? The honest answer is no. Our study shows that already in advanced economies, one in ten jobs require new skills.” – Kristalina Georgieva – Managing Director, IMF

Kristalina Georgieva, Managing Director of the International Monetary Fund (IMF), delivered this stark assessment during a World Economic Forum town hall in Davos in January 2026, amid discussions on growth dilemmas in an AI-driven era.1,3,4 Her words underscore the IMF’s latest research revealing that artificial intelligence is already reshaping labour markets, with immediate implications for employment and skills development worldwide.5

Who is Kristalina Georgieva?

Born in 1953 in Bulgaria, Kristalina Georgieva rose through the ranks of international finance with a career marked by economic expertise and crisis leadership. Holding a PhD in economic modelling, she began at the World Bank in 1993 and advanced through increasingly senior roles. She served as European Commission Vice-President for Budget and Human Resources from 2014 to 2016, and as CEO of the World Bank Group from 2017. Appointed IMF Managing Director in 2019, she navigated the institution through the COVID-19 pandemic, the global inflation surge, and geopolitical shocks, advocating for fiscal resilience and inclusive growth.3,5 Georgieva’s tenure has emphasised data-driven policy, particularly on technology’s societal impacts, making her a pivotal voice on AI’s economic ramifications.1

The Context of the Quote

Spoken at the WEF 2026 Town Hall on ‘Dilemmas around Growth’, the quote reflects IMF analysis showing AI affecting 40% of global jobs (enhanced, eliminated, or transformed), rising to 60% in advanced economies.3,4 Georgieva highlighted that in advanced economies, one in ten jobs already requires new skills, often IT-related, creating supply shortages.5 She likened AI’s impact on entry-level roles to a ‘tsunami’, warning of heightened risks for young workers and graduates as routine tasks vanish.1,2 Despite productivity gains, which could boost global growth by 0.1% to 0.8%, uneven distribution exacerbates inequality, with low-income countries facing only 20-26% exposure yet lacking adaptation infrastructure.4

Leading Theorists on AI and Labour Markets

The IMF’s task-based framework draws from foundational work by economists like David Autor, who pioneered the ‘task approach’ in labour economics. Autor’s research, with co-authors like Frank Levy, posits that jobs consist of discrete tasks, some automatable (routine cognitive or manual) and others not (non-routine creative or interpersonal). AI, unlike prior automation targeting physical routines, encroaches on cognitive tasks, polarising labour markets by hollowing out middle-skill roles.3

Erik Brynjolfsson and Andrew McAfee, MIT scholars and authors of Race Against the Machine (2011) and The Second Machine Age (2014), argue AI heralds a ‘qualitative shift’, automating high-skill analytical work previously safe from machines. Their studies predict widened inequality without intervention, as gains accrue to capital owners and superstars while displacing median workers. Recent IMF-aligned research echoes this, noting AI’s dual potential for productivity surges and job reshaping.3,5

Other influencers include Carl Benedikt Frey and Michael Osborne, whose 2013 Oxford study estimated 47% of US jobs at high automation risk, catalysing global discourse. Their work influenced IMF models, emphasising reskilling urgency.3 Georgieva advocates policies inspired by these theorists: massive investment in adaptable skills (‘learning how to learn’), as seen in Nordic models like Finland and Sweden, where flexibility buffers disruption.5 Data shows a 1% rise in new skills correlates with 1.3% overall employment growth, countering fears of net job loss.5

Broader Implications

Georgieva’s warning arrives amid economic fragmentation: trade tensions, US-China rivalry, and sluggish productivity (global growth at 3.3% versus pre-pandemic 3.8%).5 AI could reverse this if harnessed equitably, but demands proactive measures: reskilling for vulnerable youth, social protections, and regulatory frameworks to distribute gains. Advanced economies must lead, while supporting emerging markets to avoid an ‘accordion of opportunities’ that expands in the rich world and contracts elsewhere.4 Her call to action is clear: policymakers and businesses must use IMF insights to prepare, not react.

References

1. https://fortune.com/2026/01/23/imf-chief-warns-ai-tsunami-entry-level-jobs-gen-z-middle-class/

2. https://timesofindia.indiatimes.com/education/careers/news/ai-is-hitting-entry-level-jobs-like-a-tsunami-imf-chief-kristalina-georgieva-urges-students-to-prepare-for-change/articleshow/127381917.cms

3. https://globaladvisors.biz/2026/01/23/quote-kristalina-georgieva-managing-director-imf/

4. https://www.weforum.org/stories/2026/01/live-from-davos-2026-what-to-know-on-day-2/

5. https://www.weforum.org/podcasts/meet-the-leader/episodes/ai-skills-global-economy-imf-kristalina-georgieva/

Term: Vibe coding

“Vibe coding is an AI-driven software development approach where users describe desired app features in natural language (the “vibe”), and a Large Language Model (LLM) generates the functional code.” – Vibe coding

Vibe coding is an AI-assisted software development technique where developers describe project goals or features in natural language prompts to a large language model (LLM), which generates the source code; the developer then evaluates functionality through testing and iteration without reviewing, editing, or fully understanding the code itself.1,2

This approach, distinct from traditional AI pair programming or code assistants, emphasises “giving in to the vibes” by focusing on outcomes, rapid prototyping, and conversational refinement rather than code structure or correctness.1,3 Developers act as prompters, guides, testers, and refiners, shifting from manual implementation to high-level direction—e.g., instructing an LLM to “create a user login form” for instant code generation.2 It operates at two levels: a tight iterative loop for refining specific code via feedback, and a broader lifecycle from concept to deployed app.2

Key characteristics include:

  • Natural language as input: Builds on the idea that “the hottest new programming language is English,” bypassing syntax knowledge.1
  • No code inspection: Accepting AI output blindly, verified only by execution results—programmer Simon Willison notes that reviewing code makes it mere “LLM as typing assistant,” not true vibe coding.1
  • Applications: Ideal for prototypes (e.g., Andrej Karpathy’s MenuGen), proofs-of-concept, experimentation, and automating repetitive tasks; less suited for production without added review.1,3
  • Comparisons to traditional coding:
    • Code creation: manual, line-by-line (traditional) versus AI-generated from prompts (vibe coding)2
    • Developer role: architect, implementer, and debugger versus prompter, tester, and refiner2,3
    • Expertise required: high (languages, syntax) versus lower (functional goals)2
    • Speed: slower and methodical versus faster for prototypes2
    • Error handling: manual debugging versus conversational feedback2
    • Maintainability: relies on developer skill and practices versus AI quality and testing2,3

Tools supporting vibe coding include Google AI Studio for prompt-to-app prototyping, Firebase Studio for app blueprints, Gemini Code Assist for IDE integration, GitHub Copilot, and Microsoft offerings—lowering barriers for non-experts while boosting pro efficiency.2,3 Critics highlight risks like unmaintainable code or security issues in production, stressing the need for human oversight.3,6
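
To make the workflow concrete, the following is a minimal Python sketch of the prompt-run-refine loop described above. It assumes a hypothetical llm_generate() wrapper around whichever model API is in use; the function names and prompt wording are illustrative only and are not drawn from the tools or sources cited in this article.

    import subprocess

    def llm_generate(prompt: str) -> str:
        """Hypothetical placeholder: call an LLM API of your choice and return generated source code."""
        raise NotImplementedError("wire this to a real LLM API")

    def vibe_code(goal: str, max_rounds: int = 5) -> str:
        """Describe the goal, run what comes back, and feed the results into the next prompt."""
        prompt = f"Write a complete Python script that does the following: {goal}"
        code = llm_generate(prompt)
        for _ in range(max_rounds):
            with open("app.py", "w") as f:
                f.write(code)
            result = subprocess.run(["python", "app.py"], capture_output=True, text=True)
            if result.returncode == 0:
                # Judged by behaviour (it runs), not by reading the code
                return code
            # Conversational refinement: the error message becomes the next "vibe"
            prompt = (f"The previous script failed with:\n{result.stderr}\n"
                      f"Please return a corrected, complete script. Goal: {goal}")
            code = llm_generate(prompt)
        return code

In practice the tools listed above wrap this loop in richer interfaces, but the underlying pattern of prompt, execute, and conversational correction is the same.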

Best related strategy theorist: Andrej Karpathy. Karpathy coined “vibe coding” in February 2025 via a widely shared post, describing it as “fully giv[ing] in to the vibes, embrac[ing] exponentials, and forget[ting] that the code even exists”—exemplified by his MenuGen prototype, built entirely via LLM prompts with natural language feedback.1 This built on his 2023 claim that English supplants programming languages due to LLM prowess.1

Born in 1986 in Bratislava, Czechoslovakia (now Slovakia), Karpathy earned a BSc in Computer Science and Physics from the University of Toronto (2009), an MSc from the University of British Columbia (2011), and a PhD in Computer Science from Stanford University (2015) under Fei-Fei Li, working at the intersection of computer vision and language. His doctoral-era work included recurrent neural networks (RNNs) for sequence modelling, notably char-RNN for text generation.1 Post-PhD, he was a founding research scientist at OpenAI (2015–2017), then Director of AI at Tesla (2017–2022), leading Autopilot vision and scaling ConvNets to massive video data for self-driving cars. He returned to OpenAI in 2023 to work on GPT training infrastructure before departing in 2024 to launch Eureka Labs (AI education) and advise AI firms.1,3 Karpathy’s career embodies scaling AI paradigms, making vibe coding a logical evolution: from low-level models to natural language commanding complex software, democratising development while embracing AI’s “exponentials.”1,2,3

References

1. https://en.wikipedia.org/wiki/Vibe_coding

2. https://cloud.google.com/discover/what-is-vibe-coding

3. https://news.microsoft.com/source/features/ai/vibe-coding-and-other-ways-ai-is-changing-who-can-build-apps-and-how/

4. https://www.ibm.com/think/topics/vibe-coding

5. https://aistudio.google.com/vibe-code

6. https://stackoverflow.blog/2026/01/02/a-new-worst-coder-has-entered-the-chat-vibe-coding-without-code-knowledge/

7. https://uxplanet.org/i-tested-5-ai-coding-tools-so-you-dont-have-to-b229d4b1a324

Term: Context engineering

“Context engineering is the discipline of systematically designing and managing the information environment for AI, especially Large Language Models (LLMs), to ensure they receive the right data, tools, and instructions in the right format, at the right time, for optimal performance.” – Context engineering

Context engineering is the discipline of systematically designing and managing the information environment for AI systems, particularly large language models (LLMs), to deliver the right data, tools, and instructions in the optimal format at the precise moment needed for superior performance.1,3,5

Comprehensive Definition

Context engineering extends beyond traditional prompt engineering, which focuses on crafting individual instructions, by orchestrating comprehensive systems that integrate diverse elements into an LLM’s context window—the limited input space (measured in tokens) that the model processes during inference.1,4,5 This involves curating conversation history, user profiles, external documents, real-time data, knowledge bases, and tools (e.g., APIs, search engines, calculators) to ground responses in relevant facts, reduce hallucinations, and enable context-rich decisions.1,2,3

Key components include:

  • Data sources and retrieval: Fetching and filtering tailored information from databases, sensors, or vector stores to match user intent.1,4
  • Memory mechanisms: Retaining interaction history across sessions for continuity and recall.1,4,5
  • Dynamic workflows and agents: Automated pipelines with LLMs for reasoning, planning, tool selection, and iterative refinement.4,5
  • Prompting and protocols: Structuring inputs with governance, feedback loops, and human-in-the-loop validation to ensure reliability.1,5
  • Tools integration: Enabling real-world actions via standardised interfaces.1,3,4

Gartner defines it as “designing and structuring the relevant data, workflows and environment so AI systems can understand intent, make better decisions and deliver contextual, enterprise-aligned outcomes—without relying on manual prompts.”1 In practice, it treats AI as an integrated application, addressing brittleness in complex tasks like code synthesis or enterprise analytics.1

The Six Pillars of Context Engineering

As outlined in technical frameworks, these interdependent elements form the core architecture:4

  • Agents: Orchestrate tasks, decisions, and tool usage.
  • Query augmentation: Refine inputs for precision.
  • Retrieval: Connect to external knowledge bases.
  • Prompting: Guide model reasoning.
  • Memory: Preserve history and state.
  • Tools: Facilitate actions beyond generation.

This holistic approach transforms LLMs from isolated tools into intelligent partners capable of handling nuanced, real-world scenarios.1,3
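
As an illustration of how these pillars combine in practice, here is a minimal Python sketch of context assembly. The helper names (retrieve_documents, load_memory, build_context) are hypothetical placeholders rather than APIs from any framework cited here, and a production system would rank, compress, and validate each element rather than simply concatenating it.

    def retrieve_documents(query: str, k: int = 3) -> list[str]:
        """Hypothetical retrieval step: fetch the k most relevant snippets from a vector store."""
        return []

    def load_memory(session_id: str) -> list[str]:
        """Hypothetical memory step: recall prior turns retained across sessions."""
        return []

    TOOL_SPECS = [
        {"name": "search", "description": "Look up current facts on the web"},
        {"name": "calculator", "description": "Evaluate arithmetic expressions"},
    ]

    def build_context(query: str, session_id: str, token_budget: int = 4000) -> str:
        """Curate instructions, retrieved facts, memory and tool specs into one context window."""
        parts = [
            "System: You are an enterprise analytics assistant. Ground answers in the documents provided.",
            "Tools available: " + ", ".join(f"{t['name']} ({t['description']})" for t in TOOL_SPECS),
            "Relevant documents:\n" + "\n".join(retrieve_documents(query)),
            "Conversation history:\n" + "\n".join(load_memory(session_id)),
            "User: " + query,
        ]
        context = "\n\n".join(parts)
        # Crude budget guard: real systems rank, filter and compress rather than truncate
        return context[: token_budget * 4]  # roughly 4 characters per token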

Best Related Strategy Theorist: Christian Szegedy

Christian Szegedy, a pioneering AI researcher, is the strategist most closely associated with context engineering due to his foundational work on the deep-learning architectures behind attention mechanisms—the core architectural innovation enabling modern LLMs to dynamically weigh and manage context for optimal inference.1

Biography

Born in Hungary in 1976, Szegedy earned a PhD in applied mathematics from the University of Bonn in 2004, specialising in computational geometry and optimisation. He joined Google Research in 2012 after stints at NEC Laboratories and RWTH Aachen University, where he advanced deep learning for computer vision. Szegedy co-authored the seminal 2014 paper “Going Deeper with Convolutions” (the Inception architecture), which introduced multi-scale processing to capture contextual hierarchies in images, earning widespread adoption in vision models.

The attention mechanism at the heart of modern LLMs was introduced for neural machine translation by Bahdanau and colleagues (2014) and generalised in the Transformer architecture of “Attention is All You Need” (Vaswani et al., 2017); Szegedy’s earlier Google work, including “Rethinking the Inception Architecture for Computer Vision”, forms part of the architectural lineage on which these context-weighting approaches built. Related Google research on scheduled sampling (Bengio et al., 2015) explored dynamic context injection during training, foreshadowing inference-time context engineering.

Relationship to Context Engineering

Attention mechanisms directly underpin context engineering by allowing LLMs to prioritise “the right information at the right time” within token limits, scaling from static prompts to dynamic systems with retrieval, memory, and tools.3,4,5 In agentic workflows, attention curates evolving contexts (e.g., filtering agent trajectories), as seen in Anthropic’s strategies.5 Szegedy has advocated context-aware architectures, an emphasis echoed in frameworks from Weaviate and LangChain, where retrieval-augmented generation (RAG) relies on attention to integrate external data seamlessly.4,7 This vision positions context as a “first-class design element,” evolving prompt engineering into the systemic discipline now termed context engineering.1 Since leaving Google, Szegedy has continued shaping scalable AI as a researcher and advisor focused on context-optimised models.

References

1. https://intuitionlabs.ai/articles/what-is-context-engineering

2. https://ramp.com/blog/what-is-context-engineering

3. https://www.philschmid.de/context-engineering

4. https://weaviate.io/blog/context-engineering

5. https://www.anthropic.com/engineering/effective-context-engineering-for-ai-agents

6. https://www.llamaindex.ai/blog/context-engineering-what-it-is-and-techniques-to-consider

7. https://blog.langchain.com/context-engineering-for-agents/

Podcast – The Real AI Signal from Davos 2026

While the headlines from Davos were dominated by geopolitical conflict and debates on AGI timelines and asset bubbles, a different signal emerged from the noise. It wasn’t about if AI works, but how it is being ruthlessly integrated into the real economy.

In our latest podcast, we break down the “Diffusion Strategy” defining 2026.

3 Key Takeaways:

  1. China and the “Global South” are trying to leapfrog: While the West debates regulation, emerging economies are treating AI as essential infrastructure.
    • China has set a goal for 70% AI diffusion by 2027.
    • The UAE has mandated AI literacy in public schools from K-12.
    • Rwanda is using AI to quadruple its healthcare workforce.
  2. The Rise of the “Agentic Self”: We aren’t just using chatbots anymore; we are employing agents. Entrepreneur Steven Bartlett revealed he has established a “Head of Experimentation and Failure” to use AI to disrupt his own business before competitors do. Musician will.i.am argued that in an age of predictive machines, humans must cultivate their “agentic self” to handle the predictable, while remaining unpredictable themselves.
  3. Rewiring the Core: Uber’s CEO Dara Khosrowshahi noted the difference between an “AI veneer” and a fundamental rewire. It’s no longer about summarising meetings; it’s about autonomous agents resolving customer issues without scripts.

The Global Advisors Perspective: Don’t wait for AGI. The current generation of models is sufficient to drive massive value today. The winners will be those who control their “sovereign capabilities” – embedding their tacit knowledge into models they own.

Read our original perspective here – https://with.ga/w1bd5

Listen to the full breakdown here – https://with.ga/2vg0z

Term: Prompt engineering

“Prompt engineering is the practice of designing, refining, and optimizing the instructions (prompts) given to generative AI models to guide them into producing accurate, relevant, and desired outputs.” – Prompt engineering

Prompt engineering is the practice of designing, refining, and optimising instructions—known as prompts—given to generative AI models, particularly large language models (LLMs), to elicit accurate, relevant, and desired outputs.1,2,3,7

This process involves creativity, trial and error, and iterative refinement of phrasing, context, formats, words, and symbols to guide AI behaviour effectively, making applications more efficient, flexible, and capable of handling complex tasks.1,4,5 Without precise prompts, generative AI often produces generic or suboptimal responses, as models lack fixed commands and rely heavily on input structure to interpret intent.3,6

Key Benefits

  • Improved user experience: Users receive coherent, bias-mitigated responses even with minimal input, such as tailored summaries for legal documents versus news articles.1
  • Increased flexibility: Domain-neutral prompts enable reuse across processes, like identifying inefficiencies in business units without context-specific data.1
  • Subject matter expertise: Prompts direct AI to reference correct sources, e.g., generating medical differential diagnoses from symptoms.1
  • Enhanced security: Helps mitigate prompt injection attacks by refining logic in services like chatbots.2

Core Techniques

  • Generated knowledge prompting: AI first generates relevant facts (e.g., deforestation effects like climate change and biodiversity loss) before completing tasks like essay writing.1
  • Contextual refinement: Adding role-playing (e.g., “You are a sales assistant”), location, or specifics to vague queries like “Where to purchase a shirt.”1,5
  • Iterative testing: Trial-and-error to optimise for accuracy, often encapsulated in base prompts for scalable apps.2,5

Prompt engineering bridges end-user inputs with models, acting as a skill for developers and a step in AI workflows, applicable in fields like healthcare, cybersecurity, and customer service.2,5
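
The short Python sketch below illustrates two of the techniques above, role assignment and generated knowledge prompting, plus an iterative refinement step. The llm() function is a hypothetical stand-in for any chat-completion call, and the prompt wording is illustrative rather than prescriptive.

    def llm(prompt: str) -> str:
        """Hypothetical placeholder for a call to a generative AI model."""
        raise NotImplementedError("connect to a model of your choice")

    def generated_knowledge_answer(question: str) -> str:
        """Generated knowledge prompting: surface facts first, then answer with them in context."""
        facts = llm(f"List five concise, relevant facts about: {question}")
        prompt = (
            "You are a subject-matter expert writing for a general audience.\n"  # role assignment
            f"Use these facts:\n{facts}\n"
            f"Answer step by step: {question}"
        )
        return llm(prompt)

    def refine(question: str, answer: str, critique: str) -> str:
        """Iterative testing: feed an observed shortcoming back into the next prompt."""
        return llm(
            f"Question: {question}\nPrevious answer: {answer}\n"
            f"Problem observed: {critique}\nRewrite the answer to fix this."
        )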

Best Related Strategy Theorist: Lilian Weng

Lilian Weng, who led AI safety research at OpenAI, stands out as the premier theorist linking prompt engineering to strategic AI deployment. Her widely read 2023 Lil’Log post, “Prompt Engineering”, systematised techniques like chain-of-thought prompting, few-shot learning, and self-consistency, providing a foundational framework that influenced industry practices and tools from AWS to Google Cloud.1,4

Weng’s relationship to the term stems from her role in advancing reliable LLM interactions post-ChatGPT’s 2022 launch. At OpenAI, she pioneered safety-aligned prompting strategies, addressing hallucinations and biases—core challenges in generative AI—making her work indispensable for enterprise-scale optimisation.1,2 Her guide emphasises strategic structuring (e.g., role assignment, step-by-step reasoning) as a “roadmap” for desired outputs, directly shaping modern definitions and techniques like generated knowledge prompting.1,4

Biography: Born in China, Weng earned a PhD in Machine Learning from McGill University (2015), focusing on computational neuroscience and reinforcement learning. She joined OpenAI in 2018 as a research scientist, rising to lead long-term safety efforts amid rapid AI scaling. Previously at Microsoft Research (2016–2018), she specialised in hierarchical RL for robotics. Weng’s contributions extend to publications on emergent abilities in LLMs and AI alignment, with her GitHub repository on prompting garnering millions of views. As of 2026, she continues shaping ethical AI strategies, blending theoretical rigour with practical engineering.7

References

1. https://aws.amazon.com/what-is/prompt-engineering/

2. https://www.coursera.org/articles/what-is-prompt-engineering

3. https://uit.stanford.edu/service/techtraining/ai-demystified/prompt-engineering

4. https://cloud.google.com/discover/what-is-prompt-engineering

5. https://www.oracle.com/artificial-intelligence/prompt-engineering/

6. https://genai.byu.edu/prompt-engineering

7. https://en.wikipedia.org/wiki/Prompt_engineering

8. https://www.ibm.com/think/topics/prompt-engineering

9. https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-prompt-engineering

10. https://github.com/resources/articles/what-is-prompt-engineering

Quote: Matt Sheehan

“The Chinese chip industry has done an amazing job of catching up. I think they’ve probably exceeded most people’s expectations in this.” – Matt Sheehan – Carnegie Endowment for International Peace

Matt Sheehan’s remark captures a central surprise of the last decade in geopolitics and technology: the speed and resilience of China’s semiconductor ascent under heavy external pressure.

At the heart of this story is China’s effort to close what used to look like an unbridgeable gap with the United States, Taiwan, South Korea, Japan, and Europe in advanced chips, tools, and know-how. National programs such as “Made in China 2025” explicitly targeted semiconductors as a strategic chokepoint, aiming to localize production and reduce dependence on foreign suppliers in logic chips, memory, and manufacturing equipment.2 This was initially greeted with skepticism in many Western capitals and boardrooms, where the prevailing assumption was that export controls, restrictions on advanced tools, and China’s own technological lag would keep it permanently behind the frontier.

Sheehan’s observation points to where expectations proved wrong. Despite sweeping export controls on leading-edge lithography tools and high-end AI chips, Chinese firms have made faster-than-anticipated progress across the stack:

  • In manufacturing equipment, domestic suppliers have rapidly increased their share in key process steps such as etching and thin-film deposition.1,4 By 2025, the share of domestically developed semiconductor equipment in China’s fabs had risen to about 35%, overshooting Beijing’s 30% target for that year.1 Local champions like Naura and AMEC have pushed into complex tools, delivering CVD, ALD, and other thin-film equipment for advanced memory and logic production lines used by major Chinese foundries such as SMIC and Huahong.1,4
  • In capital investment and ecosystem depth, mainland China has become the largest market in the world for semiconductor manufacturing equipment, with projected spending around $39 billion in 2026—more than Taiwan or South Korea.4 This spending fuels a dense local ecosystem of design houses, foundries, packaging firms, and toolmakers that did not exist at comparable scale a decade earlier.
  • In AI and accelerator chips, Chinese firms have developed increasingly capable domestic alternatives even as they still seek access to high-end Nvidia GPUs. China’s AI sector drew global attention in 2025 with breakthroughs by firms such as DeepSeek, whose large models forced global competitors to reassess Chinese capabilities.5 At the same time, Beijing has leveraged its regulatory power to steer large platforms such as Alibaba and ByteDance toward a mix of imported and home-grown accelerators, explicitly tying access to Nvidia chips (like the H200) to parallel purchases of Chinese solutions.3,5 This policy mix illustrates how industrial strategy and geopolitical bargaining are being fused to accelerate domestic chip progress while still tapping global technology where possible.3
  • In memory and specialty devices, companies like Yangtze Memory Technologies (YMTC) have moved up the learning curve in 3D NAND and are investing heavily in further technology upgrades, DRAM development, and forward-looking R&D that demand increasingly sophisticated domestically supplied equipment.1,4 These investments both absorb and shape the capabilities of the Chinese toolmakers that Sheehan has in mind.1,4

Sheehan’s quote is also rooted in the broader geopolitical context he studies: the U.S.–China technology rivalry, where semiconductors are the most strategically sensitive terrain. Washington’s use of export controls on advanced lithography, EDA tools, and high-end AI chips was designed to “slow the pace” of Chinese military-relevant innovation. The expectation in many Western policy circles was that these controls would significantly impede Chinese progress. Instead, controls have:

  • Reshaped China’s development path—from importing at the frontier to building domestically at one or two nodes behind it.
  • Accelerated Beijing’s urgency to build local capability in areas once left to foreign suppliers, such as inspection and metrology tools, deposition, and etch.1,4
  • Incentivized enormous sunk investment and political attention to semiconductors in China’s five-year plans, where AI and chips now sit at the very center of national strategy.5

Although China still faces real bottlenecks—most notably in extreme ultraviolet (EUV) lithography, highly specialized tools, and some advanced process nodes—its system-level catch-up has been broader and quicker than many analysts predicted.2,5 That is the gap between expectation and reality that Sheehan is highlighting.

Matt Sheehan: The voice behind the quote

Matt Sheehan is a leading analyst of the intersection between China, technology, and global politics. At the Carnegie Endowment for International Peace, he has focused on how AI, semiconductors, and data flows shape the strategic competition between the United States and China. His work sits at the frontier of what is often called “digital geopolitics”: the study of how code, chips, and compute influence power, security, and economic advantage.

Sheehan’s analysis is distinctive for three reasons:

  • He combines on-the-ground understanding of Chinese policy and industry with close attention to U.S. regulatory moves, giving him a bilateral vantage point.
  • He approaches policy not just through national security, but also through the innovation ecosystem—research labs, startups, open-source communities, and global supply chains.
  • He emphasizes unexpected feedback loops: how U.S. restrictions can accelerate Chinese localization; how Chinese AI advances can reshape debates in Washington, Brussels, and Tokyo; and how commercial competition and security fears reinforce each other.

This background makes his judgment on the pace of Chinese semiconductor catch-up particularly salient: he is not an industry booster, but a policy analyst who has watched the interplay of strategy, regulation, and technology on both sides.

The broader intellectual backdrop: leading theorists of technology, catch-up, and geopolitics

Behind a seemingly simple observation about China’s chip industry lies a rich body of theory about how countries catch up technologically, how innovation moves across borders, and how geopolitics shapes advanced industries. Several intellectual traditions are especially relevant.

1. Late industrialization and the “catch-up” state

Key figures: Alexander Gerschenkron, Alice Amsden, Ha-Joon Chang

  • Alexander Gerschenkron argued that “latecomer” countries industrialize differently from pioneers: they rely more heavily on state intervention, banks, and large industrial enterprises to compress decades of technological learning into a shorter period. China’s semiconductor push—state planning, giant national champions, directed finance, and targeted technology acquisition—is a textbook example of this latecomer pattern.
  • Alice Amsden studied how economies like South Korea used targeted industrial policy, performance standards, and learning-by-doing to build globally competitive heavy and high-tech industries. Her emphasis on reciprocal control mechanisms—state support in exchange for performance—echoes in China’s mix of subsidies and hard metrics for chip firms (e.g., equipment localization targets, process-node milestones).
  • Ha-Joon Chang brought this tradition into debates about globalization, arguing that today’s rich countries used aggressive industrial policies before later pushing “free-market” rules on latecomers. China’s semiconductor strategy—protecting and promoting domestic champions while acquiring foreign technology—is consistent with this “infant industry” logic, applied to the most complex manufacturing sector on earth.

These theorists provide the conceptual lens for understanding why China’s catch-up was plausible despite skepticism: latecomer states, given enough capital, policy focus, and market size, can leap across technological stages faster than many linear forecasts assume.

2. National innovation systems and technology policy

Key figures: Christopher Freeman, Bengt-Åke Lundvall, Richard Nelson, Mariana Mazzucato

  • Christopher Freeman and Bengt-Åke Lundvall developed the idea of national innovation systems: webs of firms, universities, government agencies, and financial institutions that co-evolve to generate and diffuse innovation. China’s semiconductor rise reflects a deliberate effort to construct such a system around chips, combining universities, state labs, SOEs, private giants (like Alibaba and Huawei), and policy banks.
  • Richard Nelson emphasized how governments shape technological trajectories through defense spending, procurement, and research funding. U.S. policies around semiconductors and AI mirror this; China’s own national funds and state procurement echo similar mechanisms, but at enormous scale.
  • Mariana Mazzucato introduced the idea of the “entrepreneurial state”, arguing that the public sector often takes the riskiest, most uncertain bets in breakthrough technologies. China’s massive and politically risky bets on semiconductor self-reliance—despite early policy failures and wasted capital—are a stark, real-time illustration of this concept.

These frameworks show why China’s chip gains are not just about firm-level success, but about system-level design: how policy, finance, and research infrastructure have been orchestrated to accelerate domestic capability.

3. Global value chains and “smile curves”

Key figures: Gary Gereffi, Timothy Sturgeon, Michael Porter

  • Gary Gereffi and Timothy Sturgeon analyzed how industries fragment into global value chains, with design, manufacturing, and services allocated across countries according to capabilities and policy regimes. Semiconductors are the archetype: U.S. firms dominate GPUs and EDA tools; Taiwanese and Korean firms dominate advanced wafer fabrication and memory; Dutch and Japanese firms produce critical tools; Chinese firms historically concentrated on assembly, packaging, and lower-end fabrication.
  • In this framework, export controls and industrial policies are attempts to reshape where in the chain China sits—from lower-value segments toward high-value design, advanced fabrication, and toolmaking.2
  • The “smile curve” metaphor (popularized by Acer’s Stan Shih and linked to strategy thinkers like Michael Porter) suggests that value accrues at the edges: upstream in R&D and design, and downstream in brands, platforms, and services. For years, China captured more value in downstream device assembly and domestic platforms; Sheehan’s quote highlights China’s effort to climb the upstream side of the smile curve into high-value chip design and equipment.

4. Technology, geopolitics, and “weaponized interdependence”

Key figures: Henry Farrell, Abraham Newman, Michael Beckley, Graham Allison

  • Henry Farrell and Abraham Newman advanced the concept of “weaponized interdependence”: states that control key hubs in global networks—financial, digital, or industrial—can use that position for coercive leverage. U.S. control over advanced lithography, chip design IP, and high-end AI hardware is one of the clearest real-world illustrations of this idea.
  • The use of export controls and entity lists against Chinese tech firms is an application of this theory; China’s accelerated semiconductor localization is, in turn, a strategy to escape vulnerability to that leverage.
  • Analysts such as Michael Beckley and Graham Allison focus on U.S.–China strategic competition, emphasizing how control of technologies like semiconductors shapes long-term power balances. For them, the pace of China’s chip catch-up is a central variable in the evolving balance of power.

Sheehan’s quote sits squarely in this intellectual conversation: it is an empirical judgment that bears directly on theories about whether technological chokepoints are sustainable and how quickly a targeted great power can adjust.

5. AI, compute, and the geopolitics of chips

Key figures: Jack Clark, Allan Dafoe, Daron Acemoglu, Ajay Agrawal

  • Researchers of AI governance and economics increasingly treat compute and semiconductors as the strategic bottleneck for AI progress. Analysts like Jack Clark have emphasized how access to advanced accelerators shapes which countries can realistically train frontier models.
  • Economists such as Daron Acemoglu and Ajay Agrawal highlight how AI and automation interact with productivity, inequality, and industrial structure. In China, AI and chips are now deeply intertwined: domestic AI labs both depend on and stimulate demand for advanced chips; chips, in turn, are justified politically as enablers of AI and digital sovereignty.2,5
  • The result is a feedback loop: AI breakthroughs (such as those highlighted by Xi Jinping in 2025) strengthen the case for aggressive semiconductor policy; semiconductor gains then enable more ambitious AI projects.5

This body of work provides the conceptual scaffolding for understanding why a statement about Chinese chip catch-up is not just about manufacturing, but about the future distribution of AI capability, economic power, and geopolitical influence.


Placed against this backdrop, Matt Sheehan’s line is more than a passing compliment to Chinese engineers. It crystallizes a broader reality: in one of the world’s most complex, capital-intensive, and tightly controlled industries, China has closed more of the gap, more quickly, under more adverse conditions than most experts anticipated. That surprise is now reshaping policy debates in Washington, Brussels, Tokyo, Seoul, and Taipei—and forcing a re-examination of many long-held assumptions about how fast latecomers can move at the technological frontier.

 

References

1. https://www.scmp.com/tech/big-tech/article/3339366/great-chip-leap-chinas-semiconductor-equipment-self-reliance-surges-past-targets

2. https://www.techinsights.com/chinese-semiconductor-developments

3. https://www.tomshardware.com/tech-industry/china-expected-to-approve-h200-imports-in-early-2026-report-claims-tech-giants-alibaba-and-bytedance-reportedly-ready-to-order-over-200-000-nvidia-chips-each-if-green-lit-by-beijing

4. https://eu.36kr.com/en/p/3634463429494016

5. https://dig.watch/updates/china-ai-breakthroughs-xi-jinping

6. https://expertnetworkcalls.com/93/semiconductor-market-outlook-key-trends-and-challenges-in-2026

7. https://sourceability.com/post/whats-ahead-in-2026-for-the-semiconductor-industry

8. https://www.pwc.com/gx/en/industries/technology/pwc-semiconductor-and-beyond-2026-full-report.pdf

 

The AI Signal from The World Economic Forum 2026 at Davos

Davos 2026 (WEF26) signalled a clear shift in the AI conversation: less speculation, more execution. For most corporates, the infrastructure stack matters, but it will be accessed via hyperscalers and service providers rather than built internally. The more relevant question is what happens inside the organisation once the capability is available.

A consistent theme across discussions: progress is coming from pragmatic leaders who are treating AI as an operating model change, not a technology project. That means building basic literacy across the workforce, redesigning workflows, and being willing to challenge legacy assumptions about how work gets done.

In the full write-up:

  • The shift from “AI theatre” to ROI and deployment reality
  • The five-layer AI stack (and why corporates mostly consume it via partners)
  • The emerging sixth layer: user readiness — and why it is becoming decisive
  • Energy and infrastructure constraints as real-world brakes on scale
  • Corporate pragmatism: moving beyond an “AI veneer” to process redesign and agentic workflows
  • Labour market implications: skills shifts, entry-level hollowing, and what employers must do now
  • The Global South dimension: barriers, pathways to competitiveness, and practical adoption strategies
  • Second-order risks: cyber exposure, mental health, and cognitive atrophy as governance issues

If you’re leading a business, the takeaway is straightforward: there are strong lessons from pragmatic programs outside of Silicon Valley.

Quote: Kristalina Georgieva – Managing Director, IMF

“We assess that 40% of jobs globally are going to be impacted by AI over the next couple of years – either enhanced, eliminated, or transformed. In advanced economies, it’s 60%.” – Kristalina Georgieva – Managing Director, IMF

Kristalina Georgieva’s assessment of AI’s labour market impact represents one of the most consequential economic forecasts of our time. Speaking at the World Economic Forum in Davos in January 2026, the Managing Director of the International Monetary Fund articulated a sobering reality: artificial intelligence is not a distant threat but an immediate force already reshaping employment globally. Her invocation of a “tsunami”, a natural disaster of overwhelming force and scale, captures the simultaneity and inevitability of this transformation.

The Scale of Disruption

Georgieva’s figures warrant careful examination. The IMF calculates that 40 per cent of jobs globally will be touched by AI, with each affected role falling into one of three categories: enhancement (where AI augments human capability), elimination (where automation replaces human labour), or transformation (where roles are fundamentally altered without necessarily improving compensation). This is not speculative projection but empirical assessment grounded in IMF research across member economies.

The geographical disparity is striking and consequential. In advanced economies (the United States, Western Europe, Japan, and similar developed nations) the figure reaches 60 per cent. By contrast, in low-income countries, the impact ranges from 20 to 26 per cent. This divergence is not accidental; it reflects the concentration of AI infrastructure, capital investment, and digital integration in wealthy nations. The IMF’s concern, as Georgieva articulated, is what she termed an “accordion of opportunities”: a compression and expansion of economic possibility that varies dramatically by geography and development status.

Understanding the Context: AI as Economic Transformation

Georgieva’s warning must be situated within the broader economic moment of early 2026. The global economy faces simultaneous pressures: geopolitical fragmentation, demographic shifts, climate transition, and technological disruption occurring in parallel. AI is not the sole driver of economic uncertainty, but it is perhaps the most visible and immediate.

The IMF’s analysis distinguishes between AI’s productivity benefits and its labour market risks. Georgieva acknowledged that AI is generating genuine economic gains across sectors: agriculture, healthcare, education, and transport have all experienced productivity enhancements. Translation and interpretation services have been enhanced rather than eliminated; research analysts have found their work augmented by AI tools. Yet these gains are unevenly distributed, and the labour market adjustment required is unprecedented in speed and scale.

The productivity question is central to Georgieva’s economic outlook. Global growth has been underwhelming in recent years, with productivity growth stagnant except in the United States. AI represents the most potent force for reversing this trend, with potential to boost global growth between 0.1 and 0.8 per cent annually. A 0.8 per cent productivity gain would restore growth to pre-pandemic levels. Yet this upside scenario depends entirely on successful labour market adjustment and equitable distribution of AI’s benefits.

The Theoretical Foundations: Labour Economics and Technological Disruption

Georgieva’s analysis draws on decades of labour economics scholarship examining technological displacement. The intellectual lineage traces to economists such as David Autor, who has extensively studied how technological change reshapes labour markets. Autor’s research demonstrates that whilst technology eliminates routine tasks, it simultaneously creates demand for new skills and complementary labour. However, this adjustment is neither automatic nor painless; workers displaced from routine cognitive tasks often face years of unemployment or underemployment before transitioning to new roles.

The “task-based” framework of labour economics, developed by scholars including Autor and Frank Levy, provides the theoretical scaffolding for understanding AI’s impact. Rather than viewing jobs as monolithic units, this approach recognises that occupations comprise multiple tasks. AI may automate certain tasks within a role whilst leaving others intact, fundamentally altering job content and skill requirements. A radiologist’s role, for instance, may be transformed by AI’s superior pattern recognition in image analysis, but the radiologist’s diagnostic judgment, patient communication, and clinical decision-making remain valuable.

Erik Brynjolfsson and Andrew McAfee, prominent technology economists, have argued that AI represents a qualitative shift from previous technological waves. Unlike earlier automation, which primarily affected routine manual labour, AI threatens cognitive work across income levels. Their research suggests that without deliberate policy intervention, AI could exacerbate inequality rather than reduce it, concentrating gains among capital owners and highly skilled workers whilst displacing middle-skill employment.

Daron Acemoglu, the MIT economist, has been particularly critical of “so-so automation”: technology that increases productivity marginally whilst displacing workers without creating sufficient new opportunities. His work emphasises that technological outcomes are not predetermined; they depend on institutional choices, investment priorities, and policy frameworks. This perspective is crucial for understanding Georgieva’s policy recommendations.

The Policy Imperative

Georgieva’s framing of the challenge as a policy problem rather than an inevitable outcome reflects this economic thinking. She has consistently advocated for three policy pillars: investment in skills development, meaningful regulation and ethical frameworks, and ensuring AI’s benefits penetrate across sectors and geographies rather than concentrating in advanced economies.

The IMF’s own research indicates that one in ten jobs in advanced economies already requires substantially new skills, a proportion that will rise quickly. Yet educational and training systems globally remain poorly aligned with AI-era skill demands. Georgieva has urged governments to invest in reskilling programmes, particularly targeting workers in roles most vulnerable to displacement.

Her emphasis on regulation and ethics reflects growing recognition that AI’s trajectory is not technologically determined. The choice between AI as a tool for broad-based productivity enhancement versus a mechanism for labour displacement and inequality concentration remains open. This aligns with the work of scholars such as Shoshana Zuboff, who argues that technological systems embody political choices about power distribution and social organisation.

The Global Inequality Dimension

Perhaps most significant is Georgieva’s concern about the “accordion of opportunities.” The 60 per cent figure for advanced economies versus 20-26 per cent for low-income countries reflects not merely different levels of AI adoption but fundamentally different economic trajectories. Advanced economies possess the infrastructure, capital, and institutional capacity to invest in AI whilst simultaneously managing labour market transition. Low-income countries risk being left behind, neither benefiting from AI’s productivity gains nor receiving the investment in skills and social protection that might cushion displacement.

This concern echoes the work of development economists such as Dani Rodrik, who has documented how technological change can bypass developing economies entirely, leaving them trapped in low-productivity sectors. If AI concentrates in advanced economies and wealthy sectors, developing nations may face a new form of technological colonialism: dependent on imported AI solutions without developing indigenous capacity or capturing value creation.

The Measurement Challenge

Georgieva’s 40 per cent figure, whilst grounded in IMF research, represents a probabilistic assessment rather than a precise prediction. The IMF acknowledges a “fairly big range” of potential impacts on global growth (0.1 to 0.8 per cent), reflecting genuine uncertainty about AI’s trajectory. This uncertainty itself is significant; it suggests that outcomes remain contingent on policy choices, investment decisions, and institutional responses.

The distinction between jobs “touched” by AI and jobs eliminated is crucial. Enhancement and transformation may be preferable to elimination, but they still require worker adjustment, skill development, and potentially geographic mobility. A job that is transformed but offers no wage improvement may, as Georgieva noted, be economically worse for the worker even if technically retained.

The Broader Economic Context

Georgieva’s warning arrives amid broader economic fragmentation. Trade tensions, geopolitical competition, and the shift from a rules-based global economic order toward competing blocs create additional uncertainty. AI development is increasingly intertwined with strategic competition between major powers, particularly between the United States and China. This geopolitical dimension means that AI’s labour market impact cannot be separated from questions of technological sovereignty, supply chain resilience, and economic security.

The IMF chief has also emphasised that AI’s benefits are not automatic. She personally undertook training in AI productivity tools, including Microsoft Copilot, and urged IMF staff to embrace AI-based enhancements. Yet this individual adoption, multiplied across millions of workers and organisations, requires deliberate choice, investment in training, and organisational restructuring. The productivity gains Georgieva projects depend on this active embrace rather than passive exposure to AI technology.

Implications for Policy and Strategy

Georgieva’s analysis suggests several imperatives for policymakers. First, labour market adjustment cannot be left to market forces alone; deliberate investment in education, training, and social protection is essential. Second, the distribution of AI’s benefits matters as much as aggregate productivity gains; without attention to equity, AI could deepen inequality within and between nations. Third, regulation and ethical frameworks must be established proactively rather than reactively, shaping AI development toward socially beneficial outcomes.

Her invocation of a “tsunami” is not mere rhetoric but a precise characterisation of the challenge’s scale and urgency. Tsunamis cannot be prevented, but their impact can be mitigated through preparation, early warning systems, and coordinated response. Similarly, AI’s labour market impact is largely inevitable, but its consequences, whether broadly shared prosperity or concentrated disruption, remain subject to human choice and institutional design.

References

1. https://economictimes.com/news/india/ashwini-vaishnaw-at-davos-2026-5-key-takeaways-highlighting-indias-semiconductor-pitch-and-roadmap-to-ai-sovereignty-at-wef/slideshow/127145496.cms

2. https://time.com/collections/davos-2026/7339218/ai-trade-global-economy-kristalina-georgieva-imf/

3. https://www.ndtv.com/world-news/a-tsunami-is-hitting-labour-market-international-monetary-fund-imf-chief-kristalina-georgieva-warns-of-ai-impact-10796739

4. https://www.youtube.com/watch?v=4ANV7yuaTuA

5. https://www.weforum.org/stories/2026/01/live-from-davos-2026-what-to-know-on-day-2/

6. https://www.perplexity.ai/page/ai-impact-on-jobs-debated-as-l-_a7uZvVcQmWh3CsTzWfkbA

7. https://www.imf.org/en/blogs/articles/2024/01/14/ai-will-transform-the-global-economy-lets-make-sure-it-benefits-humanity

Quote: Kristalina Georgieva – Managing Director, IMF

“Productivity growth has been slow over the last two decades. AI holds a promise to significantly lift it. We calculated that the impact on global growth could be between 0.1% and 0.8%. That is very significant. However, it is happening incredibly quickly.” – Kristalina Georgieva – Managing Director, IMF

Kristalina Georgieva, Managing Director of the International Monetary Fund, has emerged as one of the most influential voices in the global conversation about artificial intelligence’s economic impact. Her observation about productivity growth-and AI’s potential to reverse it-reflects a fundamental shift in how policymakers understand the relationship between technological innovation and economic resilience.

The Productivity Crisis That Defined Two Decades

To understand Georgieva’s urgency about AI, one must first grasp the economic malaise that has characterised the past twenty years. Since the 2008 financial crisis, advanced economies have experienced persistently weak productivity growth, the measure of how much output an economy generates per unit of input. This sluggish productivity has become the primary culprit behind anaemic economic growth across developed nations. Georgieva has repeatedly emphasised that approximately half of the slow growth experienced globally stems directly from this productivity deficit, a structural problem that conventional policy tools have struggled to address.

This two-decade productivity drought represents more than a statistical curiosity. It reflects an economy that, despite technological advancement, has failed to translate innovation into widespread efficiency gains. Output per hour worked barely improves. Businesses struggle to achieve meaningful cost reductions. Investment returns diminish. The result is an economy trapped in a low-growth equilibrium, unable to generate the dynamism required to address mounting fiscal challenges, rising inequality, and demographic pressures.

AI as Economic Catalyst: The Quantified Promise

Georgieva’s confidence in AI stems from rigorous analysis rather than technological evangelism. The IMF has calculated that artificial intelligence could boost global growth by between 0.1 and 0.8 percentage points, a range that, whilst appearing modest in isolation, becomes transformative when contextualised against current growth trajectories. For an advanced economy growing at 1-2 percent annually, an additional 0.8 percentage points represents a 40-80 percent acceleration. For developing economies, the multiplier effect could be even more pronounced.
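
A quick back-of-envelope check of that arithmetic (illustrative only, not the IMF’s model): an extra 0.8 percentage points on an assumed 2 percent baseline is a 40 percent acceleration, and the gap compounds visibly over a decade.

```python
# Illustrative arithmetic only; the 2% baseline is an assumption,
# the 0.1-0.8pp uplift range is the IMF figure cited above.
baseline = 0.02                                   # assumed 2% annual growth
uplift = 0.008                                    # upper end of the 0.1-0.8pp range

print(f"{uplift / baseline:.0%} faster growth")   # 40% acceleration on a 2% base
gain = (1 + baseline + uplift) ** 10 / (1 + baseline) ** 10 - 1
print(f"{gain:.1%} larger economy after a decade")  # roughly 8% larger
```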

This quantification matters because it grounds AI’s potential in measurable economic impact rather than speculative hype. The IMF’s methodology reflects analysis of AI’s capacity to enhance productivity across multiple sectors-from agriculture and healthcare to education and transportation. Unlike previous technological revolutions that took decades to diffuse through economies, AI applications are already penetrating operational workflows at unprecedented speed.

The Velocity Problem: Why Speed Reshapes the Equation

Georgieva’s most critical insight concerns not the magnitude of AI’s impact but its velocity. Technological transformations typically unfold gradually, allowing labour markets, educational systems, and social safety nets time to adapt. The Industrial Revolution took generations. The digital revolution unfolded over decades. AI, by contrast, is compressing transformation into years.

This acceleration creates what Georgieva describes as a “tsunami” effect on labour markets. The IMF’s assessment indicates that 40 percent of global jobs will be impacted by AI within the coming years, whether enhanced through augmentation, fundamentally transformed, or eliminated entirely. In advanced economies, the figure rises to 60 percent. Simultaneously, preliminary data suggests that one in ten jobs in advanced economies already requires new skills, a proportion that will rise rapidly.

The velocity problem generates a dual challenge: whilst AI promises to solve the productivity crisis that has constrained growth for two decades, it simultaneously threatens to outpace society’s capacity to manage labour market disruption. This is why Georgieva emphasises that the economic benefits of AI cannot be assumed to distribute evenly or automatically. The speed of technological change can easily outstrip the speed of policy adaptation, education reform, and social support systems.

Theoretical Foundations: Understanding Productivity and Growth

Georgieva’s analysis builds upon decades of economic theory regarding the relationship between productivity and growth. The Solow growth model, developed by Nobel laureate Robert Solow in the 1950s, established that long-term economic growth depends primarily on technological progress and productivity improvements rather than capital accumulation alone. This framework explains why economies with similar capital stocks can diverge dramatically based on their capacity to innovate and improve efficiency.

The productivity slowdown that has characterised recent decades puzzled economists, leading to what some termed the “productivity paradox”: the observation that despite massive investment in information technology, measured productivity growth remained disappointingly weak. Erik Brynjolfsson and Andrew McAfee, leading scholars of technology’s economic impact, have argued that this paradox reflects a measurement problem: much of technology’s benefit accrues as consumer surplus rather than measured output, and the transition period between technological eras involves disruption that temporarily suppresses measured productivity.

AI potentially resolves this paradox by offering productivity gains that are both measurable and broad-based. Unlike previous waves of automation that concentrated benefits in specific sectors, AI’s general-purpose nature means it can enhance productivity across virtually every economic activity. This aligns with the theoretical work of economists like Daron Acemoglu, who emphasises that sustained growth requires technologies that complement rather than simply replace human labour, creating new opportunities for value creation.

The IMF’s Institutional Perspective

As Managing Director of the IMF, Georgieva speaks from an institution uniquely positioned to assess global economic trends. The Fund monitors economic performance across 190 member countries, providing unparalleled visibility into comparative growth patterns, labour market dynamics, and policy effectiveness. Her warnings about AI’s labour market impact carry weight precisely because they emerge from this comprehensive global perspective rather than from any single national vantage point.

The IMF’s own experience with AI implementation reinforces Georgieva’s optimism about productivity gains. As a data-intensive institution, the Fund has deployed AI-powered tools to enhance analytical capacity, accelerate research, and improve forecasting accuracy. Georgieva has personally engaged with productivity-enhancing AI tools, including Microsoft Copilot and fund-specific AI assistants, and reports measurable gains in institutional output. This first-hand experience lends credibility to her broader claims about AI’s transformative potential.

The Policy Imperative: Managing Transformation

Georgieva’s framing of AI’s impact as both opportunity and risk reflects a sophisticated understanding of technological change. The productivity gains she describes will not materialise automatically; they require deliberate policy choices. For advanced economies, she counsels concentration on three areas: ensuring AI penetration across all economic sectors rather than concentrating benefits in technology-intensive industries; establishing meaningful regulatory frameworks that reduce risks of misuse and unintended consequences; and building ethical foundations that maintain public trust in AI systems.

Critically, Georgieva emphasises that the labour market challenge demands proactive intervention. The speed of AI adoption means that waiting for market forces to naturally realign skills and employment will result in unnecessary disruption and inequality. Instead, she advocates for policies that support reskilling, particularly targeting workers in roles most vulnerable to displacement. The IMF’s research suggests that higher-skilled workers benefit disproportionately from AI augmentation, creating a risk of widening inequality unless deliberate efforts ensure that lower-skilled workers also gain access to AI-enhanced productivity tools.

Global Context: Divergence and Opportunity

Georgieva’s analysis of AI’s growth potential must be understood within the broader context of global economic divergence. The United States, which has emerged as the global leader in large-language model development and AI commercialisation, stands to capture disproportionate benefits from AI-driven productivity gains. This concentration of AI capability in a single economy risks exacerbating existing inequalities between advanced and developing nations.

However, Georgieva’s emphasis on AI’s application layer-rather than merely its development-suggests opportunities for broader participation. Countries with strong capabilities in enterprise software, business process outsourcing, and operational integration, such as India, can leverage AI to enhance service delivery and create new value propositions. This perspective challenges the notion that AI benefits will concentrate exclusively in technology-leading nations, though it requires deliberate policy choices to realise this potential.

The Uncertainty Framework

Georgieva frequently describes the contemporary global environment as one where “uncertainty is the new normal.” This framing contextualises her AI analysis within a broader landscape of overlapping transformations: geopolitical fragmentation, demographic shifts, climate change, and trade tensions, all accelerating at once. AI does not exist in isolation; it emerges as one force among many reshaping the global economy.

This multiplicity of transformations creates what Georgieva terms “more fog within which we operate.” Policymakers cannot assume that historical relationships between variables will hold. The interaction between AI-driven productivity gains, trade tensions, demographic decline in advanced economies, and climate-related resource constraints creates a genuinely novel economic environment. This is why Georgieva emphasises the need for international coordination, adaptive policy frameworks, and institutional flexibility.

Conclusion: The Productivity Imperative

Georgieva’s statement about AI and productivity growth reflects a conviction grounded in both rigorous analysis and institutional responsibility. The two-decade productivity drought has constrained growth, limited policy options, and contributed to the political instability and inequality that characterise contemporary democracies. AI offers a genuine opportunity to reverse this trajectory, but only if its benefits are deliberately distributed and its disruptions actively managed. The speed of AI’s development means that the window for shaping this outcome is narrow. Policymakers who treat AI as merely a technological phenomenon rather than as an economic and social challenge risk squandering the productivity gains Georgieva describes, converting opportunity into disruption.

References

1. https://time.com/collections/davos-2026/7339218/ai-trade-global-economy-kristalina-georgieva-imf/

2. https://www.youtube.com/watch?v=4ANV7yuaTuA

3. https://economictimes.com/news/india/clash-at-davos-why-india-refuses-to-be-a-second-tier-ai-power/articleshow/127012696.cms

read more
Term: Tensor Processing Unit (TPU)

Term: Tensor Processing Unit (TPU)

“A Tensor Processing Unit (TPU) is an application-specific integrated circuit (ASIC) custom-designed by Google to accelerate machine learning (ML) and artificial intelligence (AI) workloads, especially those involving neural networks.” – Tensor Processing Unit (TPU)

A Tensor Processing Unit (TPU) is an application-specific integrated circuit (ASIC) custom-designed by Google to accelerate machine learning (ML) and artificial intelligence (AI) workloads, particularly those involving neural networks and matrix multiplication operations.1,2,4,6

Core Architecture and Functionality

TPUs excel at high-throughput, parallel processing of mathematical tasks such as multiply-accumulate (MAC) operations, which form the backbone of neural network training and inference. Each TPU features a Matrix Multiply Unit (MXU)—a systolic array of arithmetic logic units (ALUs), typically configured as 128×128 or 256×256 grids—that performs thousands of MAC operations per clock cycle using formats like 8-bit integers, BFloat16, or floating-point arithmetic.1,2,5,9 Supporting components include a Vector Processing Unit (VPU) for non-linear activations (e.g., ReLU, sigmoid) and High Bandwidth Memory (HBM) to minimise data bottlenecks by enabling rapid data retrieval and storage.2,5
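
To make the MAC pattern concrete, here is a minimal NumPy sketch (illustrative only, not Google’s hardware or API) of what each output element of a 128×128 matrix multiply accumulates: many 8-bit products summed into a wider integer so the running total does not overflow.

```python
# Minimal sketch of multiply-accumulate (MAC) with narrow operands and a wide
# accumulator, the arithmetic pattern a matrix unit performs in hardware.
import numpy as np

activations = np.random.randint(-128, 127, size=(128, 128), dtype=np.int8)
weights = np.random.randint(-128, 127, size=(128, 128), dtype=np.int8)

# Each output element sums 128 int8*int8 products; widening to int32 mirrors
# the accumulators that sit at the edge of a systolic array.
acc = activations.astype(np.int32) @ weights.astype(np.int32)
print(acc.dtype, acc.shape)  # int32 (128, 128)
```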

Unlike general-purpose CPUs or even GPUs, TPUs are purpose-built for ML models relying on matrix processing, large batch sizes, and extended training periods (e.g., weeks for convolutional neural networks), offering superior efficiency in power consumption and speed for tasks like image recognition, natural language processing, and generative AI.1,3,6 They integrate seamlessly with frameworks such as TensorFlow, JAX, and PyTorch, processing input data as vectors in parallel before outputting results to ML models.1,4
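
As a sketch of that framework integration (assuming JAX is installed; the same code falls back to CPU or GPU when no TPU is attached), a jitted matrix multiply is compiled by XLA to matrix-unit operations on a TPU backend:

```python
# Minimal JAX sketch; TPU availability is an assumption, not a requirement.
import jax
import jax.numpy as jnp

x = jnp.ones((128, 512), dtype=jnp.bfloat16)  # BFloat16 is the MXU's native format
w = jnp.ones((512, 256), dtype=jnp.bfloat16)

@jax.jit  # XLA compiles this to matrix-unit ops on whatever backend is present
def project(x, w):
    return jnp.dot(x, w)

print(jax.default_backend())  # 'tpu' on a Cloud TPU VM; 'cpu' or 'gpu' elsewhere
print(project(x, w).shape)    # (128, 256)
```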

Key Applications and Deployment

  • Cloud Computing: TPUs power Google Cloud Platform (GCP) services for AI workloads, including chatbots, recommendation engines, speech synthesis, computer vision, and products like Google Search, Maps, Photos, and Gemini.1,2,3
  • Edge Computing: Suitable for real-time ML at data sources, such as IoT in factories or autonomous vehicles, where high-throughput matrix operations are needed.1

TPUs support both training (e.g., model development) and inference (e.g., predictions on new data), with pods scaling to thousands of chips for massive workloads.6,7

Development History

Google developed TPUs internally from 2015 for TensorFlow-based neural networks, deploying them in data centres before releasing versions for third-party use via GCP in 2018.1,4 Evolution includes shifts in array sizes (e.g., v1: 256×256 on 8-bit integers; later versions: 128×128 on BFloat16; v6: back to 256×256) and proprietary interconnects for enhanced scalability.5,6

Best Related Strategy Theorist: Norman Foster Ramsey

The most pertinent strategy theorist linked to TPU development is Norman Foster Ramsey (1915–2011), a Nobel Prize-winning physicist whose foundational work on quantum computing architectures and coherent manipulation of quantum states directly influenced the parallel processing paradigms underpinning TPUs. Ramsey’s concepts of separated oscillatory fields—a technique for precisely controlling atomic transitions using microwave pulses separated in space and time—paved the way for systolic arrays and matrix-based computation in specialised hardware, which TPUs exemplify through their MXU grids for simultaneous MAC operations.5 This quantum-inspired parallelism optimises energy efficiency and throughput, mirroring Ramsey’s emphasis on minimising decoherence (data loss) in high-dimensional systems.

Biography and Relationship to the Term: Born in Washington, D.C., Ramsey earned his PhD from Columbia University in 1940 under I.I. Rabi, focusing on molecular beams and magnetic resonance. During World War II, he contributed to radar and atomic bomb research at MIT’s Radiation Laboratory. Post-war, as a Harvard professor (1947–1986), he pioneered the Ramsey method of separated oscillatory fields, earning the 1989 Nobel Prize in Physics for enabling atomic clocks and quantum computing primitives. His 1950s–1960s work on quantum state engineering informed ASIC designs for tensor operations; Google’s TPU team drew on these principles for weight-stationary systolic arrays, reducing data movement akin to Ramsey’s coherence preservation. Ramsey advised early quantum hardware initiatives at Harvard and Los Alamos, influencing strategists in custom silicon for AI acceleration. He lived to 96, authoring over 250 papers and mentoring figures in computational physics.1,5

References

1. https://www.techtarget.com/whatis/definition/tensor-processing-unit-TPU

2. https://builtin.com/articles/tensor-processing-unit-tpu

3. https://www.iterate.ai/ai-glossary/what-is-tpu-tensor-processing-unit

4. https://en.wikipedia.org/wiki/Tensor_Processing_Unit

5. https://blog.bytebytego.com/p/how-googles-tensor-processing-unit

6. https://cloud.google.com/tpu

7. https://docs.cloud.google.com/tpu/docs/intro-to-tpu

8. https://www.youtube.com/watch?v=GKQz4-esU5M

9. https://lightning.ai/docs/pytorch/1.6.2/accelerators/tpu.html

read more
Quote: Ryan Dahl

Quote: Ryan Dahl

“This has been said a thousand times before, but allow me to add my own voice: the era of humans writing code is over. Disturbing for those of us who identify as SWEs, but no less true. That’s not to say SWEs don’t have work to do, but writing syntax directly is not it.” – Ryan Dahl – Node.js creator

Ryan Dahl’s candid declaration captures a pivotal moment in software engineering, where artificial intelligence tools like Claude and Codex are reshaping the craft of coding. As the creator of Node.js and co-founder of Deno, Dahl speaks from the front lines of innovation, challenging software engineers (SWEs) to adapt to a future where manual syntax writing fades into obsolescence.

Who is Ryan Dahl?

Ryan Dahl is a pioneering figure in JavaScript runtime environments. In 2009, he created Node.js, a revolutionary open-source, cross-platform runtime that brought JavaScript to server-side development. Node.js addressed key limitations of traditional server architectures by leveraging an event-driven, non-blocking I/O model, enabling scalable network applications. Its debut at the inaugural JSConf EU in 2009 sparked rapid adoption, powering giants like Netflix, Uber, and LinkedIn.1
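
For readers unfamiliar with the pattern, here is a minimal sketch of event-driven, non-blocking I/O, written in Python’s asyncio purely for illustration rather than JavaScript: one event loop interleaves many slow I/O waits instead of dedicating a thread to each request, which is the model Node.js popularised.

```python
# Minimal asyncio sketch of the non-blocking, event-loop pattern.
import asyncio

async def handle_request(request_id: int) -> str:
    await asyncio.sleep(0.1)  # stands in for a non-blocking network or disk call
    return f"response {request_id}"

async def main():
    # A thousand concurrent requests share one thread and one event loop.
    responses = await asyncio.gather(*(handle_request(i) for i in range(1000)))
    print(len(responses))  # 1000

asyncio.run(main())
```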

By 2018, Dahl reflected critically on Node.js’s shortcomings for massive-scale servers, noting in interviews that alternatives like Go might suit such workloads better, a realisation that prompted his departure from heavy Node.js involvement.2 This introspection led to Deno’s launch in 2018, a modern runtime designed to fix Node.js pain points: it offers secure-by-default permissions, native TypeScript support, and bundled dependencies via URLs, eschewing Node’s npm-centric vulnerabilities. Today, as Deno’s CEO, Dahl continues advocating for JavaScript’s evolution, including efforts to challenge Oracle’s JavaScript trademark to free the term for generic use.1

Dahl’s career embodies pragmatic evolution. He views TypeScript-Microsoft’s typed superset of JavaScript-as the language’s future direction, predicting standards-level integration of types, though he respects Microsoft’s stewardship.1

Context of the Quote

Delivered via X (formerly Twitter), Dahl’s words respond to the explosive rise of AI coding assistants. Tools like Claude (Anthropic’s LLM) and Codex (OpenAI’s code-generation model descended from GPT-3, which powered the original GitHub Copilot) generate syntactically correct code from natural language prompts, rendering rote typing archaic. The quote acknowledges discomfort among SWEs, professionals who pride themselves on craftsmanship, yet insists the shift is inevitable. Dahl clarifies that engineering roles persist, evolving towards higher-level design, architecture, and oversight rather than syntax drudgery.

This aligns with Dahl’s history of bold pivots: from Node.js’s server-side breakthrough to Deno’s security-focused redesign, and now to AI’s paradigm shift. His voice carries weight amid 2020s AI hype, urging adaptation over denial.

Leading Theorists on AI and the Future of Coding

Dahl’s thesis echoes thinkers at the intersection of AI and software development:

  • Andrej Karpathy (ex-Tesla AI Director, OpenAI): Coined ‘Software 2.0’ in 2017, where neural networks supplant traditional code, trained on data rather than hand-written logic. He predicts engineers will curate datasets and prompts, not lines of code.
  • Simon Willison (Datasette creator, LLM expert): Willison champions ‘vibe coding’, iterating via AI tools like Cursor or Aider, arguing syntax mastery becomes less relevant as LLMs handle boilerplate reliably.
  • Swyx (Shawn Wang) (ex-Netlify and AWS, AI advocate): Popularised the ‘AI Engineer’ role, blending prompting, evaluation, and integration skills over raw coding prowess.
  • Lex Fridman (MIT researcher, podcaster): Through dialogues with AI pioneers, Fridman explores how tools like Devin (Cognition Labs’ autonomous agent) could automate entire engineering workflows.

These voices build on earlier foundations: Alan Kay’s 1970s vision of personal computing democratised programming, now amplified by AI. Critics like Grady Booch warn of over-reliance, stressing human insight for complex systems, yet consensus grows that AI accelerates rote tasks, freeing creativity.

Implications for Software Engineering

Dahl’s provocation signals a renaissance: SWEs must master prompt engineering, AI evaluation, system design, and ethical oversight. Node.js’s legacy of empowering non-experts via JavaScript ubiquity foreshadows AI’s democratisation. As Deno integrates AI-native features, Dahl positions himself at this frontier, inviting engineers to evolve or risk obsolescence.

 

References

1. https://redmonk.com/blog/2024/12/16/rmc-ryan-dahl-on-the-deno-v-oracle-petition/

2. https://news.ycombinator.com/item?id=15767713

 

read more
Term: Forward Deployed Engineer (FDE)

Term: Forward Deployed Engineer (FDE)

“An AI Forward Deployed Engineer (FDE) is a technical expert embedded directly within a client’s environment to implement, customise, and operationalize complex AI/ML products, acting as a bridge between core engineering and customer needs.” – Forward Deployed Engineer (FDE)

Forward Deployed Engineer (FDE)

A Forward Deployed Engineer (FDE) is a highly skilled technical specialist embedded directly within a client’s environment to implement, customise, deploy, and operationalise complex software or AI/ML products, serving as a critical bridge between core engineering teams and customer-specific needs.1,2,5 This hands-on, customer-facing role combines software engineering, solution architecture, and technical consulting to translate business workflows into production-ready solutions, often involving rapid prototyping, integrations with legacy systems (e.g., CRMs, ERPs, HRIS), and troubleshooting in real-world settings.1,2,3

Key Responsibilities

  • Collaborate directly with enterprise customers to understand workflows, scope use cases, and design tailored AI agent or GenAI solutions.1,3,5
  • Lead deployment, integration, and configuration in diverse environments (cloud, on-prem, hybrid), including APIs, OAuth, webhooks, and production-grade interfaces.1,2,4
  • Build end-to-end workflows, operationalise LLM/SLM-based systems (e.g., RAG, vector search, multi-agent orchestration; a minimal retrieval sketch follows this list), and iterate for scalability, performance, and user adoption.1,5,6
  • Act as a liaison to product/engineering teams, feeding back insights, proposing features, and influencing roadmaps while conducting workshops, audits, and go-lives.1,3,7
  • Debug live issues, document implementations, and ensure compliance with IT/security requirements like data residency and logging.1,2
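
As a minimal, self-contained sketch of the retrieval step behind RAG and vector search (toy bag-of-words embeddings and invented example documents; a real engagement would use a hosted embedding model and a vector database):

```python
import numpy as np

docs = [
    "Invoices are approved by the finance controller.",
    "Customer tickets are triaged by the support team.",
    "Security incidents must be reported within 24 hours.",
]

vocab = sorted({w.lower().strip(".?!,") for d in docs for w in d.split()})

def embed(text: str) -> np.ndarray:
    # Toy embedding: a binary bag-of-words vector over the corpus vocabulary.
    words = {w.lower().strip(".?!,") for w in text.split()}
    return np.array([1.0 if w in words else 0.0 for w in vocab])

doc_vecs = np.stack([embed(d) for d in docs])

def retrieve(query: str) -> str:
    # Cosine similarity between the query vector and each document vector.
    q = embed(query)
    scores = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * (np.linalg.norm(q) + 1e-9))
    return docs[int(np.argmax(scores))]

print(retrieve("Who approves invoices?"))  # returns the invoice-approval document
```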

Essential Skills and Qualifications

  • Technical Expertise: Proficiency in Python, Node.js, or Java; cloud platforms (AWS, Azure, GCP); REST APIs; and GenAI tools (e.g., LangChain, HuggingFace, DSPy).1,6
  • AI/ML Fluency: Experience with LLMs, agentic workflows, fine-tuning, Text2SQL, and evaluation/optimisation for production.5,6,7
  • Soft Skills: Strong communication for executive presentations, problem-solving in ambiguous settings, and willingness for international travel (e.g., US/Europe).1,2
  • Experience: Typically 10+ years in enterprise software, with exposure to domains like healthcare, finance, or customer service; startup or consulting background preferred.1,7

FDEs differ from traditional support or sales engineering roles by writing production code, owning outcomes like a “hands-on AI startup CTO,” and enabling scalable AI delivery in complex enterprises.2,5,7 In the AI era, they excel as architects of agentic operations, leveraging AI for diagnostics, automation, and pattern identification to accelerate value realisation.7

Best Related Strategy Theorist: Clayton Christensen

The concept of the Forward Deployed Engineer aligns most closely with Clayton Christensen (1952–2020), the Harvard Business School professor renowned for pioneering disruptive innovation theory, which emphasises how customer-embedded adaptation drives technology adoption and market disruption, mirroring the FDE’s role in customising complex AI products for real-world fit.2,7

Biography and Backstory: Born in Salt Lake City, Utah, Christensen earned a BA in economics from Brigham Young University, an MPhil from Oxford as a Rhodes Scholar, and a DBA from Harvard. After consulting at BCG, he joined the Harvard faculty in 1992 and later co-founded the innovation consultancy Innosight (2000), authoring seminal works like The Innovator’s Dilemma (1997), which argued that incumbents fail by ignoring “disruptive” technologies that initially underperform but evolve to dominate via iterative, customer-proximate improvements.8 His theories stemmed from studying disk drives and steel minimills, revealing how “listening to customers” in sustaining innovation traps firms, while forward-deployed experimentation in niche contexts enables breakthroughs.

Relationship to FDE: Christensen’s framework directly informs the FDE model, popularised by Palantir (inspired by military “forward deployment”) and scaled in AI firms like Scale AI and Databricks.5,6 FDEs embody disruptive deployment: embedded in client environments, they prototype and iterate solutions (e.g., GenAI agents) that bypass headquarters silos, much like disruptors refine products through “jobs to be done” in ambiguous, high-stakes settings.2,5,7 Christensen advised Palantir-like enterprises on scaling via such roles, stressing that technical experts “forward-deployed” accelerate value by solving unspoken problems, echoing FDE skills in rapid problem identification and agentic orchestration.7 His later work on customer “jobs to be done” and enterprise transformation (e.g., Competing Against Luck, 2016) underscores FDEs’ strategic pivot: turning customer feedback into product evolution, ensuring AI scales disruptively rather than generically.1,3

References

1. https://avaamo.ai/forward-deployed-engineer/

2. https://futurense.com/blog/fde-forward-deployed-engineers

3. https://theloops.io/career/forward-deployed-ai-engineer/

4. https://scale.com/careers/4593571005

5. https://jobs.lever.co/palantir/636fc05c-d348-4a06-be51-597cb9e07488

6. https://www.databricks.com/company/careers/professional-services-operations/ai-engineer—fde-forward-deployed-engineer-8024010002

7. https://www.rocketlane.com/blogs/forward-deployed-engineer

8. https://thomasotter.substack.com/p/wtf-is-a-forward-deployed-engineer

9. https://www.salesforce.com/blog/forward-deployed-engineer/

read more
Quote: Andrej Karpathy

Quote: Andrej Karpathy

“I’ve never felt this much behind as a programmer. The profession is being dramatically refactored as the bits contributed by the programmer are increasingly sparse and between. I have a sense that I could be 10X more powerful.” – Andrej Karpathy – AI guru

Andrej Karpathy, a pioneering AI researcher, captures the profound disruption AI is bringing to programming in this quote: “I’ve never felt this much behind as a programmer. The profession is being dramatically refactored as the bits contributed by the programmer are increasingly sparse and between. I have a sense that I could be 10X more powerful.”1,2 Delivered amid his reflections on AI’s rapid evolution, it underscores his personal sense of urgency as tools like large language models (LLMs) redefine developers’ roles from code writers to orchestrators of intelligent systems.2

Context of the Quote

Karpathy shared this introspection as part of his broader commentary on the programming profession’s transformation, likely tied to his June 17, 2025, keynote at AI Startup School in San Francisco titled “Software Is Changing (Again).”4 In it, he outlined Software 3.0—a paradigm where LLMs enable natural language as the primary programming interface, allowing AI to generate code, design systems, and even self-improve with minimal human input.1,4,5 The quote reflects his firsthand experience: traditional Software 1.0 (handwritten code) and Software 2.0 (neural networks trained on data) are giving way to 3.0, where programmers contribute “sparse” high-level guidance amid AI-generated code, evoking a feeling of both lag and untapped potential.1,2 He likens developers to “virtual managers” overseeing AI collaborators, focusing on architecture, decomposition, and ethics rather than syntax.2 This shift mirrors historical leaps—like from machine code to high-level languages—but accelerates via tools like GitHub Copilot, making elite programmers those who master prompt engineering and human-AI loops.2,4
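
A minimal sketch of the contrast Karpathy draws (the `call_llm` function is a hypothetical placeholder, not a real API): in Software 1.0 the rule is hand-written; in Software 3.0 the behaviour is specified in natural language and delegated to a model, with the human supplying intent and oversight.

```python
def is_positive_v1(review: str) -> bool:
    # Software 1.0: explicit, hand-written rules -- transparent but brittle.
    return any(word in review.lower() for word in ("great", "excellent", "love"))

def is_positive_v3(review: str, call_llm) -> bool:
    # Software 3.0: the "program" is a prompt; call_llm is a hypothetical
    # stand-in for any LLM completion API.
    prompt = f"Answer yes or no: is this product review positive?\n\n{review}"
    return call_llm(prompt).strip().lower().startswith("yes")

print(is_positive_v1("I love this keyboard"))                    # True
print(is_positive_v3("I love this keyboard", lambda p: "Yes"))   # True, with a stubbed model
```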

Backstory on Andrej Karpathy

Born in Slovakia and raised in Canada, Andrej Karpathy earned his PhD in computer vision at Stanford University, where he architected and led CS231n, the first deep learning course there, now one of Stanford’s most popular.3 A founding member of OpenAI, he advanced generative models and reinforcement learning. At Tesla (2017–2022), as Senior Director of AI, he led Autopilot vision, data labeling, neural net training, and deployment on custom inference chips, pushing toward Full Self-Driving.3,4 Briefly involved in Tesla Optimus, he left to found Eureka Labs, modernizing education with AI.3 Known as an “AI guru” for viral lectures like “The spelled-out intro to neural networks” and zero-to-hero LLM courses, Karpathy embodies the transition to Software 3.0, having deleted C++ code in favor of growing neural nets at Tesla.3,4

Leading Theorists on Software Paradigms and AI-Driven Programming

Karpathy’s framework builds on foundational ideas from deep learning pioneers. Key figures include:

  • Yann LeCun, Yoshua Bengio, and Geoffrey Hinton (the “Godfathers of AI”): Their 2010s work on deep neural networks birthed Software 2.0, where optimization on massive datasets replaces explicit programming. LeCun (Meta AI chief) pioneered convolutional nets; Bengio advanced sequence models; Hinton championed backpropagation. Their Turing Awards (2018) validated data-driven learning, enabling Karpathy’s Tesla-scale deployments.1

  • Ian Goodfellow (GAN inventor, 2014): His Generative Adversarial Networks prefigured Software 3.0’s generative capabilities, where AI creates code and data autonomously, blurring human-AI creation boundaries.1

  • Andrej Karpathy himself: Extends these into Software 3.0, emphasizing recursive self-improvement (AI writing AI) and “vibe coding” via natural language, as in his 2025 talks.1,4

  • Related influencers: Fei-Fei Li (Stanford, co-creator of ImageNet) scaled vision datasets fueling Software 2.0; Ilya Sutskever (OpenAI co-founder) drove LLMs like GPT, powering 3.0’s code synthesis.3

This evolution demands programmers adapt: curricula must prioritize AI collaboration over syntax, with humans excelling in judgment and oversight amid accelerating abstraction.1,2

References

1. https://inferencebysequoia.substack.com/p/andrej-karpathys-software-30-and

2. https://ytosko.dev/blog/andrej-karpathy-reflects-on-ais-impact-on-programming-profession

3. https://karpathy.ai

4. https://www.youtube.com/watch?v=LCEmiRjPEtQ

5. https://www.cio.com/article/4085335/the-future-of-programming-and-the-new-role-of-the-programmer-in-the-ai-era.html

read more
Term: Language Processing Unit (LPU)

Term: Language Processing Unit (LPU)

“A Language Processing Unit (LPU) is a specialized processor designed specifically to accelerate tasks related to natural language processing (NLP) and the inference of large language models (LLMs). It is a purpose-built chip engineered to handle the unique demands of language tasks.” – Language Processing Unit (LPU)

A Language Processing Unit (LPU) is a specialised processor purpose-built to accelerate natural language processing (NLP) tasks, particularly the inference phase of large language models (LLMs), by optimising sequential data handling and memory bandwidth utilisation.1,2,3,4

Core Definition and Purpose

LPUs address the unique computational demands of language-based AI workloads, which involve sequential processing of text data—such as tokenisation, attention mechanisms, sequence modelling, and context handling—rather than the parallel computations suited to graphics processing units (GPUs).1,4,6 Unlike general-purpose CPUs (flexible but slow for deep learning) or GPUs (excellent for matrix operations and training but inefficient for NLP inference), LPUs prioritise low-latency, high-throughput inference for pre-trained LLMs, achieving up to 10x greater energy efficiency and substantially faster speeds.3,6
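
A minimal sketch of why inference is inherently sequential (the `next_token` function is a placeholder for a full model forward pass, not a real implementation): each generated token depends on all previous tokens, so per-step latency, rather than bulk parallel throughput, bounds generation speed.

```python
def next_token(context: list[int]) -> int:
    return (sum(context) + 1) % 50_000  # placeholder for a model forward pass

def generate(prompt_tokens: list[int], steps: int) -> list[int]:
    tokens = list(prompt_tokens)
    for _ in range(steps):              # strictly one token after another
        tokens.append(next_token(tokens))
    return tokens

print(len(generate([101, 202, 303], steps=32)))  # 35 tokens: 3 prompt + 32 generated
```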

Key differentiators include:

  • Sequential optimisation: Designed for transformer-based models where data flows predictably, unlike GPUs’ parallel “hub-and-spoke” model that incurs data paging overhead.1,3,4
  • Deterministic execution: Every clock cycle is predictable, eliminating resource contention for compute and bandwidth.3
  • High scalability: Supports seamless chip-to-chip data “conveyor belts” without routers, enabling near-perfect scaling in multi-device systems.2,3
How the processor types compare:

  • CPU: flexible and broadly compatible, but limited parallelism makes it slow for LLMs; best for general tasks.
  • GPU: excels at parallel matrix operations and supports training, but is inefficient for sequential NLP inference; best for broad AI workloads.
  • LPU: optimised for sequential NLP with fast inference and efficient memory use, but still emerging and limited beyond language tasks; best for LLM inference.6

Architectural Features

LPUs typically employ a Tensor Streaming Processor (TSP) architecture, featuring software-controlled data pipelines that stream instructions and operands like an assembly line.1,3,7 Notable components include:

  • Local Memory Unit (LMU): Multi-bank register file for high-bandwidth scalar-vector access.2
  • Custom Instruction Set Architecture (ISA): Covers memory access (MEM), compute (COMP), networking (NET), and control instructions, with out-of-order execution for latency reduction.2
  • Expandable synchronisation links: Hide data sync overhead in distributed setups, yielding up to 1.75× speedup when doubling devices.2
  • No external memory such as HBM; relies on on-chip SRAM (e.g., 230MB per chip) and massive core integration for billion-parameter models (see the sizing sketch after this list).2
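
A rough sizing sketch of that constraint (the model size and weight precision below are assumptions, not vendor figures): without HBM, a large model’s weights must be sharded across the SRAM of many chips.

```python
params = 70e9              # assumed 70-billion-parameter model
bytes_per_param = 1        # assumed 8-bit weights
sram_per_chip = 230e6      # ~230MB on-chip SRAM per chip, as cited above

chips_needed = params * bytes_per_param / sram_per_chip
print(round(chips_needed))  # roughly 300 chips just to hold the weights
```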

Proprietary implementations, such as those in inference engines, maximise bandwidth utilisation (up to 90%) for high-speed text generation.1,2,3

Best Related Strategy Theorist: Jonathan Ross

The foremost theorist linked to the LPU is Jonathan Ross, founder and CEO of Groq, the pioneering company that invented and commercialised the LPU as a new processor category in 2016.1,3,4 Ross’s strategic vision reframed AI hardware strategy around deterministic, assembly-line architectures tailored to LLM inference bottlenecks—compute density and memory bandwidth—shifting from GPU dominance to purpose-built sequential processing.3,5,7

Biography and Relationship to LPU

Born in the United States, Ross began his career as an engineer at Google, where he specialised in machine learning acceleration and novel compute architectures, initiating the Tensor Processing Unit (TPU) as a side project and helping lead its development. The TPU, the first ASIC for ML inference, influenced hyperscale AI by prioritising efficiency over versatility.3

In 2016, Ross left Google to establish Groq (initially named Rebellious Computing, rebranded in 2017), driven by the insight that GPUs were suboptimal for the emerging era of LLMs requiring ultra-low-latency inference.3,7 He strategically positioned the LPU as a “new class of processor,” introducing the TSP in 2023 via GroqCloud™, which powers real-time AI applications at speeds unattainable by GPUs.1,3 Ross’s backstory reflects a theorist-practitioner approach: his TPU experience exposed GPU limitations in sequential workloads, leading to LPU’s conveyor-belt determinism and scalability—core to Groq’s market disruption, including partnerships for embedded AI.2,3 Under his leadership, Groq raised over $1 billion in funding by 2025, validating LPU as a strategic pivot in AI infrastructure.3,4 Ross continues to advocate LPU’s role in democratising fast, cost-effective inference, authoring key publications and demos that benchmark its superiority.3,7

References

1. https://datanorth.ai/blog/gpu-lpu-npu-architectures

2. https://arxiv.org/html/2408.07326v1

3. https://groq.com/blog/the-groq-lpu-explained

4. https://www.purestorage.com/knowledge/what-is-lpu.html

5. https://www.turingpost.com/p/fod41

6. https://www.geeksforgeeks.org/nlp/what-are-language-processing-units-lpus/

7. https://blog.codingconfessions.com/p/groq-lpu-design

read more
Quote: Marc Wilson – Global Advisors

Quote: Marc Wilson – Global Advisors

“Parents want to know what their kids should study in the age of AI – curiosity, agency, ability to learn and adapt, diligence, resilience, accountability, trust, ethics and teamwork define winners in the age of AI more than knowledge.” – Marc Wilson – Global Advisors

Over the last few years, I have spent thousands of hours inside AI systems – not as a spectator, but as someone trying to make them do real work. Not toy demos. Not slideware. I’m talking about actual consulting workflows: research, synthesis, modeling, data extraction, and client delivery.

What that experience strips away is the illusion that the future belongs to people who simply “know how to use AI.”

Every week there is a new tool, a new model, a new framework. What looked like a hard-won advantage six months ago is now either automated or irrelevant. Prompt engineering and tool-specific workflows are being collapsed into the models themselves. These are transitory skills. They matter in the moment, but they do not compound.

What does compound is agency.

Agency is the ability to look at a messy, underspecified problem and decide it will not beat you. It is the instinct to decompose a system, to experiment, and to push past failure when there is no clear map. AI does not remove the need for that; it amplifies it. The people who get the most from these systems are not the ones who know the “right” prompts – they are the ones who iterate until the system produces the required outcome.

In practice, that looks different from what most people imagine. The most effective practitioners don’t ask, “What prompt should I use?”

They ask, “How do I get this result?”

They iterate. They swap tools. They reframe the problem. They are not embarrassed by trial-and-error or a hallucination because they aren’t outsourcing responsibility to the machine. They own the output.

Parents ask what their children should study for the “age of AI.” The question is understandable, but it misses the mark. Knowledge has never been more abundant. The marginal value of knowing one more thing is collapsing. What is becoming scarce is the ability to turn knowledge into action.

That is the core of agency:

  • Curiosity to explore and continuously learn and adapt.

  • Diligence to care about the details.

  • Resilience in the face of failures and constant change.

  • Accountability to own the outcome.

  • Ethics that focus on humanity.

  • Trust and teamwork, built through genuine relationships with people.

These qualities are not “soft.” They are decisive.

Machines can write, code and reason at superhuman speed. The differentiator is not who has the most information; it is who takes responsibility for the outcome.

AI will reward the people who show up, take ownership and find a way through uncertainty. Everything else – including today’s fashionable technical skills – will be rewritten.

read more
Quote: Demis Hassabis- DeepMind co-founder, CEO

Quote: Demis Hassabis- DeepMind co-founder, CEO

“Actually, I think [China is] closer to the US frontier models than maybe we thought one or two years ago. Maybe they’re only a matter of months behind at this point.” – Demis Hassabis – DeepMind co-founder, CEO

Context of the Quote

In a CNBC Original podcast, The Tech Download, aired on 6 January 2026, Demis Hassabis, co-founder and CEO of Google DeepMind, offered a candid assessment of China’s AI capabilities. He stated that Chinese AI models are now just a matter of months behind leading US frontier models, a significant narrowing from perceptions one or two years prior1,3,5. Hassabis highlighted models from Chinese firms like DeepSeek, Alibaba, and Zhipu AI, which have delivered strong benchmark performances despite US chip export restrictions1,3,5.

However, he tempered optimism by questioning China’s capacity for true innovation, noting they have yet to produce breakthroughs like the transformer architecture that powers modern generative AI. ‘Inventing something is 100 times harder than replicating it,’ he emphasised, pointing to cultural and mindset challenges in fostering exploratory research1,4,5. This interview underscores ongoing US-China AI competition amid geopolitical tensions, including bans on advanced Nvidia chips, though approvals for models like the H200 offer limited relief2,5.

Who is Demis Hassabis?

Demis Hassabis is a British AI researcher, entrepreneur, and neuroscientist whose career bridges neuroscience, gaming, and artificial intelligence. Born in 1976 in London to a Greek Cypriot father and Chinese Singaporean mother, he displayed prodigious talent early, reaching chess master standard by 13 and co-designing the hit game Theme Park as a teenager at Bullfrog Productions1,4.

Hassabis co-founded DeepMind in 2010 with the audacious goal of achieving artificial general intelligence (AGI). His breakthrough came with AlphaGo in 2016, which defeated world Go champion Lee Sedol, demonstrating deep reinforcement learning’s power1,4. Google acquired DeepMind in 2014 for £400 million, and Hassabis now leads as CEO, overseeing models like Gemini, which recently topped AI benchmarks3,4.

In 2024, he shared the Nobel Prize in Chemistry with John Jumper and David Baker for AlphaFold2, which predicts protein structures with unprecedented accuracy, revolutionising biology1,4. Hassabis predicts AGI within 5-10 years, down from his initial 20-year estimate, and regrets Google’s slower commercialisation of innovations like the transformer and AlphaGo despite inventing ‘90% of the technology everyone uses today’1,4. DeepMind operates like a ‘modern-day Bell Labs,’ prioritising fundamental research5.

Leading Theorists and the Subject Matter: The AI Frontier and Innovation Race

The quote touches on frontier AI models – state-of-the-art large language models (LLMs) pushing performance limits – and the distinction between replication and invention. Key theorists shaping this field include:

  • Geoffrey Hinton, Yann LeCun, and Yoshua Bengio (‘Godfathers of AI’): Pioneered deep learning. Hinton, emeritus at the University of Toronto and formerly at Google, advanced backpropagation and neural networks. LeCun (Meta) developed convolutional networks for vision. Bengio (Mila) focused on sequence modelling. Their work underpins transformers1,5.
  • Ilya Sutskever: OpenAI co-founder, key in GPT series and reinforcement learning from human feedback (RLHF). Left to found Safe Superintelligence Inc., emphasising AGI safety3.
  • Andrej Karpathy: Ex-OpenAI/Tesla, popularised transformers via tutorials; now at his own venture5.
  • The Transformer Architects: Vaswani et al. (Google, 2017) introduced the transformer in ‘Attention is All You Need,’ enabling parallel training and scaling laws that birthed ChatGPT and Gemini (a minimal sketch of its attention mechanism follows this list). Hassabis notes China’s lack of equivalents1,4,5.
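
For reference, a minimal NumPy sketch of the scaled dot-product attention at the core of that architecture, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V:

```python
import numpy as np

def attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                           # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # softmax over the keys
    return weights @ V                                        # weighted mix of values

Q = np.random.randn(4, 8)
K = np.random.randn(6, 8)
V = np.random.randn(6, 8)
print(attention(Q, K, V).shape)  # (4, 8): one mixed value vector per query position
```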

China’s progress, via firms like DeepSeek (cost-efficient models on lesser chips) and giants Alibaba/Baidu/Tencent, shows engineering prowess but lags in paradigm shifts2,3,5. US leads in compute (Nvidia GPUs) and innovation ecosystems, though restrictions may spur domestic chips like Huawei’s2,3. Hassabis’ view challenges US underestimation, aligning with Nvidia’s Jensen Huang: America is ‘not far ahead’5.

This backdrop highlights AI’s dual nature: rapid catch-up via scaling compute/data, versus elusive invention requiring bold theory1,2.

References

1. https://en.sedaily.com/international/2026/01/16/deepmind-ceo-hassabis-china-may-catch-up-in-ai-but-true

2. https://intellectia.ai/news/stock/google-deepmind-ceo-claims-chinas-ai-is-just-months-behind

3. https://www.investing.com/news/stock-market-news/china-ai-models-only-months-behind-us-efforts-deepmind-ceo-tells-cnbc-4450966

4. https://biz.chosun.com/en/en-it/2026/01/16/IQH4RV54VVGJVGTSYHWSARHOEU/

5. https://timesofindia.indiatimes.com/technology/tech-news/google-deepmind-ceo-demis-hassabis-corrects-almost-everyone-in-america-on-chinas-ai-capability-they-are-not-/articleshow/126561720.cms

6. https://brief.bismarckanalysis.com/s/ai-2026

read more
Term: GPU

Term: GPU

“A Graphics Processing Unit (GPU) is a specialised processor designed for parallel computing tasks, excelling at handling thousands of threads simultaneously, unlike CPUs which prioritise sequential processing. It is widely used for AI.” – GPU

A Graphics Processing Unit (GPU) is a specialised electronic circuit designed to accelerate graphics rendering, image processing, and parallel mathematical computations by executing thousands of simpler operations simultaneously across numerous cores.1,2,4,6

Core Characteristics and Architecture

GPUs excel at parallel processing, dividing tasks into subsets handled concurrently by hundreds or thousands of smaller, specialised cores, in contrast to CPUs which prioritise sequential execution with fewer, more versatile cores.1,3,5,7 This architecture includes dedicated high-bandwidth memory (e.g., GDDR6) for rapid data access, enabling efficient handling of compute-intensive workloads like matrix multiplications essential for 3D graphics, video editing, and scientific simulations.2,5 Originally developed for rendering realistic 3D scenes in games and films, GPUs have evolved into programmable devices supporting general-purpose computing (GPGPU), where they process vector operations far faster than CPUs for suitable applications.1,6
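
A minimal sketch of why this matters for matrix work (NumPy’s vectorised call stands in for parallel dispatch across many cores; the explicit Python loop mimics element-by-element sequential execution):

```python
import numpy as np

a = np.random.rand(256, 256)
b = np.random.rand(256, 256)

def matmul_sequential(a, b):
    # One output element at a time, the way a purely sequential processor would work.
    out = np.zeros((a.shape[0], b.shape[1]))
    for i in range(a.shape[0]):
        for j in range(b.shape[1]):
            out[i, j] = float(np.dot(a[i, :], b[:, j]))
    return out

out_parallel = a @ b  # one bulk operation, readily mapped onto thousands of cores
assert np.allclose(matmul_sequential(a, b), out_parallel)
```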

Historical Evolution and Key Applications

The modern GPU emerged in the 1990s, with Nvidia’s GeForce 256 in 1999 marking the first chip branded as a GPU, transforming fixed-function graphics hardware into flexible processors capable of shaders and custom computations.1,6 Today, GPUs power:

  • Gaming and media: High-resolution rendering and video processing.4,7
  • AI and machine learning: Accelerating neural networks via parallel floating-point operations, outperforming CPUs by orders of magnitude.1,3,5
  • High-performance computing (HPC): Data centres, blockchain, and simulations.1,2

Unlike neural processing units (NPUs), which optimise for low-latency AI with brain-like efficiency, GPUs prioritise raw parallel throughput for graphics and broad compute tasks.1

Best Related Strategy Theorist: Jensen Huang

Jensen Huang, co-founder, president, and CEO of Nvidia Corporation, is the preeminent figure linking GPUs to strategic technological dominance, having pioneered their shift from graphics to AI infrastructure.1

Biography: Born in 1963 in Taiwan, Huang immigrated to the US as a child, earning a BS in electrical engineering from Oregon State University (1984) and an MS from Stanford (1992). In 1993, at age 30, he co-founded Nvidia with Chris Malachowsky and Curtis Priem using $40,000, initially targeting 3D graphics acceleration amid the PC gaming boom. Under his leadership, Nvidia released the GeForce 256 in 1999, the first GPU, revolutionising real-time rendering and establishing market leadership.1,6 Huang’s strategic foresight extended GPUs beyond gaming via CUDA (2006), a platform enabling GPGPU for general computing, unlocking AI applications like deep learning.2,6 By 2026, Nvidia’s GPUs dominate AI training (e.g., via H100/H200 chips), propelling its market cap beyond $3 trillion and Huang’s net worth past $100 billion, placing him among the world’s wealthiest people. His “all-in” bets, pivoting to AI during crypto winters and data centre shifts, exemplify visionary strategy, blending hardware innovation with ecosystem control (e.g., cuDNN libraries).1,5 Huang’s relationship to GPUs is foundational: as Nvidia’s architect, he defined their parallel architecture, foreseeing AI utility decades ahead, positioning GPUs as the “new CPU” for the AI era.3

References

1. https://www.ibm.com/think/topics/gpu

2. https://aws.amazon.com/what-is/gpu/

3. https://kempnerinstitute.harvard.edu/news/graphics-processing-units-and-artificial-intelligence/

4. https://www.arm.com/glossary/gpus

5. https://www.min.io/learn/graphics-processing-units

6. https://en.wikipedia.org/wiki/Graphics_processing_unit

7. https://www.supermicro.com/en/glossary/gpu

8. https://www.intel.com/content/www/us/en/products/docs/processors/what-is-a-gpu.html

read more
Quote: Nate B Jones – AI News & Strategy Daily

Quote: Nate B Jones – AI News & Strategy Daily

“Execution capacity isn’t scarce anymore. Ten days, four people, and [Anthropic are] shipping 60 to 100 releases daily. Execution capacity is not the problem.” – Nate B Jones – AI News & Strategy Daily

Nate B Jones, a prominent voice in AI news and strategy, made this striking observation on 15 January 2026, highlighting how execution speed at leading AI firms like Anthropic has rendered traditional capacity constraints obsolete.

Context of the Quote

The quote originates from a discussion in AI News & Strategy Daily, capturing the blistering pace of development at Anthropic, the creators of the Claude AI models. Jones points to a specific instance where just four people, over ten days, facilitated 60 to 100 daily releases. This underscores a paradigm shift: in AI labs, small teams leveraging advanced tools now achieve output volumes that once required vast resources. The statement challenges the notion that scaling human execution remains a barrier, positioning it instead as a solved problem amid accelerating AI capabilities.1,4

Backstory on Nate B Jones

Nate B Jones is a key commentator on AI developments, known for his daily newsletter AI News & Strategy Daily. His insights dissect breakthroughs, timelines, and strategic implications in artificial intelligence. Jones frequently analyses outputs from major players like Anthropic, OpenAI, and others, providing data-driven commentary on progress towards artificial general intelligence (AGI). His work emphasises empirical evidence from releases, funding rounds, and capability benchmarks, making him a go-to source for professionals tracking the AI race. This quote, delivered via a YouTube discussion, exemplifies his focus on how AI is redefining productivity in software engineering and research.

Anthropic’s Blazing Execution Pace

Anthropic, founded in 2021 by former OpenAI executives including CEO Dario Amodei, has emerged as a frontrunner in safe AI systems. Backed by over $23 billion in funding, including major investments from Microsoft and Nvidia, the firm achieved a $5 billion revenue run rate by August 2025 and is projected to hit $9 billion annualised by year-end. Speculation surrounds a potential IPO as early as 2026, with valuations soaring to $300-350 billion amid a massive funding round.2

Internally, Anthropic’s engineers report transformative AI integration. An August 2025 survey of 132 staff revealed Claude enabling complex tasks with fewer human interventions: tool calls per transcript rose 116% to 21.2 consecutive actions, while human turns dropped 33% to 4.1 on average. This aligns directly with Jones’s claim of hyper-efficient shipping, where AI handles code generation, edits, and commands autonomously.4

Broader metrics from Anthropic’s January 2026 Economic Index show explosive Claude usage growth, with rapid diffusion despite uneven global adoption tied to GDP levels.5 Predictions from CEO Dario Amodei include AI writing 90% of code by mid-2025 (partially realised) and nearly all by March 2026, fuelling daily release cadences.1

Leading Theorists on AI Execution and Speed

  • Dario Amodei (Anthropic CEO): A pioneer in scalable AI oversight, Amodei forecasts powerful AI by early 2027, with systems operating at 10x-100x human speeds on multi-week tasks. His ‘Machines of Loving Grace’ essay outlines AGI timelines as early as 2026, driving Anthropic’s aggressive R&D.1
  • Jakob Nielsen (UX and AI Forecaster): Nielsen predicts AI will handle 39-hour human tasks by end-2026, with capability doubling roughly every 4 months, from 3-second tasks (GPT-2, 2019) to 5-hour tasks (Claude Opus 4.5, late 2025); the arithmetic behind this trajectory is sketched after this list. He highlights examples like AI designing infographics in under a minute, amplifying execution velocity.3
  • Redwood Research Analysts: Bloggers at Redwood detail Anthropic’s AGI bets, noting resource repurposing for millions of model instances and AI accelerating engineering 3x-10x by late 2026. They anticipate full R&D automation medians shifting to 2027-2029 based on milestones like multi-week task success.1
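
A quick check of the arithmetic behind that trajectory (illustrative only; the 4-month figure describes the recent pace, while the average since 2019 works out slower): the implied number of doublings between a 3-second task and a 5-hour task, and the average doubling period over roughly six years.

```python
import math

start_seconds = 3            # GPT-2-era task length cited above (2019)
end_seconds = 5 * 3600       # Claude Opus 4.5-era task length cited above (late 2025)
months_elapsed = 6 * 12      # roughly 2019 to late 2025

doublings = math.log2(end_seconds / start_seconds)
print(round(doublings, 1))                   # ~12.6 doublings
print(round(months_elapsed / doublings, 1))  # ~5.7 months per doubling, averaged over the period
```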

These theorists converge on a narrative of exponential acceleration: AI is not merely assisting but supplanting human bottlenecks in execution, code, and innovation. Jones’s quote encapsulates this consensus, signalling that in 2026, the real frontiers lie beyond mere deployment speed.

References

1. https://blog.redwoodresearch.org/p/whats-up-with-anthropic-predicting

2. https://forgeglobal.com/insights/anthropic-upcoming-ipo-news/

3. https://jakobnielsenphd.substack.com/p/2026-predictions

4. https://www.anthropic.com/research/how-ai-is-transforming-work-at-anthropic

5. https://www.anthropic.com/research/anthropic-economic-index-january-2026-report

6. https://kalshi.com/markets/kxclaude5/claude-5-released/kxclaude5-27

7. https://www.fiercehealthcare.com/ai-and-machine-learning/jpm26-anthropic-launches-claude-healthcare-targeting-health-systems-payers

read more
Quote: Nate B Jones – AI News & Strategy Daily

Quote: Nate B Jones – AI News & Strategy Daily

“Suddenly your risk is timidity. Your risk is lack of courage. The danger isn’t necessarily building the wrong thing, because you’ve got 50 shots [a year] to build the right thing. The danger is not building enough things toward a larger vision that is really transformative for the customer.” – Nate B Jones – AI News & Strategy Daily

This provocative statement emerged from Nate B. Jones’s AI News & Strategy Daily on 15 January 2026, amid accelerating AI advancements reshaping software development and business strategy. Jones challenges conventional risk management in an era where AI tools like Cursor enable engineers to ship code twice as fast, and product managers double productivity through prompt engineering. Execution has become ‘cheaper’, but Jones warns that speed alone breeds quality nightmares – security holes, probabilistic outputs demanding sustained QA, and technical debt from rapid prototyping.1,2

The quote reframes failure: with rapid iteration (50+ attempts yearly), building suboptimal products is survivable. True peril lies in hesitation – failing to generate volume towards a bold, customer-transforming vision. This aligns with Jones’s emphasis on ‘AI native’ approaches, transcending mere acceleration to orchestration, coordination, and human-AI symbiosis for compounding gains.3

Backstory on Nate B. Jones

Nate B. Jones is a leading AI strategist, content creator, and independent analyst whose platforms – including his Substack newsletter, personal site (natebjones.com), and YouTube channel AI News & Strategy Daily (127K subscribers) – deliver ‘deep analysis, actionable frameworks, zero hype’.2,7 He dissects real-world AI implementation, from prompt stacks enhancing workflows to predictions on 2026 breakthroughs like memory advances, agent UIs, continual learning, and recursive self-improvement.5,6

Jones’s work spotlights execution dynamics: automation avalanches make work cheaper, yet spawn trust deficits from ‘dirty’ AI code and jailbreaking needs.1 He advocates team ‘film review’ loops using AI rubrics for decision docs, specs, and risk articulation – turning human skills into scalable drills.3 Videos like ‘The AI Trick That Finally Made Me Better at My Job’ and ‘Debunking AI Myths’ showcase his practical ethos, proving AI’s innovative edge via breakthroughs like AlphaDev’s faster algorithms and AlphaFold’s protein atlas.3,4

Positioned as ‘the most cogent, sensible, and insightful AI resource’, Jones guides ventures towards genuine AI nativity, urging leaders to escape terminal-bound agents for task queues and human-AI coordination.2

Leading Theorists on AI Execution, Speed, and Transformative Vision

Jones’s ideas echo foundational thinkers in AI strategy and rapid iteration:

  • Eric Ries (Lean Startup): Pioneered ‘build-measure-learn’ loops, validating Jones’s ’50 shots’ tolerance for failure. Ries argued validated learning trumps perfect planning, mirroring AI’s cheap execution.1
  • Andrew Ng (AI Pioneer): Emphasises AI’s productivity multiplier but warns of overhype; his advocacy for ‘AI transformation’ aligns with Jones’s customer vision, as seen in AlphaFold’s impact.4
  • Tyler Cowen (Marginal Revolution): Referenced by Jones for pre-AI decision frameworks now supercharged by AI critique loops, enabling ‘athlete-like’ review at scale.3
  • Sam Altman (OpenAI): Drives agentic AI evolution (e.g., recursive self-improvement), fuelling Jones’s 2026 predictions on long-running agents and human attention focus.5
  • Demis Hassabis (DeepMind): AlphaDev and GNoME exemplify AI innovation beyond speed, proving machines discover novel algorithms – validating Jones’s debunking of ‘AI can’t innovate’.4

These theorists collectively underpin Jones’s thesis: in AI’s ‘automation avalanche’, courageously shipping volume towards transformative goals outpaces timid perfectionism.1

Implications for Leaders

Jones contrasts traditional and AI-era risks:

  • Building the wrong thing (traditional risk) versus timidity and lack of volume (AI-era risk).
  • Slow, cautious execution (traditional risk) versus quality and security disasters from unchecked speed (AI-era risk).
  • Single-shot perfection (traditional mindset) versus 50+ iterations towards a bold vision (AI-era mindset).

Jones’s insight demands a paradigm shift: harness AI for fearless experimentation, sustained quality, and visionary scale.

References

1. https://natesnewsletter.substack.com/p/2026-sneak-peek-the-first-job-by-9ac

2. https://www.natebjones.com

3. https://www.youtube.com/watch?v=Td_q0sHm6HU

4. https://www.youtube.com/watch?v=isuzSmJkYlc

5. https://www.youtube.com/watch?v=pOb0pjXpn6Q

6. https://natesnewsletter.substack.com/p/my-prompt-stack-for-work-16-prompts

7. https://www.youtube.com/@NateBJones

read more
