
Global Advisors | Quantified Strategy Consulting

Quote: Andrej Karpathy

“I’ve never felt this much behind as a programmer. The profession is being dramatically refactored as the bits contributed by the programmer are increasingly sparse and between. I have a sense that I could be 10X more powerful.” – Andrej Karpathy – AI guru

Andrej Karpathy, a pioneering AI researcher, captures the profound disruption AI is bringing to programming in this quote: “I’ve never felt this much behind as a programmer. The profession is being dramatically refactored as the bits contributed by the programmer are increasingly sparse and between. I have a sense that I could be 10X more powerful.”1,2 Delivered amid his reflections on AI’s rapid evolution, it underscores his personal sense of urgency as tools like large language models (LLMs) redefine developers’ roles from code writers to orchestrators of intelligent systems.2

Context of the Quote

Karpathy shared this introspection as part of his broader commentary on the programming profession’s transformation, likely tied to his June 17, 2025, keynote at AI Startup School in San Francisco titled “Software Is Changing (Again).”4 In it, he outlined Software 3.0—a paradigm where LLMs enable natural language as the primary programming interface, allowing AI to generate code, design systems, and even self-improve with minimal human input.1,4,5 The quote reflects his firsthand experience: traditional Software 1.0 (handwritten code) and Software 2.0 (neural networks trained on data) are giving way to 3.0, where programmers contribute “sparse” high-level guidance amid AI-generated code, evoking a feeling of both lag and untapped potential.1,2 He likens developers to “virtual managers” overseeing AI collaborators, focusing on architecture, decomposition, and ethics rather than syntax.2 This shift mirrors historical leaps—like from machine code to high-level languages—but accelerates via tools like GitHub Copilot, making elite programmers those who master prompt engineering and human-AI loops.2,4

Backstory on Andrej Karpathy

Born in Slovakia and raised in Canada, Andrej Karpathy earned his PhD in computer vision at Stanford University, where he architected and led CS231n, the first deep learning course there, now one of Stanford’s most popular.3 A founding member of OpenAI, he advanced generative models and reinforcement learning. At Tesla (2017–2022), as Senior Director of AI, he led Autopilot vision, data labeling, neural net training, and deployment on custom inference chips, pushing toward Full Self-Driving.3,4 After brief involvement with Tesla Optimus and a short return to OpenAI, he founded Eureka Labs to modernise education with AI.3 Known as an “AI guru” for viral lectures like “The spelled-out intro to neural networks” and zero-to-hero LLM courses, Karpathy embodies the transition to Software 3.0, having deleted C++ code in favour of growing neural nets at Tesla.3,4

Leading Theorists on Software Paradigms and AI-Driven Programming

Karpathy’s framework builds on foundational ideas from deep learning pioneers. Key figures include:

  • Yann LeCun, Yoshua Bengio, and Geoffrey Hinton (the “Godfathers of AI”): Their 2010s work on deep neural networks birthed Software 2.0, where optimization on massive datasets replaces explicit programming. LeCun (Meta AI chief) pioneered convolutional nets; Bengio advanced sequence models; Hinton championed backpropagation. Their shared 2018 Turing Award validated data-driven learning, enabling Karpathy’s Tesla-scale deployments.1

  • Ian Goodfellow (GAN inventor, 2014): His Generative Adversarial Networks prefigured Software 3.0’s generative capabilities, where AI creates code and data autonomously, blurring human-AI creation boundaries.1

  • Andrej Karpathy himself: Extends these into Software 3.0, emphasizing recursive self-improvement (AI writing AI) and “vibe coding” via natural language, as in his 2025 talks.1,4

  • Related influencers: Fei-Fei Li (Stanford, co-creator of ImageNet) scaled vision datasets fueling Software 2.0; Ilya Sutskever (OpenAI co-founder) drove LLMs like GPT, powering 3.0’s code synthesis.3

This evolution demands programmers adapt: curricula must prioritize AI collaboration over syntax, with humans excelling in judgment and oversight amid accelerating abstraction.1,2

References

1. https://inferencebysequoia.substack.com/p/andrej-karpathys-software-30-and

2. https://ytosko.dev/blog/andrej-karpathy-reflects-on-ais-impact-on-programming-profession

3. https://karpathy.ai

4. https://www.youtube.com/watch?v=LCEmiRjPEtQ

5. https://www.cio.com/article/4085335/the-future-of-programming-and-the-new-role-of-the-programmer-in-the-ai-era.html


Term: Davos

“Davos refers to the annual, invitation-only meeting of global political, business, academic, and civil society leaders held every January in the Swiss Alpine town of Davos-Klosters. It acts as a premier, high-profile platform for discussing pressing global economic, social, and political issues.” – Davos

Davos represents far more than a simple annual conference; it embodies a transformative model of global governance and problem-solving that has evolved significantly since its inception. Held each January in the Swiss Alpine resort town of Davos-Klosters, this invitation-only gathering convenes over 2,500 leaders spanning business, government, civil society, academia, and media to address humanity’s most pressing challenges.1,7

The Evolution and Purpose of Davos

Founded in 1971 by German engineer Klaus Schwab as the European Management Symposium, Davos emerged from a singular vision: that businesses should serve all stakeholders – employees, suppliers, communities, and the broader society – rather than shareholders alone.1 This foundational concept, known as stakeholder theory, remains central to the World Economic Forum’s mission today.1 The organisation formalised this philosophy through the Davos Manifesto in 1973, which was substantially renewed in 2020 to address the challenges of the Fourth Industrial Revolution.1,3

The Forum’s evolution reflects a fundamental shift in how global problems are addressed. Rather than relying solely on traditional nation-state institutions established after the Second World War – such as the International Monetary Fund, World Bank, and United Nations – Davos pioneered what scholars term a “Networked Institution.”2 This model brings together independent parties from civil society, the private sector, government, and individual stakeholders who perceive shared global problems and coordinate their activities to make progress, rather than working competitively in isolation.2

Tangible Impact and Policy Outcomes

Davos has demonstrated concrete influence on global affairs. In 1988, Greece and Türkiye averted armed conflict through an agreement finalised at the meeting.1 The 1990s witnessed a historic handshake that helped end apartheid in South Africa, and the platform served as the venue for announcing the UN Global Compact, calling on companies to align operations with human rights principles.1 More recently, in 2023, the United States announced a new development fund programme at Davos, and global CEOs agreed to support a free trade agreement in Africa.1 The Forum also launched Gavi, the vaccine alliance, in 2000 – an initiative that now helps vaccinate nearly half the world’s children and played a crucial role in delivering COVID-19 vaccines to vulnerable countries.6

The Davos Manifesto and Stakeholder Capitalism

The 2020 Davos Manifesto formally established that the World Economic Forum is guided by stakeholder capitalism, a concept positing that corporations should deliver value not only to shareholders but to all stakeholders, including employees, society, and the planet.3 This framework commits businesses to three interconnected responsibilities:

  • Acting as stewards of the environmental and material universe for future generations, protecting the biosphere and championing a circular, shared, and regenerative economy5
  • Responsibly managing near-term, medium-term, and long-term value creation in pursuit of sustainable shareholder returns that do not sacrifice the future for the present5
  • Fulfilling human and societal aspirations as part of the broader social system, measuring performance not only on shareholder returns but also on environmental, social, and governance objectives5

Contemporary Relevance and Structure

The World Economic Forum operates as an international not-for-profit organisation headquartered in Geneva, Switzerland, with formal institutional status granted by the Swiss government.2,3 Its mission is to improve the state of the world through public-private cooperation, guided by core values of integrity, impartiality, independence, respect, and excellence.8 The Forum addresses five interconnected global challenges: Growth, Geopolitics, Technology, People, and Planet.8

Davos functions as the touchstone event within the Forum’s year-round orchestration of leaders from civil society, business, and government.2 Beyond the annual meeting, the organisation maintains continuous engagement through year-round communities spanning industries, regions, and generations, transforming ideas into action through initiatives and dialogues.4 The 2026 meeting, themed “A Spirit Of Dialogue,” emphasises advancing cooperation to address global issues, exploring the impact of innovation and emerging technologies, and promoting inclusive, sustainable approaches to human capital development.7

Klaus Schwab: The Architect of Davos

Klaus Schwab (born 1938) stands as the visionary founder and defining intellectual force behind Davos and the World Economic Forum. A German engineer and economist educated at ETH Zurich, the University of Fribourg, and Harvard’s Kennedy School of Government, Schwab possessed an unusual conviction: that business leaders bore responsibility not merely to shareholders but to society writ large. This belief, radical for the early 1970s, crystallised into the founding of the European Management Symposium in 1971.

Schwab’s relationship with Davos transcends institutional leadership; he fundamentally shaped its philosophical architecture. His stakeholder theory challenged the prevailing shareholder primacy model that dominated Western capitalism, proposing instead that corporations exist within complex ecosystems of interdependence. This vision proved prescient, gaining mainstream acceptance only decades later as environmental concerns, social inequality, and governance failures exposed the limitations of pure shareholder capitalism.

Beyond founding the Forum, Schwab authored influential works including “The Fourth Industrial Revolution” (2016), a concept he coined to describe the convergence of digital, biological, and physical technologies reshaping society.1 His intellectual contributions extended the Forum’s reach from a business conference into a comprehensive platform addressing geopolitical tensions, technological disruption, and societal transformation. Schwab’s personal diplomacy – his ability to convene adversaries and facilitate dialogue – became embedded in Davos’s culture, establishing it as a neutral space where competitors and rivals could engage constructively.

Schwab’s legacy reflects a particular European sensibility: the belief that enlightened capitalism, properly structured around stakeholder interests, could serve as a force for global stability and progress. Whether one views this as visionary or naïve, his influence on contemporary governance models and corporate responsibility frameworks remains substantial. The expansion of Davos from a modest gathering of European executives to a global institution addressing humanity’s most complex challenges represents perhaps the most tangible measure of Schwab’s impact on twenty-first-century global affairs.

References

1. https://www.weforum.org/stories/2024/12/davos-annual-meeting-everything-you-need-to-know/

2. https://www.weforum.org/stories/2016/01/the-meaning-of-davos/

3. https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-davos-and-the-world-economic-forum

4. https://www.weforum.org/about/who-we-are/

5. https://en.wikipedia.org/wiki/World_Economic_Forum

6. https://www.zurich.com/media/magazine/2022/what-is-davos-your-guide-to-the-world-economic-forums-annual-meeting

7. https://www.oliverwyman.com/our-expertise/events/world-economic-forum-davos.html

8. https://www.weforum.org/about/world-economic-forum/


Term: Language Processing Unit (LPU)

“A Language Processing Unit (LPU) is a specialized processor designed specifically to accelerate tasks related to natural language processing (NLP) and the inference of large language models (LLMs). It is a purpose-built chip engineered to handle the unique demands of language tasks.” – Language Processing Unit (LPU)

A Language Processing Unit (LPU) is a specialised processor purpose-built to accelerate natural language processing (NLP) tasks, particularly the inference phase of large language models (LLMs), by optimising sequential data handling and memory bandwidth utilisation.1,2,3,4

Core Definition and Purpose

LPUs address the unique computational demands of language-based AI workloads, which involve sequential processing of text data—such as tokenisation, attention mechanisms, sequence modelling, and context handling—rather than the parallel computations suited to graphics processing units (GPUs).1,4,6 Unlike general-purpose CPUs (flexible but slow for deep learning) or GPUs (excellent for matrix operations and training but inefficient for NLP inference), LPUs prioritise low-latency, high-throughput inference for pre-trained LLMs, achieving up to 10x greater energy efficiency and substantially faster speeds.3,6

Key differentiators include:

  • Sequential optimisation: Designed for transformer-based models where data flows predictably, unlike GPUs’ parallel “hub-and-spoke” model that incurs data paging overhead.1,3,4
  • Deterministic execution: Every clock cycle is predictable, eliminating resource contention for compute and bandwidth.3
  • High scalability: Supports seamless chip-to-chip data “conveyor belts” without routers, enabling near-perfect scaling in multi-device systems.2,3

Processor | Key Strengths | Key Weaknesses | Best For
CPU | Flexible, broadly compatible | Limited parallelism; slow for LLMs | General tasks
GPU | Parallel matrix operations; training support | Inefficient for sequential NLP inference | Broad AI workloads
LPU | Sequential NLP optimisation; fast inference; efficient memory | Emerging; limited beyond language tasks | LLM inference

Source: 6

Architectural Features

LPUs typically employ a Tensor Streaming Processor (TSP) architecture, featuring software-controlled data pipelines that stream instructions and operands like an assembly line.1,3,7 Notable components include:

  • Local Memory Unit (LMU): Multi-bank register file for high-bandwidth scalar-vector access.2
  • Custom Instruction Set Architecture (ISA): Covers memory access (MEM), compute (COMP), networking (NET), and control instructions, with out-of-order execution for latency reduction.2
  • Expandable synchronisation links: Hide data sync overhead in distributed setups, yielding up to 1.75× speedup when doubling devices.2
  • No external memory like HBM; relies on on-chip SRAM (e.g., 230MB per chip) and massive core integration for billion-parameter models.2

Proprietary implementations, such as those in inference engines, maximise bandwidth utilisation (up to 90%) for high-speed text generation.1,2,3
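
A rough way to see why LPU designs emphasise memory bandwidth over raw compute: in batch-size-one autoregressive decoding, each generated token must stream essentially all model weights through the compute units, so peak token throughput is bounded by bandwidth divided by model size. The Python sketch below works through that bound with hypothetical figures; the bandwidth numbers and the 7-billion-parameter, 8-bit model are illustrative assumptions, not vendor specifications.

```python
# Back-of-envelope bound on sequential LLM decode speed (batch size 1).
# Assumption: each token requires streaming every weight once, so
# tokens/s <= memory_bandwidth / model_size_in_bytes.

def max_tokens_per_second(num_params: float, bytes_per_param: float,
                          bandwidth_bytes_per_s: float) -> float:
    """Bandwidth-limited upper bound on autoregressive token throughput."""
    model_bytes = num_params * bytes_per_param
    return bandwidth_bytes_per_s / model_bytes

# Hypothetical 7B-parameter model quantised to 8-bit weights (1 byte each).
PARAMS, BYTES_PER_PARAM = 7e9, 1.0

# Assumed bandwidth figures, order-of-magnitude only.
OFF_CHIP_HBM = 3e12    # ~3 TB/s, typical of high-end GPU HBM
ON_CHIP_SRAM = 80e12   # ~80 TB/s, aggregate on-chip SRAM across many LPU chips

for label, bandwidth in [("Off-chip HBM bound", OFF_CHIP_HBM),
                         ("On-chip SRAM bound", ON_CHIP_SRAM)]:
    rate = max_tokens_per_second(PARAMS, BYTES_PER_PARAM, bandwidth)
    print(f"{label}: {rate:,.0f} tokens/s")
```

The same arithmetic motivates the design choices listed above: keeping weights in on-chip SRAM and scheduling deterministic, router-free data movement raises the usable fraction of that theoretical bandwidth.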

Best Related Strategy Theorist: Jonathan Ross

The foremost theorist linked to the LPU is Jonathan Ross, founder and CEO of Groq, the pioneering company that invented and commercialised the LPU as a new processor category in 2016.1,3,4 Ross’s strategic vision reframed AI hardware strategy around deterministic, assembly-line architectures tailored to LLM inference bottlenecks—compute density and memory bandwidth—shifting from GPU dominance to purpose-built sequential processing.3,5,7

Biography and Relationship to LPU

Ross, an American engineer, began his career at Google, where he initiated what became the Tensor Processing Unit (TPU) as a ‘20%’ side project and served as one of its original designers – the custom ASIC for machine-learning workloads that influenced hyperscale AI by prioritising efficiency over versatility.3

In 2016, Ross left Google to establish Groq (initially named Rebellious Computing, rebranded in 2017), driven by the insight that GPUs were suboptimal for the emerging era of LLMs requiring ultra-low-latency inference.3,7 He strategically positioned the LPU as a “new class of processor,” introducing the TSP in 2023 via GroqCloud™, which powers real-time AI applications at speeds unattainable by GPUs.1,3 Ross’s backstory reflects a theorist-practitioner approach: his TPU experience exposed GPU limitations in sequential workloads, leading to LPU’s conveyor-belt determinism and scalability—core to Groq’s market disruption, including partnerships for embedded AI.2,3 Under his leadership, Groq raised over $1 billion in funding by 2025, validating LPU as a strategic pivot in AI infrastructure.3,4 Ross continues to advocate LPU’s role in democratising fast, cost-effective inference, authoring key publications and demos that benchmark its superiority.3,7

References

1. https://datanorth.ai/blog/gpu-lpu-npu-architectures

2. https://arxiv.org/html/2408.07326v1

3. https://groq.com/blog/the-groq-lpu-explained

4. https://www.purestorage.com/knowledge/what-is-lpu.html

5. https://www.turingpost.com/p/fod41

6. https://www.geeksforgeeks.org/nlp/what-are-language-processing-units-lpus/

7. https://blog.codingconfessions.com/p/groq-lpu-design


Quote: Marc Wilson – Global Advisors

“Parents want to know what their kids should study in the age of AI – curiosity, agency, ability to learn and adapt, diligence, resilience, accountability, trust, ethics and teamwork define winners in the age of AI more than knowledge.” – Marc Wilson – Global Advisors

Over the last few years, I have spent thousands of hours inside AI systems – not as a spectator, but as someone trying to make them do real work. Not toy demos. Not slideware. I’m talking about actual consulting workflows: research, synthesis, modeling, data extraction, and client delivery.

What that experience strips away is the illusion that the future belongs to people who simply “know how to use AI.”

Every week there is a new tool, a new model, a new framework. What looked like a hard-won advantage six months ago is now either automated or irrelevant. Prompt engineering and tool-specific workflows are being collapsed into the models themselves. These are transitory skills. They matter in the moment, but they do not compound.

What does compound is agency.

Agency is the ability to look at a messy, underspecified problem and decide it will not beat you. It is the instinct to decompose a system, to experiment, and to push past failure when there is no clear map. AI does not remove the need for that; it amplifies it. The people who get the most from these systems are not the ones who know the “right” prompts – they are the ones who iterate until the system produces the required outcome.

In practice, that looks different from what most people imagine. The most effective practitioners don’t ask, “What prompt should I use?”

They ask, “How do I get this result?”

They iterate. They swap tools. They reframe the problem. They are not embarrassed by trial-and-error or a hallucination because they aren’t outsourcing responsibility to the machine. They own the output.

Parents ask what their children should study for the “age of AI.” The question is understandable, but it misses the mark. Knowledge has never been more abundant. The marginal value of knowing one more thing is collapsing. What is becoming scarce is the ability to turn knowledge into action.

That is the core of agency:

  • Curiosity to explore and continuously learn and adapt.

  • Diligence to care about the details.

  • Resilience in the face of failures and constant change.

  • Accountability to own the outcome.

  • Ethics that focus on humanity.

  • Trust built by forming strong relationships.

These qualities are not “soft.” They are decisive.

Machines can write, code and reason at superhuman speed – the differentiator is not who has the most information – it is who takes responsibility for the outcome.

AI will reward the people who show up, take ownership and find a way through uncertainty. Everything else – including today’s fashionable technical skills – will be rewritten.


Quote: Demis Hassabis – DeepMind co-founder, CEO

“Actually, I think [China is] closer to the US frontier models than maybe we thought one or two years ago. Maybe they’re only a matter of months behind at this point.” – Demis Hassabis – DeepMind co-founder, CEO

Context of the Quote

In a CNBC Original podcast, The Tech Download, aired on 6 January 2026, Demis Hassabis, co-founder and CEO of Google DeepMind, offered a candid assessment of China’s AI capabilities. He stated that Chinese AI models are now just a matter of months behind leading US frontier models, a significant narrowing from perceptions one or two years prior1,3,5. Hassabis highlighted models from Chinese firms like DeepSeek, Alibaba, and Zhipu AI, which have delivered strong benchmark performances despite US chip export restrictions1,3,5.

However, he tempered optimism by questioning China’s capacity for true innovation, noting they have yet to produce breakthroughs like the transformer architecture that powers modern generative AI. ‘Inventing something is 100 times harder than replicating it,’ he emphasised, pointing to cultural and mindset challenges in fostering exploratory research1,4,5. This interview underscores ongoing US-China AI competition amid geopolitical tensions, including bans on advanced Nvidia chips, though approvals for models like the H200 offer limited relief2,5.

Who is Demis Hassabis?

Demis Hassabis is a British AI researcher, entrepreneur, and neuroscientist whose career bridges neuroscience, gaming, and artificial intelligence. Born in 1976 in London to a Greek Cypriot father and Chinese Singaporean mother, he displayed prodigious talent early, reaching chess master standard by age 13.1,4

Hassabis co-founded DeepMind in 2010 with the audacious goal of achieving artificial general intelligence (AGI). His breakthrough came with AlphaGo in 2016, which defeated world Go champion Lee Sedol, demonstrating deep reinforcement learning’s power1,4. Google acquired DeepMind in 2014 for £400 million, and Hassabis now leads as CEO, overseeing models like Gemini, which recently topped AI benchmarks3,4.

In 2024, he shared the Nobel Prize in Chemistry with John Jumper and David Baker for AlphaFold2, which predicts protein structures with unprecedented accuracy, revolutionising biology1,4. Hassabis predicts AGI within 5-10 years, down from his initial 20-year estimate, and regrets Google’s slower commercialisation of innovations like the transformer and AlphaGo despite inventing ‘90% of the technology everyone uses today’1,4. DeepMind operates like a ‘modern-day Bell Labs,’ prioritising fundamental research5.

Leading Theorists and the Subject Matter: The AI Frontier and Innovation Race

The quote touches on frontier AI models – state-of-the-art large language models (LLMs) pushing performance limits – and the distinction between replication and invention. Key theorists shaping this field include:

  • Geoffrey Hinton, Yann LeCun, and Yoshua Bengio (‘Godfathers of AI’): Pioneered deep learning. Hinton (University of Toronto, formerly of Google) advanced backpropagation and neural networks. LeCun (Meta) developed convolutional networks for vision. Bengio (Mila) focused on sequence modelling. Their work underpins transformers1,5.
  • Ilya Sutskever: OpenAI co-founder, key in GPT series and reinforcement learning from human feedback (RLHF). Left to found Safe Superintelligence Inc., emphasising AGI safety3.
  • Andrej Karpathy: Ex-OpenAI/Tesla, popularised transformers via tutorials; now at his own venture5.
  • The Transformer Architects: Vaswani et al. (Google, 2017) introduced the transformer in ‘Attention is All You Need,’ enabling parallel training and scaling laws that birthed ChatGPT and Gemini. Hassabis notes China’s lack of equivalents1,4,5.

China’s progress, via firms like DeepSeek (cost-efficient models on lesser chips) and giants Alibaba/Baidu/Tencent, shows engineering prowess but lags in paradigm shifts2,3,5. US leads in compute (Nvidia GPUs) and innovation ecosystems, though restrictions may spur domestic chips like Huawei’s2,3. Hassabis’ view challenges US underestimation, aligning with Nvidia’s Jensen Huang: America is ‘not far ahead’5.

This backdrop highlights AI’s dual nature: rapid catch-up via scaling compute/data, versus elusive invention requiring bold theory1,2.

References

1. https://en.sedaily.com/international/2026/01/16/deepmind-ceo-hassabis-china-may-catch-up-in-ai-but-true

2. https://intellectia.ai/news/stock/google-deepmind-ceo-claims-chinas-ai-is-just-months-behind

3. https://www.investing.com/news/stock-market-news/china-ai-models-only-months-behind-us-efforts-deepmind-ceo-tells-cnbc-4450966

4. https://biz.chosun.com/en/en-it/2026/01/16/IQH4RV54VVGJVGTSYHWSARHOEU/

5. https://timesofindia.indiatimes.com/technology/tech-news/google-deepmind-ceo-demis-hassabis-corrects-almost-everyone-in-america-on-chinas-ai-capability-they-are-not-/articleshow/126561720.cms

6. https://brief.bismarckanalysis.com/s/ai-2026


Term: GPU

“A Graphics Processing Unit (GPU) is a specialised processor designed for parallel computing tasks, excelling at handling thousands of threads simultaneously, unlike CPUs which prioritise sequential processing. It is widely used for AI.” – GPU

A Graphics Processing Unit (GPU) is a specialised electronic circuit designed to accelerate graphics rendering, image processing, and parallel mathematical computations by executing thousands of simpler operations simultaneously across numerous cores.1,2,4,6

Core Characteristics and Architecture

GPUs excel at parallel processing, dividing tasks into subsets handled concurrently by hundreds or thousands of smaller, specialised cores, in contrast to CPUs which prioritise sequential execution with fewer, more versatile cores.1,3,5,7 This architecture includes dedicated high-bandwidth memory (e.g., GDDR6) for rapid data access, enabling efficient handling of compute-intensive workloads like matrix multiplications essential for 3D graphics, video editing, and scientific simulations.2,5 Originally developed for rendering realistic 3D scenes in games and films, GPUs have evolved into programmable devices supporting general-purpose computing (GPGPU), where they process vector operations far faster than CPUs for suitable applications.1,6
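
As a loose illustration of the data parallelism described above, the sketch below computes the same matrix product two ways: one output element at a time (standing in for sequential execution) and as a single bulk operation dispatched to optimised multi-core/SIMD kernels (standing in for the thousands of concurrent multiply-accumulate operations a GPU performs). It runs on the CPU via NumPy, so it demonstrates the principle rather than actual GPU code.

```python
# Contrast element-by-element computation with a bulk, parallel-friendly
# formulation of the same matrix multiply. NumPy is assumed to be installed.
import time
import numpy as np

n = 256
a = np.random.rand(n, n)
b = np.random.rand(n, n)

# Sequential style: compute one output element at a time.
start = time.perf_counter()
c_loop = np.empty((n, n))
for i in range(n):
    for j in range(n):
        c_loop[i, j] = np.dot(a[i, :], b[:, j])
seq_time = time.perf_counter() - start

# Bulk style: express the whole product as one operation, which optimised
# kernels can spread across many execution units at once.
start = time.perf_counter()
c_bulk = a @ b
bulk_time = time.perf_counter() - start

assert np.allclose(c_loop, c_bulk)
print(f"element-by-element: {seq_time:.3f}s, bulk operation: {bulk_time:.4f}s")
```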

Historical Evolution and Key Applications

The modern GPU emerged in the 1990s, with Nvidia’s GeForce 256 in 1999 marking the first chip branded as a GPU, transforming fixed-function graphics hardware into flexible processors capable of shaders and custom computations.1,6 Today, GPUs power:

  • Gaming and media: High-resolution rendering and video processing.4,7
  • AI and machine learning: Accelerating neural networks via parallel floating-point operations, outperforming CPUs by orders of magnitude.1,3,5
  • High-performance computing (HPC): Data centres, blockchain, and simulations.1,2

Unlike neural processing units (NPUs), which optimise for low-latency AI with brain-like efficiency, GPUs prioritise raw parallel throughput for graphics and broad compute tasks.1

Best Related Strategy Theorist: Jensen Huang

Jensen Huang, co-founder, president, and CEO of Nvidia Corporation, is the preeminent figure linking GPUs to strategic technological dominance, having pioneered their shift from graphics to AI infrastructure.1

Biography: Born in 1963 in Taiwan, Huang immigrated to the US as a child, earning a BS in electrical engineering from Oregon State University (1984) and an MS from Stanford (1992). In 1993, at age 30, he co-founded Nvidia with Chris Malachowsky and Curtis Priem using $40,000, initially targeting 3D graphics acceleration amid the PC gaming boom. Under his leadership, Nvidia released the GeForce 256 in 1999—the first GPU—revolutionising real-time rendering and establishing market leadership.1,6 Huang’s strategic foresight extended GPUs beyond gaming via CUDA (2006), a platform enabling GPGPU for general computing, unlocking AI applications like deep learning.2,6 By 2026, Nvidia’s GPUs dominate AI training (e.g., via H100/H200 chips), propelling its market cap beyond $3 trillion and Huang’s net worth past $100 billion, placing him among the world’s wealthiest individuals. His “all-in” bets—pivoting to AI during crypto winters and data centre shifts—exemplify visionary strategy, blending hardware innovation with ecosystem control (e.g., cuDNN libraries).1,5 Huang’s relationship to GPUs is foundational: as Nvidia’s architect, he defined their parallel architecture, foreseeing AI utility decades ahead, positioning GPUs as the “new CPU” for the AI era.3

References

1. https://www.ibm.com/think/topics/gpu

2. https://aws.amazon.com/what-is/gpu/

3. https://kempnerinstitute.harvard.edu/news/graphics-processing-units-and-artificial-intelligence/

4. https://www.arm.com/glossary/gpus

5. https://www.min.io/learn/graphics-processing-units

6. https://en.wikipedia.org/wiki/Graphics_processing_unit

7. https://www.supermicro.com/en/glossary/gpu

8. https://www.intel.com/content/www/us/en/products/docs/processors/what-is-a-gpu.html


Quote: Nate B Jones – AI News & Strategy Daily

“Execution capacity isn’t scarce anymore. Ten days, four people, and [Anthropic are] shipping 60 to 100 releases daily. Execution capacity is not the problem.” – Nate B Jones – AI News & Strategy Daily

Nate B Jones, a prominent voice in AI news and strategy, made this striking observation on 15 January 2026, highlighting how execution speed at leading AI firms like Anthropic has rendered traditional capacity constraints obsolete.

Context of the Quote

The quote originates from a discussion in AI News & Strategy Daily, capturing the blistering pace of development at Anthropic, the creators of the Claude AI models. Jones points to a specific instance where just four people, over ten days, facilitated 60 to 100 daily releases. This underscores a paradigm shift: in AI labs, small teams leveraging advanced tools now achieve output volumes that once required vast resources. The statement challenges the notion that scaling human execution remains a barrier, positioning it instead as a solved problem amid accelerating AI capabilities.1,4

Backstory on Nate B Jones

Nate B Jones is a key commentator on AI developments, known for his daily newsletter AI News & Strategy Daily. His insights dissect breakthroughs, timelines, and strategic implications in artificial intelligence. Jones frequently analyses outputs from major players like Anthropic, OpenAI, and others, providing data-driven commentary on progress towards artificial general intelligence (AGI). His work emphasises empirical evidence from releases, funding rounds, and capability benchmarks, making him a go-to source for professionals tracking the AI race. This quote, delivered via a YouTube discussion, exemplifies his focus on how AI is redefining productivity in software engineering and research.

Anthropic’s Blazing Execution Pace

Anthropic, founded in 2021 by former OpenAI executives including CEO Dario Amodei, has emerged as a frontrunner in safe AI systems. Backed by over $23 billion in funding – including major investments from Microsoft and Nvidia – the firm achieved a $5 billion revenue run rate by August 2025 and is projected to hit $9 billion annualised by year-end. Speculation surrounds a potential IPO as early as 2026, with valuations soaring to $300-350 billion amid a massive funding round.2

Internally, Anthropic’s engineers report transformative AI integration. An August 2025 survey of 132 staff revealed Claude enabling complex tasks with fewer human interventions: tool calls per transcript rose 116% to 21.2 consecutive actions, while human turns dropped 33% to 4.1 on average. This aligns directly with Jones’s claim of hyper-efficient shipping, where AI handles code generation, edits, and commands autonomously.4

Broader metrics from Anthropic’s January 2026 Economic Index show explosive Claude usage growth, with rapid diffusion despite uneven global adoption tied to GDP levels.5 Predictions from CEO Dario Amodei include AI writing 90% of code by mid-2025 (partially realised) and nearly all by March 2026, fuelling daily release cadences.1

Leading Theorists on AI Execution and Speed

  • Dario Amodei (Anthropic CEO): A pioneer in scalable AI oversight, Amodei forecasts powerful AI by early 2027, with systems operating at 10x-100x human speeds on multi-week tasks. His ‘Machines of Loving Grace’ essay outlines AGI timelines as early as 2026, driving Anthropic’s aggressive R&D.1
  • Jakob Nielsen (UX and AI Forecaster): Nielsen predicts AI will handle 39-hour human tasks by end-2026, with capability doubling every 4 months – from 3 seconds (GPT-2, 2019) to 5 hours (Claude Opus 4.5, late 2025). He highlights examples like AI designing infographics in under a minute, amplifying execution velocity.3
  • Redwood Research Analysts: Bloggers at Redwood detail Anthropic’s AGI bets, noting resource repurposing for millions of model instances and AI accelerating engineering 3x-10x by late 2026. They anticipate full R&D automation medians shifting to 2027-2029 based on milestones like multi-week task success.1

These theorists converge on a narrative of exponential acceleration: AI is not merely assisting but supplanting human bottlenecks in execution, code, and innovation. Jones’s quote encapsulates this consensus, signalling that in 2026, the real frontiers lie beyond mere deployment speed.

References

1. https://blog.redwoodresearch.org/p/whats-up-with-anthropic-predicting

2. https://forgeglobal.com/insights/anthropic-upcoming-ipo-news/

3. https://jakobnielsenphd.substack.com/p/2026-predictions

4. https://www.anthropic.com/research/how-ai-is-transforming-work-at-anthropic

5. https://www.anthropic.com/research/anthropic-economic-index-january-2026-report

6. https://kalshi.com/markets/kxclaude5/claude-5-released/kxclaude5-27

7. https://www.fiercehealthcare.com/ai-and-machine-learning/jpm26-anthropic-launches-claude-healthcare-targeting-health-systems-payers


Term: K-shaped economy

“A ‘K-shaped economy’ describes a recovery or economic state where different segments of the population, industries, or wealth levels diverge drastically, resembling the letter ‘K’ on a graph: one part shoots up (wealthy, tech, capital owners), while another stagnates.” – K-shaped economy

A K-shaped economy describes an uneven economic recovery or state following a downturn, where different segments—such as high-income earners, tech sectors, large corporations, and asset owners—experience strong growth (the upward arm of the ‘K’), while low-income groups, small businesses, low-skilled workers, younger generations, and debt-burdened households stagnate or decline (the downward arm).1,2,3,4

Key Characteristics

This divergence manifests across multiple dimensions:

  • Income and wealth levels: Higher-income individuals (top 10-20%) drive over 50% of consumption, benefiting from rising asset prices (e.g., stocks, real estate), while lower-income households face stagnating wages, unemployment, and delinquencies.3,4,6,7
  • Industries and sectors: Tech giants (e.g., ‘Magnificent 7’), AI infrastructure, and video conferencing boom, whereas tourism, small businesses, and labour-intensive sectors struggle due to high borrowing costs and weak demand.2,5,8
  • Generational and geographic splits: Younger consumers with debt face financial strain, contrasting with older, wealthier groups; urban tech hubs thrive while others lag.1,3
  • Policy influences: Post-2008 quantitative easing and pandemic fiscal measures favoured asset owners over broad growth, exacerbating inequality; central banks like the Federal Reserve face challenges from misleading unemployment data and uneven inflation.3,5

The pattern, prominent after the COVID-19 recession, contrasts with V-shaped (swift, even rebound) or U-shaped (gradual) recoveries, complicating stimulus efforts.2,4

Historical Context and Examples

  • Originated in discussions during the 2020 pandemic, popularised on social media and by analysts like Lisa D. Cook (Federal Reserve Governor).4
  • Reinforced by events like the 2008 financial crisis, where liquidity flooded assets without proportional wage growth.5
  • In 2025, it persists with AI-driven stock gains for the wealthy, minimal job creation for others, and corporate resilience (e.g., fixed-rate debt for S&P 500 firms vs. floating-rate pain for small businesses).1,5,8

Best Related Strategy Theorist: Joseph Schumpeter

The most apt theorist linked to the K-shaped economy is Joseph Schumpeter (1883–1950), whose concept of creative destruction directly underpins one key mechanism: recessions enable new industries and technologies to supplant outdated ones, fostering divergent recoveries.2

Biography

Born in Triesch, Moravia (now in the Czech Republic), Schumpeter studied law and economics in Vienna, earning a doctorate in 1906. He taught at universities in Czernowitz, Graz, and Bonn, serving briefly as Austria’s finance minister in 1919 amid post-World War I turmoil. As the Nazis rose to power in Germany, he left Bonn for Harvard University in 1932, where he wrote his seminal works until retiring in 1949. A polymath influenced by Marx, Walras, and Weber, Schumpeter predicted capitalism’s self-undermining tendencies through innovation and bureaucracy.2

Relationship to the Term

Schumpeter argued that capitalism thrives via creative destruction—the “perennial gale” where entrepreneurs innovate, destroying old structures (e.g., tourism during COVID) and birthing new ones (e.g., video conferencing, AI).2 In a K-shaped context, this explains why tech and capital-intensive sectors surge while legacy industries falter, amplified by policies favouring winners. Unlike uniform recoveries, his framework predicts inherent bifurcation, as seen post-2008 and pandemics, where asset markets outpace labour markets—echoing modern analyses of uneven growth.2,5 Schumpeter’s prescience positions him as the foundational strategist for navigating such divides through innovation policy.

References

1. https://www.equifax.com/business/blog/-/insight/article/the-k-shaped-economy-what-it-means-in-2025-and-how-we-got-here/

2. https://corporatefinanceinstitute.com/resources/economics/k-shaped-recovery/

3. https://am.vontobel.com/en/insights/k-shaped-economy-presents-challenges-for-the-federal-reserve

4. https://finance-commerce.com/2025/12/k-shaped-economy-inequality-us/

5. https://www.pinebridge.com/en/insights/investment-strategy-insights-reflexivity-and-the-k-shaped-economy

6. https://www.alliancebernstein.com/corporate/en/insights/economic-perspectives/the-k-shaped-economy.html

7. https://www.mellon.com/insights/insights-articles/the-k-shaped-drift.html

8. https://www.morganstanley.com/insights/articles/k-shaped-economy-investor-guide-2025


Quote: Nate B Jones – AI News & Strategy Daily

“Suddenly your risk is timidity. Your risk is lack of courage. The danger isn’t necessarily building the wrong thing, because you’ve got 50 shots [a year] to build the right thing. The danger is not building enough things toward a larger vision that is really transformative for the customer.” – Nate B Jones – AI News & Strategy Daily

This provocative statement emerged from Nate B. Jones’s AI News & Strategy Daily on 15 January 2026, amid accelerating AI advancements reshaping software development and business strategy. Jones challenges conventional risk management in an era where AI tools like Cursor enable engineers to ship code twice as fast, and product managers double productivity through prompt engineering. Execution has become ‘cheaper’, but Jones warns that speed alone breeds quality nightmares – security holes, probabilistic outputs demanding sustained QA, and technical debt from rapid prototyping.1,2

The quote reframes failure: with rapid iteration (50+ attempts yearly), building suboptimal products is survivable. True peril lies in hesitation – failing to generate volume towards a bold, customer-transforming vision. This aligns with Jones’s emphasis on ‘AI native’ approaches, transcending mere acceleration to orchestration, coordination, and human-AI symbiosis for compounding gains.3

Backstory on Nate B. Jones

Nate B. Jones is a leading AI strategist, content creator, and independent analyst whose platforms – including his Substack newsletter, personal site (natebjones.com), and YouTube channel AI News & Strategy Daily (127K subscribers) – deliver ‘deep analysis, actionable frameworks, zero hype’.2,7 He dissects real-world AI implementation, from prompt stacks enhancing workflows to predictions on 2026 breakthroughs like memory advances, agent UIs, continual learning, and recursive self-improvement.5,6

Jones’s work spotlights execution dynamics: automation avalanches make work cheaper, yet spawn trust deficits from ‘dirty’ AI code and jailbreaking needs.1 He advocates team ‘film review’ loops using AI rubrics for decision docs, specs, and risk articulation – turning human skills into scalable drills.3 Videos like ‘The AI Trick That Finally Made Me Better at My Job’ and ‘Debunking AI Myths’ showcase his practical ethos, proving AI’s innovative edge via breakthroughs like AlphaDev’s faster algorithms and AlphaFold’s protein atlas.3,4

Positioned as ‘the most cogent, sensible, and insightful AI resource’, Jones guides ventures towards genuine AI nativity, urging leaders to escape terminal-bound agents for task queues and human-AI coordination.2

Leading Theorists on AI Execution, Speed, and Transformative Vision

Jones’s ideas echo foundational thinkers in AI strategy and rapid iteration:

  • Eric Ries (Lean Startup): Pioneered ‘build-measure-learn’ loops, validating Jones’s ’50 shots’ tolerance for failure. Ries argued validated learning trumps perfect planning, mirroring AI’s cheap execution.1
  • Andrew Ng (AI Pioneer): Emphasises AI’s productivity multiplier but warns of overhype; his advocacy for ‘AI transformation’ aligns with Jones’s customer vision, as seen in AlphaFold’s impact.4
  • Tyler Cowen (Marginal Revolution): Referenced by Jones for pre-AI decision frameworks now supercharged by AI critique loops, enabling ‘athlete-like’ review at scale.3
  • Sam Altman (OpenAI): Drives agentic AI evolution (e.g., recursive self-improvement), fuelling Jones’s 2026 predictions on long-running agents and human attention focus.5
  • Demis Hassabis (DeepMind): AlphaDev and GNoME exemplify AI innovation beyond speed, proving machines discover novel algorithms – validating Jones’s debunking of ‘AI can’t innovate’.4

These theorists collectively underpin Jones’s thesis: in AI’s ‘automation avalanche’, courageously shipping volume towards transformative goals outpaces timid perfectionism.1

Implications for Leaders

Traditional Risk | AI-Era Risk (per Jones)
Building the wrong thing | Timidity and lack of volume
Slow, cautious execution | Quality/security disasters from unchecked speed
Single-shot perfection | 50+ iterations towards bold vision

Jones’s insight demands a paradigm shift: harness AI for fearless experimentation, sustained quality, and visionary scale.

References

1. https://natesnewsletter.substack.com/p/2026-sneak-peek-the-first-job-by-9ac

2. https://www.natebjones.com

3. https://www.youtube.com/watch?v=Td_q0sHm6HU

4. https://www.youtube.com/watch?v=isuzSmJkYlc

5. https://www.youtube.com/watch?v=pOb0pjXpn6Q

6. https://natesnewsletter.substack.com/p/my-prompt-stack-for-work-16-prompts

7. https://www.youtube.com/@NateBJones


Term: Strategy

“Strategy is the art of radical selection, where you identify the “vital few” forces – the 20% of activities, products, or customers that generate 80% of your value – and anchor them in a unique and valuable position that is difficult for rivals to imitate.” – Strategy

Strategy is the art of radical selection, entailing the identification and prioritisation of the “vital few” forces—typically the 20% of activities, products, or customers that deliver 80% of value—and embedding them within a unique, valuable position that rivals struggle to replicate.

This definition draws on the Pareto principle (or 80/20 rule), which posits that a minority of inputs generates the majority of outputs, applied strategically to focus resources for competitive advantage. Radical selection demands ruthless prioritisation, rejecting marginal efforts in order to build hard-to-imitate barriers such as proprietary processes, network effects, or brand loyalty. In practice, it involves auditing operations to isolate high-impact elements, then aligning the organisation around them—eschewing diversification for concentrated excellence. For instance, firms might discontinue underperforming product lines or customer segments to double down on core strengths, fostering sustainable differentiation amid competition.3,5
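
The audit described above reduces to a simple Pareto calculation: rank the value drivers, accumulate their contribution, and cut once the chosen threshold is reached. The Python sketch below applies this to a hypothetical customer revenue table; the figures and the 80% threshold are illustrative assumptions.

```python
# Minimal "vital few" audit: find the smallest set of customers that
# accounts for roughly 80% of total revenue. Data is hypothetical.

def vital_few(revenue_by_customer: dict[str, float], threshold: float = 0.80) -> list[str]:
    total = sum(revenue_by_customer.values())
    ranked = sorted(revenue_by_customer.items(), key=lambda kv: kv[1], reverse=True)
    selected, cumulative = [], 0.0
    for customer, revenue in ranked:
        selected.append(customer)
        cumulative += revenue
        if cumulative / total >= threshold:
            break
    return selected

revenue = {"A": 420, "B": 260, "C": 120, "D": 70, "E": 50,
           "F": 40, "G": 25, "H": 10, "I": 3, "J": 2}
core = vital_few(revenue)
share = sum(revenue[c] for c in core) / sum(revenue.values())
print(f"{len(core)} of {len(revenue)} customers generate {share:.0%} of revenue: {core}")
```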

Key Elements of Radical Selection

  • Identification of the “Vital Few”: Analyse data to pinpoint the 20% driving 80% of revenue, profit, or growth; this echoes exploration in radical innovation, targeting novel opportunities over incremental gains.3
  • Anchoring in a Unique Position: Secure these forces in a defensible niche, leveraging creativity and risk acceptance inherent to strategic art, where choices fuse power with imagination to outmanoeuvre rivals.5
  • Difficulty to Imitate: Build moats through repetition with deviation—reconfiguring conventions internally to resist replication, akin to disidentification strategies that transform from within.1

Best Related Strategy Theorist: Richard Koch

Richard Koch, a pre-eminent proponent of the 80/20 principle in strategy, provides the foundational intellectual backbone for this concept of radical selection. His seminal work, The 80/20 Principle: The Secret to Achieving More with Less (1997, updated editions since), explicitly frames strategy as exploiting the “vital few”—the disproportionate 20% of factors yielding 80% of results—to achieve outsized success.

Biography and Backstory

Born in 1950 in London, Koch graduated from Oxford University with a degree in Philosophy, Politics, and Economics, later earning an MBA from Harvard Business School. He began his career at Bain & Company (1978–1980), rising swiftly in management consulting, then co-founded L.E.K. Consulting in 1983, where he specialised in corporate strategy and turnarounds. Koch advised blue-chip firms on radical pruning—divesting non-core assets to focus on high-yield segments—drawing early insights into Pareto imbalances from client data showing most profits stemmed from few products or customers.

In the 1990s, as an independent investor and author, Koch applied these lessons to his own ventures, building a substantial personal fortune through stakes in firms like Filofax (which he revitalised via 80/20 focus) and Betfair (early investor). His 80/20 philosophy evolved from Vilfredo Pareto’s 1896 observation of wealth distribution (80% owned by 20%) and Joseph Juran’s quality management adaptations, but Koch radicalised it for strategy. He argued that businesses thrive by systematically ignoring the trivial many, selecting “star” activities for exponential growth—a direct precursor to the definition above.

Koch’s relationship to radical selection is intimate: he popularised it as a strategic art form, blending empirical analysis with bold choice. In Living the 80/20 Way (2004) and The 80/20 Manager (2013), he extends it to personal and corporate realms, warning against “spread-thin” mediocrity. Critics note its simplicity risks oversimplification, yet its prescience aligns with modern lean strategies; Koch remains active, mentoring via Koch Education.3,5

References

1. https://direct.mit.edu/artm/article/10/3/8/109489/What-is-Radical

2. https://dariollinares.substack.com/p/the-art-of-radical-thinking?selection=863e7a98-7166-4689-9e3c-6434f064c055

3. https://www.timreview.ca/article/1425

4. https://selvajournal.org/article/ideology-strategy-aesthetics/

5. https://theforge.defence.gov.au/sites/default/files/2024-11/On%20Strategic%20Art%20-%20A%20Guide%20to%20Strategic%20Thinking%20and%20the%20ASFF%20(Electronic%20Version%201-1).pdf

6. https://ellengallery.concordia.ca/wp-content/uploads/2021/08/leonard-Bina-Ellen-Art-Gallery-MUNOZ-Radical-Form.pdf

7. https://art21.org/read/radical-art-in-a-conservative-school/

8. https://parsejournal.com/article/radical-softness/


Quote: Nate B Jones

“Anthropic shipping ‘Co-Work’ as a full product feature. It was built in 10 days with just four people. It was written entirely in Claude Code. And Claude Code, mind you, is an entire product that is less than a year old… The Anthropic team is evolving as they go.” – Nate B Jones – AI News & Strategy Daily

Context of the Quote

On 15 January 2026, Nate B Jones, in his AI News & Strategy Daily update, highlighted Anthropic’s remarkable achievement in shipping ‘Co-Work’ (also styled as Cowork), a groundbreaking AI feature. This quote captures the essence of Anthropic’s rapid execution: developing a production-ready tool in just 10 days using a team of four, with all code generated by their own AI system, Claude Code. Jones emphasises the meta-innovation – Claude Code itself, launched less than a year prior, enabling this feat – signalling how Anthropic is iteratively advancing AI capabilities in real-time.1,5

Who is Nate B Jones?

Nate B Jones is a prominent voice in AI strategy and news aggregation, curating daily insights via his AI News & Strategy Daily platform. His commentary distils complex developments into actionable intelligence for executives, developers, and strategists. Jones focuses on execution speed, product strategy, and the competitive dynamics of AI firms, often drawing from primary sources like announcements, demos, and insider accounts. His analysis in this instance underscores Anthropic’s edge in ‘vibe coding’ – prompt-driven development – positioning it as a model for AI-native organisations.1,7

Backstory of Anthropic’s Cowork

Anthropic unveiled Cowork on 12 January 2026 as a research preview for Claude Max subscribers on macOS. Unlike traditional chatbots, Cowork acts as an autonomous ‘colleague’, accessing designated local folders to read, edit, create, and organise files without constant supervision. Users delegate tasks – such as sorting downloads, extracting expenses from screenshots into spreadsheets, summarising notes, or drafting reports – and approve key actions via prompts. This local-first approach contrasts with cloud-centric AI, restoring agency to personal devices while prioritising user oversight to mitigate risks like unintended deletions or prompt injections.1,2,3,4,6

The tool emerged from user experiments with Claude Code, Anthropic’s AI coding agent popular among developers. Observing non-technical users repurposing it for office tasks, Anthropic abstracted these capabilities into Cowork, inheriting Claude Code’s robust architecture for reliable, agentic behaviour. Built entirely with Claude Code in 10 days by four engineers, it exemplifies ‘AI building AI’, compressing development timelines and widening the gap between AI-leveraging firms and others.1,3,5

Significance in AI Evolution

Cowork marks a shift from conversational AI to agentic systems that act on the world, handling mundane work asynchronously. It challenges enterprise tools like Microsoft’s Copilot by offering proven developer-grade autonomy to non-coders, potentially redefining productivity. Critics note risks of ‘workslop’ – error-prone outputs requiring fixes – but Anthropic counters with transparency, trust-building safeguards, and architecture validated in production coding.2,3,5,6

Leading Theorists and Concepts Behind Agentic AI

  • Boris Cherny: Creator and lead of Claude Code at Anthropic. His X announcement confirmed Cowork’s components were fully AI-generated, embodying ‘vibe coding’ – the paradigm, named by Andrej Karpathy, in which high-level prompts guide software creation while minimising manual code.1
  • Dario Amodei: Anthropic CEO and ex-OpenAI executive, Amodei champions scalable oversight and reliable AI agents. His vision drives Cowork’s supervisor model, ensuring human control amid growing autonomy.3,6
  • Yohei Nakajima: Creator of BabyAGI (2023), an early autonomous agent framework chaining tasks via LLM planning. Cowork echoes this by autonomously strategising and executing multi-step workflows.2
  • Andrew Ng: AI pioneer advocating ‘agentic workflows’ where AI handles routine tasks, freeing humans for oversight. Ng’s predictions align with Cowork’s file manipulation and task queuing, forecasting quieter, faster work rhythms.2,5
  • Lilian Weng (author of the Lil’Log blog, formerly of OpenAI, where she led applied AI research): Weng theorises hierarchical agent architectures for complex execution. Cowork’s lineage from Claude Code reflects this, prioritising trust over raw intelligence as the new bottleneck.5

These thinkers converge on agentic AI: systems that plan, act, and adapt with minimal intervention, propelled by models like Claude. Anthropic’s sprint validates their theories, proving AI can ship AI at unprecedented speed.

References

1. https://www.axios.com/2026/01/13/anthropic-claude-code-cowork-vibe-coding

2. https://www.techradar.com/ai-platforms-assistants/claudes-latest-upgrade-is-the-ai-breakthrough-ive-been-waiting-for-5-ways-cowork-could-be-the-biggest-ai-innovation-of-2026

3. https://www.axios.com/2026/01/12/ai-anthropic-claude-jobs

4. https://www.vice.com/en/article/anthropic-introduces-claude-cowork/

5. https://karozieminski.substack.com/p/claude-cowork-anthropic-product-deep-dive

6. https://fortune.com/2026/01/13/anthropic-claude-cowork-ai-agent-file-managing-threaten-startups/

7. https://www.youtube.com/watch?v=SpqqWaDZ3ys


Quote: Demis Hassabis – DeepMind co-founder, CEO

“I think [AI is] going to be like the industrial revolution, but maybe 10 times bigger, 10 times faster. So it’s an incredible amount of transformation, but also disruption that’s going to happen.” – Demis Hassabis – DeepMind co-founder, CEO

Demis Hassabis and the Quote

This striking prediction comes from Demis Hassabis, co-founder and CEO of Google DeepMind. Spoken on The Tech Download (CNBC Original podcast) on 16 January 2026, the quote encapsulates Hassabis’s view of artificial intelligence (AI) as a force dwarfing historical upheavals. He describes AI not merely as an evolution but as a catalyst for radical abundance, potentially leading to prosperity if managed equitably, while acknowledging inevitable job disruptions akin to – yet far exceeding – those of past revolutions.1,2

Backstory of Demis Hassabis

Born in 1976 in London to a Greek Cypriot father and Chinese Singaporean mother, Hassabis displayed prodigious talent early. At age 13, he won a British Tetris championship and published his first computer program in a magazine. By 17, he was the world’s second-highest-ranked chess player for his age group, balancing academics with competitive gaming.1

Hassabis entered the games industry as a teenager, co-designing the 1994 hit Theme Park at Bullfrog Productions and working with Peter Molyneux at Lionhead Studios on titles like Black & White. This foundation in complex simulations honed his skills in modelling human-like behaviours, which later informed his AI pursuits.1

In 2010, aged 34, he co-founded DeepMind with Mustafa Suleyman and Shane Legg, driven by a mission to ‘solve intelligence’ and advance science. Google acquired DeepMind for £400 million in 2014, propelling breakthroughs like AlphaGo (2016), which defeated world Go champion Lee Sedol, and AlphaFold (2020), revolutionising protein structure prediction.1,2

Today, as CEO of Google DeepMind, Hassabis leads efforts towards artificial general intelligence (AGI) – AI matching or surpassing human cognition across domains. He predicts AGI by 2030, describing himself as a ‘cautious optimist’ who believes humanity’s adaptability will navigate the changes.1,3,5

Context of the Quote

Hassabis’s statement reflects ongoing discussions on AI’s societal impact. He envisions AGI ushering in changes ’10 times bigger than the Industrial Revolution, and maybe 10 times faster,’ with productivity gains enabling ‘radical abundance’ – an era where scarcity ends, fostering interstellar exploration if wealth is distributed fairly.1,2

Yet he concedes the risks: job losses will mirror the Industrial Revolution’s upheavals, which spread prosperity unevenly. Hassabis urges preparation, recommending STEM studies and experimentation with AI tools to create ‘very valuable jobs’ for the technically savvy. He stresses political solutions for equitable distribution, warning against zero-sum outcomes.1,3,5

Leading Theorists on AI and Transformative Technologies

Hassabis builds on foundational thinkers in AI and technological disruption:

  • Alan Turing (1912-1954): ‘Father of computer science,’ proposed the Turing Test (1950) for machine intelligence, laying theoretical groundwork for AGI.2
  • John McCarthy (1927-2011): Coined ‘artificial intelligence’ in 1956 at the Dartmouth Conference, pioneering AI as a field.2
  • Ray Kurzweil: Futurist predicting the ‘singularity’ – AI surpassing human intelligence by 2045 – influencing DeepMind’s ambitious timelines.1
  • Nick Bostrom: Philosopher warning of superintelligence risks in Superintelligence (2014), echoed in Hassabis’s cautious optimism.1
  • Shane Legg: DeepMind co-founder and chief AGI scientist, formalised AGI mathematically, emphasising safe development.2

These theorists frame AI as humanity’s greatest challenge and opportunity, aligning with Hassabis’s vision of exponential transformation.1,2

 

References

1. https://www.pcgamer.com/software/ai/deepmind-ceo-makes-big-brain-claims-saying-agi-could-be-here-in-the-next-five-to-10-years-and-that-humanity-will-see-a-change-10-times-bigger-than-the-industrial-revolution-and-maybe-10-times-faster/

2. https://www.antoinebuteau.com/lessons-from-demis-hassabis/

3. https://www.businessinsider.com/demis-hassabis-google-deemind-study-future-jobs-ai-2025-6

4. https://www.youtube.com/watch?v=l_vXXgXwoh0

5. https://economictimes.com/tech/artificial-intelligence/ai-will-create-very-valuable-jobs-but-study-stem-googles-demis-hassabis/articleshow/121592354.cms

 

Term: Market segmentation

“Market segmentation is the strategic process of dividing a broad consumer or business market into smaller, distinct groups (segments) of individuals or organisations that share similar characteristics, needs, and behaviours. It is a foundational element of business unit strategy.” – Market segmentation

Market segmentation is the strategic process of dividing a broad consumer or business market into smaller, distinct groups (segments) of individuals or organisations that share similar characteristics, needs, behaviours, or preferences, enabling tailored marketing, product development, and resource allocation1,2,3,5.

This foundational element of business unit strategy enhances targeting precision, personalisation, and ROI by identifying high-value customers, reducing wasted efforts, and uncovering growth opportunities2,3,5.

Key Types of Market Segmentation

Market segmentation typically employs four primary bases, often combined for greater accuracy:

  • Demographic: Groups by age, gender, income, education, or occupation (e.g., tailoring products for specific age groups or income levels)2,3,5.
  • Geographic: Divides by location, climate, population density, or culture (e.g., localised pricing or region-specific offerings like higher SPF sunscreen in sunny areas)3,5.
  • Psychographic: Based on lifestyle, values, attitudes, or interests (e.g., targeting eco-conscious consumers with sustainable products)2,5.
  • Behavioural: Focuses on purchasing habits, usage rates, loyalty, or decision-making (e.g., discounts for frequent travellers)3,5.

Firmographic segmentation applies similar principles to business markets, using company size, industry, or revenue3.

Benefits and Strategic Value

  • Enables more targeted marketing and personalised communications, boosting engagement and conversion2,3.
  • Improves resource allocation, cutting costs on inefficient campaigns2,3,5.
  • Drives product innovation by revealing underserved niches and customer expectations2,3.
  • Enhances customer retention and loyalty through relevant experiences3,5.
  • Supports competitive positioning and market expansion via upsell or adjacent opportunities3,4.

Implementation Process

Follow these structured steps for effective segmentation (an illustrative clustering sketch follows the list)3,5:

  1. Define the market scope, assessing size, growth, and key traits.
  2. Collect data on characteristics (e.g., via surveys or analytics).
  3. Identify distinct segments with shared traits.
  4. Evaluate viability (e.g., size of prize, right to win via competitive advantage)4.
  5. Develop tailored strategies, products, pricing, and messaging; refine iteratively.
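
The data-driven heart of steps 2 and 3 can be illustrated with a simple clustering routine. The sketch below is a minimal example, assuming Python with scikit-learn; the customer attributes, their values, and the choice of three segments are illustrative assumptions rather than a prescribed method.

```python
# Minimal clustering sketch for segment identification (steps 2-3 above).
# All customer records and the choice of three segments are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Toy customer records: [age, annual_income_k, purchases_per_year]
customers = np.array([
    [24, 28, 22], [27, 32, 18], [31, 45, 9],
    [44, 78, 4],  [48, 82, 6],  [52, 90, 3],
    [35, 55, 14], [29, 38, 16], [41, 70, 5],
])

scaled = StandardScaler().fit_transform(customers)   # put attributes on a comparable scale
model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(scaled)

for segment in range(3):
    members = customers[model.labels_ == segment]
    print(f"Segment {segment}: {len(members)} customers, "
          f"mean age {members[:, 0].mean():.0f}, "
          f"mean income {members[:, 1].mean():.0f}k, "
          f"mean purchases {members[:, 2].mean():.0f}/yr")
```

In practice, the number of segments would be tested against the viability criteria in step 4 rather than fixed in advance.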

Market segmentation should be distinguished from customer segmentation (which focuses on existing or reachable audiences for sales tactics) and from targeting (selecting which segments to serve after segmentation)3,4.

Best Related Strategy Theorist: Philip Kotler

Philip Kotler, often called the “father of modern marketing,” is the preeminent theorist linked to market segmentation, having popularised and refined it as a core pillar of marketing strategy in the late 20th century.

Biography: Born in 1931 in Chicago to Ukrainian Jewish immigrant parents, Kotler earned a Master’s in economics from the University of Chicago (1953), followed by a PhD in economics from MIT (1956), studying under future Nobel laureate Paul Samuelson. He briefly taught at MIT before joining Northwestern University’s Kellogg School of Management in 1962, where he became the S.C. Johnson Distinguished Professor of International Marketing. Kotler authored over 80 books, including the seminal Marketing Management (first published 1967, now in its 16th edition), which has sold millions worldwide and trained generations of executives. A prolific consultant to firms like IBM, General Electric, and AT&T, and advisor to governments (e.g., on privatisation in Russia), he received the Distinguished Marketing Educator Award (1978) and was named the world’s top marketing thinker by the Financial Times (2015). At 93 (as of 2024), he remains active, emphasising sustainable and social marketing.

Relationship to Market Segmentation: Kotler formalised segmentation within the STP model (Segmentation, Targeting, Positioning), introduced in his 1960s-1970s works, transforming it from ad hoc practice into a systematic strategy. In Marketing Management, he defined segmentation as dividing markets into “homogeneous” submarkets for efficient serving, advocating criteria like measurability, accessibility, substantiality, and actionability (MACS framework). Building on earlier ideas (e.g., Wendell Smith’s 1956 article), Kotler integrated it with the 4Ps (Product, Price, Place, Promotion), making it indispensable for business strategy. His frameworks, taught globally, underpin tools like those from Salesforce and Adobe today2,4,5. Kotler’s emphasis on data-driven, customer-centric application elevated segmentation from analysis to a driver of competitive advantage, influencing NIQ and Hanover Research strategies1,3.

References

1. https://nielseniq.com/global/en/info/market-segmentation-strategy/

2. https://business.adobe.com/blog/basics/market-segmentation-examples

3. https://www.hanoverresearch.com/insights-blog/corporate/what-is-market-segmentation/

4. https://www.productmarketingalliance.com/what-is-market-segmentation/

5. https://www.salesforce.com/marketing/segmentation/

6. https://online.fitchburgstate.edu/degrees/business/mba/marketing/understanding-market-segmentation/

7. https://www.surveymonkey.com/market-research/resources/guide-to-building-a-segmentation-strategy/

Quote: Nate B Jones – AI News & Strategy Daily

“The one constant right now is chaos. I hear it over and over again from folks: the rate of change, the sheer unpredictability of AI – it’s very difficult to tell what’s up and what’s down.” – Nate B Jones – AI News & Strategy Daily

Context of the Quote

This quote captures the essence of the AI landscape in early 2026, where rapid advancements and unpredictability dominate discussions among professionals. Spoken by Nate B. Jones during his AI News & Strategy Daily segment on 15 January 2026, it reflects feedback from countless individuals grappling with AI’s breakneck pace. Jones highlights how the constant flux – from model breakthroughs to shifting business applications – leaves even experts disoriented, making strategic planning a challenge.1,5

Backstory on Nate B. Jones

Nate B. Jones is a leading voice in practical AI implementation, known for his no-nonsense analysis that cuts through hype. Through his personal site natebjones.com, he delivers weekly deep dives into what truly works in AI, offering actionable frameworks for businesses and individuals. His Substack newsletter, including pieces like ‘2026 Sneak Peek: The First Job-by-Job Guide to AI Evolution’, has become essential reading for those navigating AI-driven disruption.2,3

Jones has personally advised hundreds of professionals on pivoting careers amid AI’s rise. He emphasises execution over mere tooling, stressing accountability, human-AI boundaries, and risk management. In videos such as ‘The AI Moments That Shaped 2025 and Predictions for 2026’, he recaps key events like model wars, Sora’s impact, copyright battles, and surging compute costs, positioning himself as a guide for the ‘frontier’ era of AI.1,4

His content, including AI News & Strategy Daily, focuses on real-world strategy: from compressing research timelines to building secure AI interfaces. Jones warns of a ‘compounding gap’ between the prepared and unprepared, urging a mindset shift for roles in programme management, UX design, QA, and risk assessment.2,5

Leading Theorists on AI Chaos and Unpredictability

The theme of chaos in AI echoes longstanding theories from pioneers who foresaw technology’s disruptive potential.

  • Ray Kurzweil: Futurist and Google director of engineering, Kurzweil popularised the ‘Law of Accelerating Returns’, predicting exponential tech growth leading to singularity by 2045. His books like The Singularity Is Near (2005) describe how AI’s unpredictability stems from recursive self-improvement, mirroring Jones’s observations of model saturation and frontier shifts.
  • Nick Bostrom: Oxford philosopher and author of Superintelligence (2014), Bostrom theorises AI’s ‘intelligence explosion’ – a feedback loop where smarter machines design even smarter ones, creating uncontrollable change. He warns of alignment challenges, akin to the ‘trust deficit’ and human-AI boundaries Jones addresses.2
  • Sam Altman: OpenAI CEO, whom Jones quotes on chatbot saturation. Altman’s views on AI frontiers emphasise moving beyond chat interfaces to agents and capabilities that amplify unpredictability, as seen in 2025’s model evolutions.1
  • Stuart Russell: Co-author of Artificial Intelligence: A Modern Approach, Russell advocates ‘provably beneficial AI’ to tame chaos. His work on value alignment addresses the execution speed and risk areas Jones flags, like bias management and compute explosions.2

These theorists provide the intellectual foundation for understanding AI’s turmoil: exponential progress breeds chaos, demanding strategic adaptation. Jones builds on this by offering tactical insights for 2026, from accountability frameworks to jailbreaking new intelligence surfaces.1,2,3

References

1. https://www.youtube.com/watch?v=YBLUf1yYjGA

2. https://natesnewsletter.substack.com/p/2026-sneak-peek-the-first-job-by-9ac

3. https://www.natebjones.com

4. https://www.youtube.com/watch?v=fbEiYRogYCk

5. https://www.youtube.com/watch?v=pOb0pjXpn6Q

6. https://www.youtube.com/watch?v=ftHsQvdTUww

Quote: Jane Fraser

“We are not graded on effort. We are judged on our results.” – Jane Fraser – Citi

The Quote in Context

On 15 January 2026, Citigroup CEO Jane Fraser issued a memo titled “The Bar is Raised” to the bank’s 200,000+ employees, declaring: “We are not graded on effort. We are judged on our results.” This statement encapsulates Fraser’s uncompromising philosophy as she drives the institution through its most ambitious transformation in decades. The memo signals a decisive shift from process-oriented management to outcome-focused accountability – a cultural realignment that reflects both the pressures facing modern financial institutions and Fraser’s personal leadership ethos.

Jane Fraser: The Architect of Citigroup’s Transformation

Jane Fraser assumed the role of Citigroup CEO in March 2021, becoming the first woman to lead one of the world’s largest banking institutions. Her appointment marked a turning point for a bank that had struggled with regulatory compliance issues, operational inefficiency, and underperformance relative to competitors. Fraser arrived with a reputation for operational rigour, having previously served as head of Citigroup’s Latin America division and later as head of Global Consumer Banking.

Fraser’s tenure has been defined by a singular mission: transforming Citigroup from a sprawling, complex conglomerate into a leaner, more focused institution capable of competing effectively in the modern financial landscape. This vision emerged from a recognition that Citigroup had accumulated decades of technical debt, regulatory vulnerabilities, and organisational redundancy. The bank faced persistent criticism from regulators regarding its risk management systems, data governance, and compliance infrastructure – issues that had resulted in formal consent orders and substantial remediation costs.

Her leadership style emphasises clarity, accountability, and measurable outcomes. Fraser has repeatedly stated that “Citigroup must become simpler to manage and easier to regulate,” a principle that underpins every major strategic decision she has made. This philosophy directly informs the statement that “we are judged on our results” – a rejection of the notion that good intentions or diligent effort can substitute for tangible performance improvements.

The Transformation Initiative: Strategic Context

Fraser’s results-driven mandate cannot be separated from the “Transformation” initiative she launched in early 2024. This comprehensive programme represents one of the most significant restructuring efforts in Citigroup’s recent history, encompassing technology modernisation, organisational streamlining, and cultural reform. The Transformation targets the elimination of up to 20,000 roles over three years – approximately 10% of the workforce – with projected cost savings of $2.5 billion.

As of January 2026, more than 80% of the Transformation effort is complete. The initiative extends far beyond simple headcount reduction; it addresses fundamental operational inefficiencies accumulated over decades of acquisitions, regulatory changes, and technological stagnation. The programme includes the replacement of legacy systems with modern cloud-based infrastructure, the implementation of artificial intelligence across business processes, and the elimination of overlapping management layers that had created unclear reporting lines and diffused accountability.

The timing of Fraser’s “bar is raised” memo reflects a critical juncture. With the heavy lifting of the Transformation largely complete, the bank is transitioning from restructuring mode to performance mode. Fraser’s emphasis on results signals that the period of “transformation excuses” has ended. Employees can no longer attribute underperformance to system migrations or organisational upheaval. The infrastructure is in place; execution is now paramount.

Performance Metrics and Accountability

Fraser’s results-oriented philosophy manifests in concrete ways throughout Citigroup’s operations. The bank has redefined its success metrics, introducing new scorecards and performance expectations that emphasise commercial outcomes. Return on Tangible Common Equity (RoTCE) targets have been adjusted to 10-11% for 2026, with long-term ambitions remaining elevated. This metric-driven approach extends to compensation structures for senior leaders, where performance incentives are now explicitly tied to measurable business outcomes rather than effort or activity levels.
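
For readers unfamiliar with the metric, RoTCE is conventionally calculated as net income available to common shareholders divided by average tangible common equity. A minimal sketch with purely illustrative figures (these are assumptions, not Citigroup’s reported numbers):

```python
# Minimal RoTCE sketch; both inputs are illustrative assumptions, not Citigroup figures.
net_income_to_common = 12.0               # $bn, net income after preferred dividends
average_tangible_common_equity = 115.0    # $bn, average over the period

rotce = net_income_to_common / average_tangible_common_equity
print(f"RoTCE: {rotce:.1%}")              # ~10.4%, inside a 10-11% target band
```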

The memo’s emphasis on results reflects Fraser’s assessment that Citigroup’s competitive position depends on execution excellence. In 2025, the bank generated approximately $85 billion in revenue, up roughly 6% year-on-year. Investment banking fees reached nearly $1.3 billion, rising 35% annually, whilst advisory fees jumped more than 80% year-on-year. These figures demonstrate that Fraser’s strategy is yielding tangible returns, validating her results-focused approach.

However, Fraser acknowledges that the path remains incomplete. She has explicitly stated that Citigroup “fell behind in some areas last year, particularly around data as it relates to regulatory reporting.” Rather than accepting this as an inevitable consequence of transformation, Fraser treated it as a performance failure requiring immediate remediation. The bank reviewed its entire data programme, retooled governance structures, and increased investments in technology and talent. This response exemplifies her philosophy: identify gaps, assign accountability, and demand results.

The Broader Context: Results-Driven Leadership in Finance

Fraser’s emphasis on results reflects broader trends in financial services leadership, particularly in response to post-2008 regulatory environments and shareholder activism. The financial crisis exposed the dangers of process-oriented cultures where effort and activity could mask underlying risk or poor decision-making. Subsequent regulatory frameworks have increasingly emphasised accountability and measurable compliance outcomes.

Fraser’s philosophy also responds to competitive pressures within investment banking and wealth management. Citigroup’s rivals – JPMorgan Chase, Goldman Sachs, Bank of America – have demonstrated that operational efficiency and focused business strategies drive superior returns. Fraser’s recruitment of high-powered executives, including former JPMorgan dealmaker Viswas Raghavan to lead investment banking and Andy Sieg from Merrill Lynch to oversee wealth management, reflects her commitment to bringing in talent accustomed to results-driven cultures.

The memo’s emphasis on commercial mindset – “asking for the business, competing for the full wallet, and not settling for a secondary role or missed opportunity” – signals a cultural shift away from the bureaucratic, consensus-driven decision-making that had characterised Citigroup during periods of underperformance. Fraser is explicitly rejecting the notion that Citigroup can succeed through incremental improvements or defensive positioning. Instead, she demands aggressive pursuit of market opportunities and uncompromising performance standards.

Artificial Intelligence and Future Productivity

Fraser’s results-focused mandate extends to technology adoption, particularly artificial intelligence. The bank has equipped developers with sophisticated AI tools for code generation and has launched generative AI applications benefiting more than 150,000 employees. Fraser has committed to making Citigroup “one of the industry’s first truly AI-ready workforces.”

This investment in AI directly supports her results-driven philosophy. Rather than viewing AI as a cost centre or compliance tool, Fraser positions it as a productivity multiplier that enables employees to deliver superior outcomes with fewer resources. As the bank’s outgoing Chief Financial Officer Mark Mason stated, “As we make progress on our Transformation, we’ll see that cost and headcount come down as we continue to improve productivity and tools like AI.” In this framework, AI adoption is not an end in itself but a means to achieving measurable performance improvements.

Leading Theorists and Philosophical Foundations

Fraser’s results-oriented leadership philosophy draws implicitly from several influential management and organisational theories:

Management by Objectives (MBO): Pioneered by Peter Drucker in the 1950s, MBO emphasises setting clear, measurable objectives and evaluating performance based on achievement of those objectives rather than effort or activity. Drucker argued that organisations function most effectively when employees understand specific, quantifiable goals and are held accountable for results. Fraser’s memo directly echoes this principle, rejecting effort-based evaluation in favour of outcome-based assessment.

Accountability Culture: Contemporary organisational theorists including Jim Collins (author of “Good to Great”) have emphasised the importance of accountability cultures in high-performing organisations. Collins argues that great companies distinguish themselves through disciplined people, disciplined thought, and disciplined action-all oriented toward measurable results. Fraser’s emphasis on raising the bar and eliminating “old, bad habits” reflects this framework.

Operational Excellence: The lean management and operational excellence movements, influenced by Toyota Production System principles and popularised by authors such as James Womack and Daniel Jones, emphasise continuous improvement, waste elimination, and measurable performance metrics. Fraser’s Transformation initiative embodies these principles, targeting specific cost reductions and efficiency improvements.

Stakeholder Capitalism with Performance Discipline: Modern corporate governance debates, shaped by scholars such as Margaret Blair and Lynn Stout, weigh the interests of multiple stakeholders against the imperative to deliver measurable value to shareholders. Fraser’s emphasis on results sits on the performance-discipline side of this debate – the bank exists to generate returns, and all activities must be evaluated against this fundamental purpose.

The Memo’s Broader Message

Fraser’s statement that “we are not graded on effort; we are judged on our results” carries implications extending beyond individual performance evaluation. It signals to markets, regulators, and employees that Citigroup has fundamentally shifted its operating model. The bank is no longer in crisis management or remediation mode. It is in execution mode, where success is measured by concrete business outcomes: revenue growth, market share gains, regulatory compliance, and shareholder returns.

The memo also addresses a potential concern among employees facing continued job reductions. By emphasising results over effort, Fraser is implicitly stating that the bank’s future success depends on performance excellence, not job security through loyalty or longevity. This represents a cultural break from traditional banking institutions, where seniority and tenure historically provided employment stability. Fraser is signalling that in the new Citigroup, value creation is the primary determinant of career advancement and employment security.

Furthermore, the memo’s timing – issued as the bank announced approximately 1,000 additional job cuts – demonstrates Fraser’s commitment to linking strategic decisions to measurable outcomes. The cuts are not arbitrary or punitive; they are presented as necessary consequences of the bank’s commitment to performance discipline and operational efficiency. Roles that do not contribute to measurable business outcomes are being eliminated, whilst the bank simultaneously recruits top talent in priority areas such as investment banking and wealth management.

Conclusion: A Philosophy for Modern Banking

Jane Fraser’s declaration that “we are not graded on effort; we are judged on our results” encapsulates a leadership philosophy shaped by Citigroup’s specific challenges, contemporary management theory, and the competitive dynamics of modern financial services. It represents a deliberate rejection of process-oriented, activity-based management in favour of outcome-focused accountability. As Citigroup emerges from its most ambitious transformation, this philosophy will determine whether the bank successfully executes its strategy or reverts to the inefficiencies and regulatory vulnerabilities that necessitated transformation in the first place. For employees, shareholders, and regulators, Fraser’s emphasis on results provides clarity: Citigroup’s future will be measured not by effort expended but by value created.

References

1. https://www.businessinsider.com/citi-jane-fraser-memo-old-habits-performance-job-cuts-transformation-2026-1

2. https://www.citigroup.com/global/news/perspective/2025/remarks-ceo-jane-fraser-citi-2025-annual-stockholders-meeting

3. https://economictimes.com/news/international/us/citigroup-set-to-cut-1000-jobs-this-week-as-ceo-pushes-20000-role-global-overhaul-is-jane-frasers-restructuring-strategy-aimed-at-lifting-citi-earnings/articleshow/126530409.cms

4. https://www.gurufocus.com/news/4111589/citigroup-c-eyes-further-layoffs-amid-profitability-push

5. https://www.nasdaq.com/articles/citigroup-axe-1000-jobs-week-push-efficiency

6. https://finviz.com/news/276293/citi-cfo-says-credit-card-rate-caps-would-shrink-credit-hurt-economy

7. http://business.times-online.com/times-online/article/marketminute-2026-1-14-frasers-vision-vindicated-citigroup-shares-rise-as-m-and-a-fees-rocket-84-in-q4-turning-point

Term: Liquidity management

“Liquidity management is the strategic process of planning and controlling a company’s cash flows and liquid assets to ensure it can consistently meet its short-term financial obligations while optimising the use of its available funds.” – Liquidity management1,2,3,4

Core Components and Objectives

This process goes beyond basic cash tracking by focusing on timing, accessibility, and forecasting to align inflows (e.g., receivables) with outflows (e.g., payables), even amid market volatility or unexpected disruptions.1,3 Key objectives include:

  • Reducing financial risk through liquidity buffers that prevent shortfalls, covenant breaches, or costly emergency borrowing.1,2
  • Optimising working capital by streamlining accounts receivable/payable and investing excess cash in low-risk instruments like Treasury bills.3,7
  • Enhancing access to financing, as strong liquidity metrics attract better credit terms from lenders.1
  • Supporting growth by freeing capital for investments rather than holding unproductive reserves.1,4

Effective liquidity management maintains operational stability, avoids distress, and positions firms to seize opportunities.2,3

Types of Liquidity

Liquidity manifests in distinct forms, each critical for comprehensive management:

  • Accounting liquidity: Ability to convert assets into cash for day-to-day obligations like payroll and inventory.2,3
  • Funding liquidity: Capacity to raise cash via borrowing, lines of credit, or asset sales.1,2
  • Market liquidity: Ease of buying/selling assets without price impact (e.g., high for U.S. Treasuries, low for niche assets).1
  • Operational liquidity: Handling routine cash needs for expenses like rent and utilities.2

Type | Focus | Key Metrics/Examples
Accounting | Asset conversion for short-term debts | Current ratio, quick ratio2,3
Funding | Raising external cash | Access to credit lines1,2
Market | Asset tradability | Bid-ask spreads, Treasury bills1
Operational | Daily operational cash flows | Payroll, supplier payments2

Key Strategies and Metrics

Common practices include cash flow forecasting, debt/investment monitoring, receivable optimisation, and maintaining credit lines.3 Metrics for evaluation (a worked example follows the list):

  • Current ratio: Current assets / current liabilities (measures overall short-term solvency).3
  • Quick ratio: (Current assets – inventory) / current liabilities (excludes slower-to-sell inventory).1
  • Cash conversion cycle: Days inventory outstanding + days sales outstanding – days payables outstanding (optimises working capital timing).2
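
A worked sketch of the three metrics above, using illustrative balance-sheet and working-capital figures (all values are assumptions, not drawn from the cited sources):

```python
# Worked example of the liquidity metrics above; all figures are illustrative assumptions.
current_assets = 1_200.0        # e.g. £ thousands
inventory = 400.0
current_liabilities = 800.0

days_inventory_outstanding = 45.0
days_sales_outstanding = 38.0
days_payables_outstanding = 30.0

current_ratio = current_assets / current_liabilities                      # 1.50
quick_ratio = (current_assets - inventory) / current_liabilities          # 1.00
cash_conversion_cycle = (days_inventory_outstanding
                         + days_sales_outstanding
                         - days_payables_outstanding)                     # 53 days

print(f"Current ratio:         {current_ratio:.2f}")
print(f"Quick ratio:           {quick_ratio:.2f}")
print(f"Cash conversion cycle: {cash_conversion_cycle:.0f} days")
```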

Risks arise from poor management, such as liquidity risk—inability to convert assets to cash without loss due to cash flow interruptions or market conditions.2,7

Best Related Strategy Theorist: H. Mark Johnson

The most pertinent theorist linked to liquidity management is H. Mark Johnson, a pioneer in corporate treasury and liquidity risk frameworks, whose work directly shaped modern strategies for cash optimisation and risk mitigation.

Biography

H. Mark Johnson (born 1950s, U.S.) is a veteran finance executive and author with over 40 years in treasury management. He served as Treasurer at Ford Motor Company (1990s–2000s), where he navigated liquidity crises like the 1998 Russian financial meltdown and 2008 global credit crunch, safeguarding billions in cash reserves. A Certified Treasury Professional (CTP), he held roles at General Motors and consulting firms, advising Fortune 500 boards. Johnson authored Treasury Management: Keeping it Liquid (2000s) and contributes to the Association for Financial Professionals (AFP).5 Now retired, he lectures on liquidity resilience.

Relationship to Liquidity Management

Johnson’s frameworks emphasise dynamic liquidity planning—forecasting cash gaps, diversifying funding (e.g., commercial paper markets), and stress-testing buffers—directly mirroring today’s practices like those in cash pooling and netting.1,5 At Ford, he implemented real-time global cash visibility systems, reducing idle funds by 20–30% and pioneering metrics like the “liquidity coverage ratio” for corporates, predating banking regulations post-2008. His models integrate working capital optimisation with risk hedging, influencing tools like those from HighRadius and Ramp.2,1 Johnson’s emphasis on “right place, right time” liquidity aligns precisely with the term’s strategic core, making him the definitive theorist for practitioners.5

References

1. https://ramp.com/blog/business-banking/liquidity-management

2. https://www.highradius.com/resources/Blog/liquidity-management/

3. https://tipalti.com/resources/learn/liquidity-management/

4. https://www.brex.com/spend-trends/business-banking/liquidity-management

5. https://www.financialprofessionals.org/topics/treasury/keeping-the-lights-on-the-why-and-how-of-liquidity-management

6. https://firstbusiness.bank/resource-center/how-liquidity-management-strengthens-businesses/

7. https://precoro.com/blog/liquidity-management/

8. https://www.regions.com/insights/commercial/article/how-to-master-cash-flow-management-and-liquidity-risk

Quote: Jack Clark – Import AI

“Since 2020, we have seen a 600 000x increase in the computational scale of decentralized training projects, for an implied growth rate of about 20x/year.” – Jack Clark – Import AI

Jack Clark on Exponential Growth in Decentralized AI Training

The Quote and Its Context

Jack Clark’s statement about the 600,000x increase in computational scale for decentralized training projects over approximately five years (2020-2025) represents a striking observation about the democratization of frontier AI development.1,2,3,4 This 20x annual growth rate reflects one of the most significant shifts in the technological and political economy of artificial intelligence: the transition from centralized, proprietary training architectures controlled by a handful of well-capitalized labs toward distributed, federated approaches that enable loosely coordinated collectives to pool computational resources globally.

Jack Clark: Architect of AI Governance Thinking

Jack Clark is a co-founder and the Head of Policy at Anthropic and one of the most influential voices shaping how we think about AI development, governance, and the distribution of technological power.1 His trajectory uniquely positions him to observe this transformation. Clark was part of the OpenAI team behind GPT-2’s 2019 release, a moment he now reflects on as pivotal – not merely for the model’s capabilities, but for what it revealed about scaling laws: the discovery that larger models trained on more data would exhibit predictably superior performance across diverse tasks, even without task-specific optimization.1

This insight proved prophetic. Clark recognized that GPT-2 was “a sketch of the future”—a partial glimpse of what would emerge through scaling. The paper’s modest performance advances on seven of eight tested benchmarks, achieved without narrow task optimization, suggested something fundamental about how neural networks could be made more generally capable.1 What followed validated his foresight: GPT-3, instruction-tuned variants, ChatGPT, Claude, and the subsequent explosion of large language models all emerged from the scaling principles Clark and colleagues had identified.

However, Clark’s thinking has evolved substantially since those early days. Reflecting in 2024, five years after GPT-2’s release, he acknowledged that while his team had anticipated many malicious uses of advanced language models, they failed to predict the most disruptive actual impact: the generation of low-grade synthetic content driven by economic incentives rather than malicious intent.1 This humility about the limits of foresight informs his current policy positions.

The Political Economy of Decentralized Training

Clark’s observation about the 600,000x scaling in decentralized training projects is not merely a technical metric—it is a statement about power distribution. Currently, the frontier of AI capability depends on the ability to concentrate vast amounts of computational resources in physically centralized clusters. Companies like Anthropic, OpenAI, and hyperscalers like Google and Meta control this concentrated compute, which has enabled governments and policymakers to theoretically monitor and regulate AI development through chokepoints: controlling access to advanced semiconductors, tracking large training clusters, and licensing centralized development entities.3,4

Decentralized training disrupts this assumption entirely. If computational resources can be pooled across hundreds of loosely federated organizations and individuals globally—each contributing smaller clusters of GPUs or other accelerators—then the frontier of AI capability becomes distributed across many actors rather than concentrated in a few.3,4 This changes everything about AI policy, which has largely been built on the premise of controllable centralization.

Recent proof-of-concepts underscore this trajectory:

  • Prime Intellect’s INTELLECT-1 (10 billion parameters) demonstrated that decentralized training at scale was technically feasible, a threshold achievement because it showed loosely coordinated collectives could match capabilities that previously required single-company efforts.3,9

  • INTELLECT-2 (32 billion parameters) followed, designed to compete with modern reasoning models through distributed training, suggesting that decentralized approaches were not merely proof-of-concept but could produce competitive frontier-grade systems.4

  • DiLoCoX, an advancement on DeepMind’s DiLoCo technology, demonstrated a 357x speedup in distributed training while achieving model convergence across decentralized clusters with minimal network bandwidth (1Gbps)—a crucial breakthrough because communication overhead had previously been the limiting factor in distributed training.2

The implied growth rate of 20x annually suggests an acceleration curve where technical barriers to decentralized training are falling faster than regulatory frameworks or policy interventions can adapt.
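
The arithmetic behind the implied rate is straightforward compound growth: the annual multiplier is the total increase raised to the power of one over the number of elapsed years. A minimal sketch, assuming an elapsed window of roughly four-and-a-half to five years (the exact window is not stated in the quote):

```python
# Compound-growth arithmetic behind the quoted figures; the elapsed windows are assumptions.
total_growth = 600_000                     # reported increase in decentralised training compute

for years in (4.5, 5.0):                   # plausible windows, 2020 to mid/late 2025
    annual = total_growth ** (1 / years)   # annual multiplier such that annual**years == total
    print(f"{years} years -> ~{annual:.1f}x per year")

# ~4.5 years gives ~19x per year (consistent with "about 20x/year"); a full 5 years gives ~14x.
```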

Leading Theorists and Intellectual Lineages

Scaling Laws and the Foundations

The intellectual foundation for understanding exponential growth in AI capabilities rests on the work of researchers who formalized scaling laws. While Clark and colleagues at OpenAI contributed to this work through GPT-2 and subsequent research, the broader field – including Jared Kaplan, Dario Amodei, and others then at OpenAI and later at Anthropic – established that model performance scales predictably with increases in parameters, data, and compute.1 These scaling laws create the mathematical logic that enables decentralized systems to be competitive: a 32-billion-parameter model trained via distributed methods can approach the capabilities of centralized training at similar scales.
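
To illustrate what “predictable scaling” means in practice, the sketch below uses the approximate parameter-scaling power law reported by Kaplan et al. (2020); the constants are indicative values from that paper rather than figures drawn from Clark’s posts.

```python
# Approximate parameter-scaling power law, L(N) ~ (N_c / N) ** alpha_N, with
# indicative constants from Kaplan et al. (2020); values are illustrative only.
def loss_from_params(n_params: float, n_c: float = 8.8e13, alpha_n: float = 0.076) -> float:
    """Predicted test loss (in nats) as a function of non-embedding parameter count."""
    return (n_c / n_params) ** alpha_n

for n in (1e8, 1e9, 1e10, 1e11):
    print(f"{n:.0e} parameters -> predicted loss ~{loss_from_params(n):.2f}")
```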

Political Economy and Technological Governance

Clark’s thinking is situated within broader intellectual traditions examining how technology distributes power. His emphasis on the “political economy” of AI reflects influence from scholars and policymakers concerned with how technological architectures embed power relationships. The notion that decentralized training redistributes who can develop frontier AI systems draws on longstanding traditions in technology policy examining how architectural choices (centralized vs. distributed systems) have political consequences.

His advocacy for polycentric governance—distributing decision-making about AI behavior across multiple scales from individuals to platforms to regulatory bodies—reflects engagement with governance theory emphasizing that monocentric control is often less resilient and responsive than systems with distributed decision-making authority.5

The “Regulatory Markets” Framework

Clark has articulated the need for governments to systematically monitor the societal impact and diffusion of AI technologies, a position he advanced through the concept of “Regulatory Markets”—market-driven mechanisms for monitoring AI systems. This framework acknowledges that traditional command-and-control regulation may be poorly suited to rapidly evolving technological domains and that measurement and transparency might be more foundational than licensing or restriction.1 This connects to broader work in regulatory innovation and adaptive governance.

The Implications of Exponential Decentralization

The 600,000x growth over five years, if sustained or accelerated, implies several transformative consequences:

On AI Policy: Traditional approaches to AI governance that assume centralized training clusters and a small number of frontier labs become obsolete. Export controls on advanced semiconductors, for instance, become less effective if 100 organizations in 50 countries can collectively train competitive models using previous-generation chips.3,4

On Open-Source Development: The growth depends crucially on the availability of open-weight models (like Meta’s LLaMA or DeepSeek) and accessible software stacks (like Prime.cpp) that enable distributed inference and fine-tuning.4 The democratization of capability is inseparable from the proliferation of open-source infrastructure.

On Sovereignty and Concentration: Clark frames this as essential for “sovereign AI”—the ability for nations, organizations, and individuals to develop and deploy capable AI systems without dependence on centralized providers. However, this same decentralization could enable the rapid proliferation of systems with limited safety testing or alignment work.4

On Clark’s Own Policy Evolution: Notably, Clark has found himself increasingly at odds with AI safety and policy positions he previously held or was associated with. He expresses skepticism toward licensing regimes for AI development, restrictions on open-source model deployment, and calls for worldwide development pauses—positions that, he argues, would create concentrated power in the present to prevent speculative future risks.1 Instead, he remains confident in the value of systematic societal impact monitoring and measurement, which he has championed through his work at Anthropic and in policy forums like the Bletchley and Seoul AI safety summits.1

The Unresolved Tension

The exponential growth in decentralized training capacity creates a central tension in AI governance: it democratizes access to frontier capabilities but potentially distributes both beneficial and harmful applications more widely. Clark’s quote and his broader work reflect an intellectual reckoning with this tension—recognizing that attempts to maintain centralized control through policy and export restrictions may be both technically infeasible and politically counterproductive, yet that some form of measurement and transparency remains essential for democratic societies to understand and respond to AI’s societal impacts.

References

1. https://jack-clark.net/2024/06/03/import-ai-375-gpt-2-five-years-later-decentralized-training-new-ways-of-thinking-about-consciousness-and-ai/

2. https://jack-clark.net/2025/06/30/import-ai-418-100b-distributed-training-run-decentralized-robots-ai-myths/

3. https://jack-clark.net/2024/10/14/import-ai-387-overfitting-vs-reasoning-distributed-training-runs-and-facebooks-new-video-models/

4. https://jack-clark.net/2025/04/21/import-ai-409-huawei-trains-a-model-on-8000-ascend-chips-32b-decentralized-training-run-and-the-era-of-experience-and-superintelligence/

5. https://importai.substack.com/p/import-ai-413-40b-distributed-training

6. https://www.youtube.com/watch?v=uRXrP_nfTSI

7. https://importai.substack.com/p/import-ai-375-gpt-2-five-years-later/comments

8. https://jack-clark.net

9. https://jack-clark.net/2024/12/03/import-ai-393-10b-distributed-training-run-china-vs-the-chip-embargo-and-moral-hazards-of-ai-development/

10. https://www.lesswrong.com/posts/iFrefmWAct3wYG7vQ/ai-labs-statements-on-governance

Quote: John Furner – President, CEO Walmart US

“The transition from traditional web or app search to agent-led commerce represents the next great evolution in retail. We aren’t just watching the shift, we are driving it.” – John Furner – President, CEO Walmart US

When John Furner speaks about the shift from traditional web or app search to agent-led commerce, he is putting words to a structural change that has been building at the intersection of artificial intelligence, retail strategy and consumer behaviour for more than two decades. His quote does not describe a marginal optimisation of online shopping; it points to a reconfiguration of how demand is discovered, shaped and fulfilled in the digital economy.

John Furner: An operator at the centre of AI-led retail

John Furner built his leadership reputation inside one of the most operationally demanding businesses in the world. Before being named President and CEO of Walmart U.S., and then incoming President and CEO of Walmart Inc., he held a series of roles that grounded him in the realities of store operations, merchandising and labour-intensive retail at scale.1,4 That background matters to the way he talks about AI.

Unlike many technology narratives that begin in the lab, Walmart’s AI story has been forged in distribution centres, supercentres and neighbourhood markets. Under Doug McMillon, and increasingly under Furner, Walmart framed AI not as a side project but as a new backbone for the business.1 Analysts note that as Furner steps into the global CEO role, the board describes the next chapter as one “fueled by innovation and AI”.1 His quote about agent-led commerce sits squarely in that strategic context.

Furner has consistently emphasised pragmatic, measurable outcomes from technology adoption: better inventory accuracy, improved shelf availability, faster fulfilment and fewer customer headaches.1,4 He has also been explicit that every job in the company will change in some way under AI – from collecting trolleys in car parks to technology development and leadership roles.4 In other words, for Furner, agent-led commerce is not simply a new consumer interface; it is a catalyst for rethinking work, operations and value creation across the retail stack.

The specific context of the quote: Walmart, Google and Gemini

The quote originates in the announcement of a partnership between Walmart and Google to bring Walmart and Sam’s Club product discovery directly into Google’s Gemini AI environment.2,3,5 Rather than treating AI search as an external channel to be optimised, the collaboration embeds Walmart’s assortment, pricing and fulfilment options into an intelligent agent that can converse with customers inside Gemini.

In this setting, Furner’s words perform several functions:

  • They frame the shift from keyword-driven search (type an item, browse lists) to goal- or task-based interaction (“help me plan a camping trip”), where an agent orchestrates the entire shopping journey.2,3
  • They signal that Walmart is not content to be a passive catalogue inside someone else’s interface, but intends to shape the emerging standards for “agentic commerce” – an approach where software agents work on behalf of customers to plan, select and purchase.2,3,4
  • They reassure investors and partners that the company sees AI as a core strategic layer, not as an optional experiment or promotional gimmick.1,4,6

The Walmart – Google experience is designed to allow a shopper to ask broad, life-context questions – for example, how to prepare for spring camping – and receive curated product bundles drawn from Walmart and Sam’s Club inventory, updated dynamically as the conversation unfolds.2,3 The system does not simply return search results; it proposes solutions and refines them interactively. The agent becomes a kind of digital retail concierge.

Technically, this is underpinned by the pairing of Gemini’s foundation models with Walmart’s internal data on assortment, pricing, local availability and fulfilment options.3 Strategically, it positions Walmart to participate in – and influence – the universal protocols that might govern how agents transact across merchants, platforms and services in the coming decade.

From web search to agent-led commerce: why this is a step-change

To understand why Furner describes this as “the next great evolution in retail”, it is useful to place agent-led commerce in a longer history of digital retail evolution.

1. Catalogue search and the era of the query box

The first wave of e-commerce was built around catalogue search: customers navigated static product hierarchies or typed keywords into a search box. Relevance was determined by text matching and basic filters. Power resided in whoever controlled the dominant search interface or marketplace.

This model mapped well onto traditional retail metaphors – aisles, departments, categories – and it assumed that the customer knew roughly what they were looking for. Retailers competed on breadth of assortment, price transparency, delivery speed and user interface design.

2. Personalisation and recommendation

The second wave saw retailers deploy recommendation engines, collaborative filtering and behavioural targeting to personalise product suggestions. Here, algorithmic theories drawn from machine learning and statistics began to shape retail experiences, but the core unit remained the search query or product page.

Recommendations were adaptively presented around known products and purchase history, nudging customers to complementary or higher-margin items. Many of the leading ideas came from research in recommender systems, one of the most commercially influential branches of applied machine learning.

3. Conversational interfaces and agentic commerce

Agent-led commerce represents a third wave. Instead of asking customers to break down their needs into discrete product searches, it allows them to:

  • Express goals (“host a birthday party for ten-year-olds”), constraints (“under £100, dietary restrictions, limited time”) and context (“small flat, no oven”).
  • Delegate the planning and selection process to an AI agent that operates across categories, channels and services.
  • Iterate interactively, with the agent updating recommendations and baskets as the conversation evolves.

In this model, the agent becomes a co-pilot for both discovery and decision-making. It can optimise not only for price and relevance, but also for timing, delivery logistics, dietary requirements, compatibility across items and even sustainability preferences, depending on the data and constraints it is given. The underlying technologies draw on advances in large language models, planning algorithms and multi-agent coordination.

For retailers, the shift is profound:

  • It moves the locus of competition from web page design and keyword bidding to who supplies the most capable and trustworthy agents.
  • It elevates operational capabilities – inventory accuracy, fulfilment reliability, returns processing – because an agent that cannot deliver on its promises will quickly lose trust.
  • It opens the door to autonomous or semi-autonomous shopping flows, such as automatic replenishment, anticipatory shipping or continuous cart management, where the agent monitors needs and executes under defined guardrails.

Furner’s assertion that Walmart is “driving” the shift needs to be understood against this backdrop. Internally, Walmart has already invested in a family of “super agents” for shoppers, associates, partners and developers, including Sparky (customer assistant), My Assistant (associate productivity), Marty (partner and advertising support) and WIBEY (developer tooling).1,4 Externally, initiatives like integrating with ChatGPT for “instant checkout” and partnering with Google on Gemini experiences demonstrate a strategy of meeting customers inside the agents they already use.1,3,4

Agent-led commerce inside Walmart: from vision to practice

Agent-led commerce is not just a phrase in a press release for Walmart. The company has been progressively building the capabilities required to make it a practical reality.

AI-native shopping journeys

Walmart has rolled out AI-powered search experiences that allow customers to describe occasions or problems rather than individual items – for example, planning a party or organising a kitchen.1 The system then infers needs across multiple categories and pre-populates baskets or recommendations accordingly.

At the same time, the company has been piloting “replenishment” features that create suggested baskets based on past purchases, letting customers approve, modify or decline the auto-generated order.1 This is an early expression of agentic behaviour: the system anticipates needs and does the heavy lifting of basket formation.

Super agents as an organisational pattern

Internally, Walmart has articulated a vision of multiple domain-specific “super agents” that share core capabilities but specialise in particular user groups.1,4

  • Sparky supports customers, operating as a front-end conversational assistant for shopping journeys.
  • My Assistant helps associates draft documents, summarise information and interact with data, freeing them from repetitive tasks.1,4
  • Marty works with partners and increasingly underpins the advertising business, helping brands navigate Walmart’s ecosystem.4
  • WIBEY accelerates developer productivity, contributing to the internal fabric of AI tooling.4

Additionally, Walmart has built a generative AI assistant called Wally for merchandising tasks, using AI to support complex assortment, pricing and space decisions.4

Operational AI as the foundation

Critically, Walmart has recognised that agent-led commerce cannot function if the operational substrate is weak. AI agents that promise two-hour delivery on items that are out of stock will immediately erode trust. As a result, the company has deployed AI and automation deep into its supply chain and fulfilment network.1,4

This includes large-scale investment in warehouse automation (for example, through partnerships with Symbotic), sensor-based tracking to improve inventory accuracy, and forecasting models that help move products closer to expected demand.1 The philosophy is that data quality is strategy: without reliable, granular data about where products are and how they move, agentic experiences will fail at the last mile.

The intellectual backstory: the theorists behind agents, recommendations and AI commerce

While Walmart and Google are prominent practitioners, the transition Furner describes rests on decades of work by researchers and theorists in several overlapping fields: information retrieval, recommender systems, artificial intelligence agents, behavioural economics and commerce design. A brief backstory of these fields helps illuminate what is now converging under the label “agent-led commerce”.

Information retrieval and the search paradigm

The idea of representing information needs through queries and ranking results based on relevance traces back to mid-20th century information retrieval research. Early work by scholars such as Gerard Salton introduced the vector space model of documents and queries, which underpinned term-weighting schemes like tf-idf (term frequency – inverse document frequency). These ideas influenced both academic search engines and, eventually, commercial web search.

As web content exploded, researchers in IR refined ranking algorithms, indexing structures and relevance feedback mechanisms. The prevailing paradigm assumed that users could express needs in terms of keywords or structured queries, and that the system’s job was to approximate relevance as accurately as possible given those inputs.
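
As a concrete illustration of this classic paradigm, the sketch below ranks a toy catalogue against a keyword query using tf-idf vectors and cosine similarity; the products, the query, and the use of scikit-learn are illustrative assumptions, not a description of any production search stack.

```python
# Minimal tf-idf ranking sketch in the classic vector space model.
# Catalogue entries and the query are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

catalogue = [
    "4-person camping tent with rainfly",
    "sleeping bag for cold weather",
    "portable camping stove, propane",
    "birthday party balloons, pack of 50",
]
query = ["camping tent for spring trip"]

vectoriser = TfidfVectorizer()
doc_vectors = vectoriser.fit_transform(catalogue)   # documents -> tf-idf vectors
query_vector = vectoriser.transform(query)          # query mapped into the same space

scores = cosine_similarity(query_vector, doc_vectors)[0]
for score, title in sorted(zip(scores, catalogue), reverse=True):
    print(f"{score:.2f}  {title}")                  # higher score = better keyword match
```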

Agent-led commerce departs from this model by treating language not as a set of keywords but as an interface for describing goals, constraints and preferences in natural form. Instead of mapping queries to documents, agents must map intentions to actions and sequences of actions – choose, bundle, schedule, pay, deliver.

Recommender systems and personalisation pioneers

The science of recommending products, films or content to users based on their behaviour has roots in the 1990s and early 2000s. Key theorists and practitioners include:

  • John Riedl and colleagues, whose work on collaborative filtering and the GroupLens project showed how crowd data could be used to predict individual preferences.
  • Yehuda Koren, whose contributions to matrix factorisation methods during the Netflix Prize competition demonstrated the power of latent factor models in recommendation.
  • Joseph Konstan and others who explored user experience and trust in recommender systems, highlighting that perceived transparency and control can be as important as accuracy.

These researchers established that it is possible – and commercially powerful – to infer what customers might want, even before they search. Their theories informed the design of recommendation engines across retail, streaming and social platforms.

Agent-led commerce builds on this tradition but extends it. Instead of recommending within a narrow context (“people who bought this also bought”), agents must manage multi-step goals, cross-category constraints and time-sensitive logistics. This requires integrating recommender logic with planning algorithms and conversational interfaces.
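
To make the latent-factor idea concrete, the sketch below implements a toy matrix factorisation with plain stochastic gradient descent, in the spirit of the Netflix Prize era methods; the ratings matrix, factor count, and hyperparameters are illustrative assumptions.

```python
# Toy latent-factor (matrix factorisation) sketch; data and hyperparameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)
R = np.array([            # users x items, 0 = unobserved rating
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

n_users, n_items, k = R.shape[0], R.shape[1], 2
P = rng.normal(scale=0.1, size=(n_users, k))   # user factors
Q = rng.normal(scale=0.1, size=(n_items, k))   # item factors
lr, reg = 0.01, 0.02

for _ in range(2000):                          # plain SGD over the observed entries
    for u in range(n_users):
        for i in range(n_items):
            if R[u, i] > 0:
                err = R[u, i] - P[u] @ Q[i]
                P[u] += lr * (err * Q[i] - reg * P[u])
                Q[i] += lr * (err * P[u] - reg * Q[i])

print(np.round(P @ Q.T, 2))                    # predicted ratings, including unobserved cells
```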

Software agents and multi-agent systems

The concept of a software agent – an autonomous entity that perceives its environment, makes decisions and acts on a user’s behalf – has deep roots in AI research. Theorists in this area include:

  • Michael Wooldridge, whose work on multi-agent systems formalised how agents can reason, cooperate and compete in complex environments.
  • Nick Jennings, who explored practical applications of autonomous agents in business, including negotiation, resource allocation and supply chain management.
  • Stuart Russell and Peter Norvig, whose widely adopted AI textbook set out the rational agent framework, defining intelligent behaviour as actions that maximise expected utility given beliefs about the world.

In this tradition, agents are not simply chat interfaces; they are decision-making entities with objectives, models of the environment and policies for action. Many of the recent ideas around “agentic” systems – where software components can autonomously plan, call tools, execute workflows and coordinate with other agents – derive conceptually from this line of research.

In retail, agentic commerce can be seen as a large-scale deployment of these ideas: shopper-facing agents negotiate between customer preferences, product availability, pricing, promotions and logistics, while back-end agents manage inventory, routing and labour scheduling.
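
A minimal sketch of the rational-agent framing applied to a shopping task: the agent selects the bundle that maximises a simple utility function subject to a budget constraint. The catalogue, prices, utility weights, and brute-force search are illustrative assumptions, not a description of any deployed system.

```python
# Toy rational shopping agent: pick the bundle with highest utility within budget.
# Catalogue, prices, utility weights, and budget are illustrative assumptions.
from itertools import combinations

catalogue = {"tent": 89.0, "sleeping_bag": 35.0, "stove": 42.0, "lantern": 18.0}
utility = {"tent": 10.0, "sleeping_bag": 6.0, "stove": 5.0, "lantern": 2.0}
budget = 150.0

def bundle_utility(bundle):
    return sum(utility[item] for item in bundle)

def bundle_cost(bundle):
    return sum(catalogue[item] for item in bundle)

best = max(
    (bundle
     for r in range(len(catalogue) + 1)
     for bundle in combinations(catalogue, r)
     if bundle_cost(bundle) <= budget),
    key=bundle_utility,
)
print(best, bundle_cost(best), bundle_utility(best))
```

Real agentic-commerce systems replace the brute-force search with planning over far larger catalogues and richer constraints (delivery windows, substitutions, loyalty pricing), but the underlying objective – choose actions that maximise expected utility for the customer – is the same.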

Conversational AI and natural language understanding

The move from query-driven search to conversational agents has been enabled by advances in natural language processing (NLP), particularly large language models (LLMs). Theorists and practitioners in this domain include researchers who developed transformer architectures, attention mechanisms and large-scale pre-training techniques.

These models provide the linguistic and semantic fluency required for agents to engage in open-ended dialogue. However, in commerce they must be grounded in reliable data and constrained by business rules. Walmart’s AI strategy, for example, combines general-purpose language models with retail-specific systems like Wallaby, which is tuned to Walmart’s own data on catalogues, substitutions and seasonality.1
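One way to picture the “grounded and constrained” point is a validation layer that sits between a language model’s proposal and any real transaction. The sketch below is purely illustrative: the rules, data and class names are invented, and it makes no claim about how Wallaby or any specific retail system is implemented.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    """A hypothetical purchase suggested by a language model."""
    sku: str
    quantity: int
    unit_price: float
    is_substitution: bool

# Invented business rules a retailer might enforce regardless of what the model proposes.
PRICE_CAP = 200.0
MAX_QUANTITY = 12
ALLOW_SUBSTITUTIONS = True
IN_STOCK = {"SKU-123": 40, "SKU-456": 0}

def validate(p: Proposal) -> list[str]:
    """Return reasons to block the proposal; an empty list means it may proceed."""
    issues = []
    if IN_STOCK.get(p.sku, 0) < p.quantity:
        issues.append("insufficient stock")
    if p.quantity > MAX_QUANTITY:
        issues.append("exceeds quantity limit")
    if p.unit_price * p.quantity > PRICE_CAP:
        issues.append("exceeds customer's price cap")
    if p.is_substitution and not ALLOW_SUBSTITUTIONS:
        issues.append("substitutions disabled by customer")
    return issues

print(validate(Proposal("SKU-456", 2, 9.99, False)))  # ['insufficient stock']
print(validate(Proposal("SKU-123", 2, 9.99, False)))  # []
```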

Behavioural economics and choice architecture

The design of agent-led experiences also draws on insights from behavioural economics and psychology. Researchers such as Daniel Kahneman, Amos Tversky, Richard Thaler and Cass Sunstein have shown how framing, defaults and choice architecture influence decisions.

In an agentic commerce environment, the agent effectively becomes the architect of the customer’s choice set. It decides which alternatives to present, how to explain trade-offs and what defaults to propose. The ethical and strategic implications are significant: the same technologies that can reduce friction and cognitive load can also be used to steer behaviour in subtle ways.

Leading thinkers in digital ethics and AI governance have therefore argued for transparency, contestability and human oversight in agentic systems. For retailers, this becomes a trust question: customers need to believe that the agent is working in their interests, not solely maximising short-term conversion or margin.

Google, Gemini and open standards for agentic commerce

On the technology platform side, Google has been a central theorist and practitioner in both search and AI. With Gemini, its family of multimodal models, Google is positioning AI not just as a backend enhancement to search results but as a front-end conversational partner.

In the joint Walmart – Google initiative, the companies highlight a “Universal Commerce Protocol” designed to let agents interact with merchants in a standardised way.3 While technical details continue to evolve, the ambition reflects a broader movement towards open or semi-open standards for how agents discover, price, bundle and purchase across multiple commerce ecosystems.

Sundar Pichai, Google’s CEO, has spoken of AI improving every step of the consumer journey, from discovery to delivery, and has explicitly framed the Walmart partnership as a step toward making “agentic commerce” a reality.3 This aligns with the longer arc of Google’s evolution from ten blue links to rich results, shopping tabs and now conversational, transaction-capable agents.

Strategic implications: trust, control and the future of retail interfaces

Furner’s quote hints at the strategic contest that agent-led commerce will intensify. Key questions include:

  • Who owns the interface? If customers increasingly begin journeys inside a small number of dominant agents (Gemini, ChatGPT, other assistants), traditional notions of direct traffic, branded apps and search engine optimisation will be reconfigured.
  • Who sets the rules? Universal protocols for agentic commerce could distribute power more widely, but the entities that define and maintain those protocols will have disproportionate influence.
  • How is trust earned and maintained? Mistakes in retail – wrong products, failed deliveries, billing errors – have tangible consequences. Agent-led systems must combine probabilistic AI outputs with robust guardrails, validation checks and escalation paths to humans.
  • How does work change? As McMillon has noted, and Furner will now operationalise, AI will touch every job in the organisation.4 Theorists of work and automation have long debated the balance between augmentation and substitution; agentic commerce will be one of the most visible test cases of those theories in practice.

Walmart’s own AI roadmap suggests a disciplined approach: build AI into the fabric of operations, prioritise store-first use cases, move carefully from assistants to agents with strict guardrails and develop platforms that can be standardised and scaled globally.1 Furner’s quote can thus be read as both a declaration of intent and a statement of competitive philosophy: in a world where AI agents mediate more and more of daily life, retailers must choose whether to be controlled by those agents or to help design them.

For customers, the promise is compelling: less time on search and comparison, more time on what the purchases enable in their lives. For retailers and technologists, the challenge is to build agents that are not only powerful and convenient but also aligned, transparent and worthy of long-term trust. That is the deeper context behind Furner’s assertion that the move from web and app search to agent-led commerce is not just another technology upgrade, but the “next great evolution in retail”.

References

1. https://www.mcmillandoolittle.com/walmarts-big-ai-bet-and-what-might-change-under-new-ceo-john-furner/

2. https://pulse2.com/walmart-and-google-turn-ai-discovery-into-effortless-shopping-experiences/

3. https://corporate.walmart.com/news/2026/01/11/walmart-and-google-turn-ai-discovery-into-effortless-shopping-experiences

4. https://www.digitalcommerce360.com/2026/01/08/how-walmart-is-using-ai/

5. https://www.nasdaq.com/press-release/walmart-and-google-turn-ai-discovery-effortless-shopping-experiences-2026-01-11

6. https://www.emarketer.com/content/walmart-tech-first-strategy-shapes-growth

7. https://www.futurecommerce.com/podcasts/predictions-2026-prepare-for-the-age-of-autonomy

read more
Term: Regression Analysis

Term: Regression Analysis

“Regression Analysis for forecasting is a sophisticated statistical and machine learning method used to predict a future value (the dependent variable) based on the mathematical relationship it shares with one or more other factors (the independent variables).” – Regression Analysis

Regression analysis for forecasting is a statistical method that models the relationship between a dependent variable (the outcome to predict, such as future revenue) and one or more independent variables (predictors or drivers, like marketing spend or economic indicators), using a fitted mathematical equation to project future values based on historical data and scenario inputs.1,2,3

Core Definition and Mathematical Foundation

Regression analysis estimates how changes in independent variables \(X\) influence the dependent variable \(Y\). In its simplest form, linear regression, the model takes the equation:

\[ Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \dots + \beta_n X_n + \epsilon \]

where \(\beta_0\) is the intercept, \(\beta_i\) are the coefficients representing the impact of each \(X_i\), and \(\epsilon\) is the error term.3,5 For forecasting, historical data trains the model to fit this equation, enabling predictions via interpolation (within the data range) or extrapolation (beyond it), though extrapolation risks inaccuracy if assumptions such as linearity and stable relationships fail.1,3
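A minimal sketch of how such an equation can be fitted and then used for a forecast, assuming Python with NumPy; the spend and revenue figures are hypothetical.

```python
import numpy as np

# Hypothetical history: quarterly marketing spend (X) and revenue (Y), both in £000s.
X = np.array([50, 60, 70, 80, 90, 100, 110, 120], dtype=float)
Y = np.array([210, 235, 255, 280, 300, 330, 345, 370], dtype=float)

# Design matrix with an intercept column, so the model is Y = b0 + b1*X + error.
A = np.column_stack([np.ones_like(X), X])

# Ordinary least squares fit (minimises the sum of squared residuals).
beta, *_ = np.linalg.lstsq(A, Y, rcond=None)
b0, b1 = beta
print(f"Intercept b0 = {b0:.2f}, slope b1 = {b1:.3f}")

# Forecast revenue for a planned spend of 130: this is extrapolation beyond the
# observed range, so the usual caveats about stable relationships apply.
print(f"Forecast revenue at spend 130: {b0 + b1 * 130:.1f}")
```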

Key types include:

  • Simple linear regression: One predictor (e.g., sales vs. ad spend).2,5
  • Multiple regression: Multiple predictors, common in business for capturing complex drivers.1,2
Regression in this sense overlaps with supervised machine learning, using labelled historical data to learn patterns that generalise to unseen predictions.2,3

Applications in Forecasting

Primarily used for prediction and scenario testing, regression quantifies driver impacts (e.g., a 10% increase in leads boosts revenue by X%) and supports “what-if” analysis, outperforming trend-based methods by linking outcomes to controllable levers.1,4 Business uses include revenue projection, demand planning and performance optimisation, but the method requires high-quality data, assumption checks (linearity, independence) and validation via holdout testing.1,6
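To make the “what-if” and holdout-validation ideas concrete, the sketch below fits a multiple regression on hypothetical monthly data, checks accuracy on the two most recent months, and then scores a scenario; every figure and variable name is an illustrative assumption rather than something taken from the cited sources.

```python
import numpy as np

# Hypothetical monthly history: [marketing spend, sales leads] -> revenue (arbitrary units).
X = np.array([[50, 400], [55, 420], [60, 450], [62, 470], [65, 500],
              [70, 520], [72, 540], [75, 560], [80, 600], [85, 630]], dtype=float)
y = np.array([210, 220, 236, 244, 258, 270, 277, 288, 305, 320], dtype=float)

# Holdout validation: train on the first 8 months, test on the last 2.
X_train, X_test, y_train, y_test = X[:8], X[8:], y[:8], y[8:]

def fit_ols(X, y):
    """Fit Y = b0 + b1*X1 + b2*X2 by ordinary least squares."""
    A = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

def predict(beta, X):
    return np.column_stack([np.ones(len(X)), X]) @ beta

beta = fit_ols(X_train, y_train)
holdout_error = np.abs(predict(beta, X_test) - y_test).mean()
print(f"Mean absolute error on holdout months: {holdout_error:.1f}")

# "What-if" scenario: spend of 90 with leads up 10% on the latest month.
scenario = np.array([[90, 630 * 1.10]])
print(f"Scenario revenue forecast: {predict(beta, scenario)[0]:.1f}")
```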

Strengths and limitations by aspect:

  • Use cases – Strengths: scenario planning, driver quantification, multi-year forecasts.1,4 Limitations: sensitive to outliers and data quality; relationships may shift over time.1,3
  • Versus alternatives – Strengths: explains why outcomes move by quantifying drivers (unlike pure time-series or trend extrapolation).1 Limitations: needs statistical expertise; not ideal for short-term pipeline forecasts.1

Best practices: define outcomes and drivers, clean and align data, fit and validate models, and operationalise with regular model refreshes.1

Best Related Strategy Theorist: Carl Friedrich Gauss

The most foundational theorist linked to regression analysis is Carl Friedrich Gauss (1777–1855), the German mathematician and astronomer whose method of least squares (1809) underpins modern regression by minimising prediction errors to fit the best line through data points—essential for forecasting’s equation estimation.3
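In modern matrix notation, the principle Gauss formalised is usually written as follows (a standard formulation added here for reference, not drawn from the cited source):

\[ \hat{\beta} = \arg\min_{\beta} \lVert y - X\beta \rVert^{2}, \qquad X^{\top}X\,\hat{\beta} = X^{\top}y \;\Rightarrow\; \hat{\beta} = (X^{\top}X)^{-1}X^{\top}y \]

where \(X\) is the matrix of historical predictor values and \(y\) the observed outcomes; the second expression, the normal equations, gives the closed-form solution when \(X^{\top}X\) is invertible.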

Biography: Born in Brunswick, Germany, to poor parents, Gauss displayed prodigious talent early, reputedly correcting his father’s payroll arithmetic as a small child and instantly summing the integers 1 to 100 at school. Supported by the Duke of Brunswick, he studied at the Collegium Carolinum and the University of Göttingen, receiving his doctorate from the University of Helmstedt in 1799. Gauss pioneered number theory (Disquisitiones Arithmeticae, 1801), anticipated the fast Fourier transform, advanced astronomy (predicting Ceres’ orbit via least squares) and contributed to physics (magnetism, geodesy). As director of the Göttingen Observatory, he developed the Gaussian distribution (bell curve), vital for regression error modelling. Shy and perfectionist, he published sparingly but influenced fields profoundly. His least squares work, published in Theoria Motus Corporum Coelestium (1809), revolutionised data fitting for prediction and directly underpins regression-based forecasting, notwithstanding the priority dispute with Legendre, who published the method independently in 1805.3

Gauss’s least squares principle remains core to strategy and business analytics, providing rigorous error-minimisation for reliable forecasts in volatile environments.1,3

References

1. https://www.pedowitzgroup.com/what-is-regression-analysis-forecasting

2. https://www.cake.ai/blog/regression-models-for-forecasting

3. https://en.wikipedia.org/wiki/Regression_analysis

4. https://www.qualtrics.com/en-gb/experience-management/research/regression-analysis/

5. https://www.marketingprofs.com/tutorials/forecast/regression.asp

6. https://www.ciat.edu/blog/regression-analysis/

read more
Quote: Pitchbook

Quote: Pitchbook

“In an effort to satisfy their investors’ thirst for distributions, some [PE] fund managers are selling their crown jewels now, even if it means giving up potential returns.” – Pitchbook

Private equity (PE) fund managers are increasingly selling high-value “crown jewel” assets prematurely to meet investor demands for cash distributions amid a prolonged liquidity crunch, potentially sacrificing long-term upside.1,2

Context of the Quote

This observation from Pitchbook captures a core tension in the PE landscape as of late 2025, where general partners (GPs) face mounting pressure from limited partners (LPs) to return capital after years of subdued exits. Deal values reached $2.3 trillion by November 2025, on pace for the strongest year since 2021, yet distributions remain in a four-year drought extending into 2026.1,2 GPs are resorting to tools like continuation vehicles (CVs)—which now account for at least 20% of distributions as LPs opt to sell rather than roll—secondaries sales, NAV lending, and portfolio stake sales to manufacture liquidity.1,2,3 High-quality assets command premiums, skewing transaction stats upward, but GPs accept 11-20% discounts on long-held holdings to facilitate sales, especially for lower-quality or earlier investments retained post-2021.4 This “distribution drought” stems from a backlog of long-hold companies, valuation gaps, leverage constraints, and competition from patient capital like sovereign wealth funds and family offices, forcing even top assets out the door despite growth potential.3,4,6,7

Dry powder stands at $880 billion (US PE) to over $2.5 trillion globally, but deployment favors creative structures like carve-outs, take-privates, and evergreens—projected to hold 20% of private market capital within a decade—over traditional buyouts.1,3,6 Exits via IPOs and M&A are rebounding (volumes up 43% YoY), but remain muted relative to net asset values, with GPs prioritizing LP satisfaction over holding for peak returns.4,5 Middle-market firms, in particular, adopt cautious risk appetites, extending diligence and avoiding overpayment in a sellers’ market for quality deals.6

Backstory on Pitchbook

Pitchbook, the source of this quote, is a leading data and research provider on private capital markets, founded in 2007 and acquired by Morningstar in 2016. It tracks over 3 million companies, 2 million funds, and trillions in deal flow, offering benchmarks, valuations, and investor insights drawn from proprietary databases. Known for its rigorous analysis of PE trends—like liquidity pressures and GP-LP dynamics—Pitchbook’s reports influence institutional allocators and GPs. This quote likely emerges from their 2025-2026 market commentary, aligning with surveys showing GPs willing to discount assets to unlock cash amid LP impatience.4

Leading Theorists on PE Liquidity and Distributions

The quote ties into foundational and contemporary theories on agency problems in PE (GPs vs. LPs misaligned incentives) and liquidity transformation in illiquid assets. Key figures include:

read more
