News and Tools

Terms

 

A daily selection of business terms and their definitions / application.

Term: Jagged Edge of AI

“The ‘jagged edge of AI’ refers to the inconsistent and uneven nature of current artificial intelligence, where models excel at some complex tasks (like writing code) but fail surprisingly at simpler ones, creating unpredictable performance gaps that require human oversight.” – Jagged Edge of AI

The “jagged edge” or “jagged frontier of AI” is the uneven boundary of current AI capability, where systems are superhuman at some tasks and surprisingly poor at others of seemingly similar difficulty, producing erratic performance that cannot yet replace human judgement and requires careful oversight.4,7

At this jagged edge, AI models can:

  • Excel at tasks like reading, coding, structured writing, or exam-style reasoning, often matching or exceeding expert-level performance.1,2,7
  • Fail unpredictably on tasks that appear simpler to humans, especially when they demand robust memory, context tracking, strict rule-following, or real-world common sense.1,2,4

This mismatch has several defining characteristics:

  • Jagged capability profile
    AI capability does not rise smoothly; instead, it forms a “wall with towers and recesses” – very strong in some directions (e.g. maths, classification, text generation), very weak in others (e.g. persistent memory, reliable adherence to constraints, nuanced social judgement).2,3,4
    Researchers label this pattern the “jagged technological frontier”: some tasks are easily done by AI, while others, though seemingly similar in difficulty, lie outside its capability.4,7

  • Sensitivity to small changes
    Performance can swing dramatically with minor changes in task phrasing, constraints, or context.4
    A model that handles one prompt flawlessly may fail when the instructions are reordered or slightly reworded, which makes behaviour hard to predict without systematic testing.

  • Bottlenecks and “reverse salients”
    The jagged shape creates bottlenecks: single weak spots (such as memory or long-horizon planning) that limit what AI can reliably automate, even when its raw intelligence looks impressive.2
    When labs solve one such bottleneck – a reverse salient – overall capability can suddenly lurch forward, reshaping the frontier while leaving new jagged edges elsewhere.2

  • Implications for work and organisation design
    Because capability is jagged, AI tends not to uniformly improve or replace jobs; instead it supercharges some tasks and underperforms on others, even within the same role.6,7
    Field experiments with consultants show large productivity and quality gains on tasks inside the frontier, but far less help – or even harm – on tasks outside it.7
    This means roles evolve towards managing and orchestrating AI across these edges: humans handle judgement, context, and exception cases, while AI accelerates pattern-heavy, structured work.2,4,6

  • Need for human oversight and “AI literacy”
    Because the frontier is jagged and shifting, users must continuously probe and map where AI is trustworthy and where it is brittle.4,8
    Effective use therefore requires AI literacy: knowing when to delegate, when to double-check, and how to structure workflows so that human review covers the weak edges while AI handles its “sweet spot” tasks.4,6,8
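A minimal sketch of what such probing might look like in practice (not drawn from the cited sources; `call_model` is a hypothetical stand-in for whatever LLM client a team uses): run several phrasings of the same task repeatedly and record how often each succeeds, giving a crude map of where the frontier is jagged.

```python
# Minimal sketch (not from the cited sources): probing prompt variants to map
# where a model is reliable and where it is brittle. `call_model` is a
# hypothetical stand-in for whatever LLM client an organisation uses.
from collections import defaultdict

def call_model(prompt: str) -> str:
    """Placeholder for a real LLM call; returns a canned answer here."""
    return "42"

# The same underlying task phrased several ways, with the expected answer.
PROBES = {
    "arithmetic": [
        ("What is 6 times 7?", "42"),
        ("Compute 6*7 and reply with only the number.", "42"),
        ("If a box holds 6 rows of 7 apples, how many apples are there?", "42"),
    ],
}

def map_frontier(probes, trials: int = 3):
    """Run each phrasing several times and record how often it succeeds."""
    scores = defaultdict(list)
    for task, variants in probes.items():
        for prompt, expected in variants:
            hits = sum(expected in call_model(prompt) for _ in range(trials))
            scores[task].append((prompt, hits / trials))
    return scores

if __name__ == "__main__":
    for task, results in map_frontier(PROBES).items():
        print(task)
        for prompt, rate in results:
            print(f"  {rate:.0%}  {prompt}")
```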

In strategic and governance terms, the jagged edge of AI is the moving boundary where:

  • AI is powerful enough to transform tasks and workflows,
  • but uneven and unpredictable enough that unqualified automation is risky,
  • creating a premium on hybrid human–AI systems, robust guardrails, and continuous testing.1,2,4

Strategy theorist: Ethan Mollick and the “Jagged Frontier”

The strategist most closely associated with the jagged edge/frontier of AI in practice and management thinking is Ethan Mollick, whose work has been pivotal in defining how organisations should navigate this uneven capability landscape.2,3,4,7

Relationship to the concept

  • The phrase “jagged technological frontier” originates in a field experiment by Dell’Acqua, Mollick, Lakhani and colleagues, which analysed how generative AI affects the work of professional consultants.4,7
  • In that paper, they showed empirically that AI dramatically boosts performance on some realistic tasks while offering little benefit or even degrading performance on others, despite similar apparent difficulty – and they coined the term to capture that boundary.7
  • Mollick then popularised and extended the idea in widely read essays such as “Centaurs and Cyborgs on the Jagged Frontier” and later pieces on the shape of AI, jaggedness, bottlenecks, and salients, bringing the concept into mainstream management and strategy discourse.2,3,4

In his writing and teaching, Mollick uses the “jagged frontier” to:

  • Argue that jobs are not simply automated away; instead, they are recomposed into tasks that AI does, tasks that humans retain, and tasks where human–AI collaboration is superior.2,3
  • Introduce the metaphors of “centaurs” (humans and AI dividing tasks) and “cyborgs” (tightly integrated human–AI workflows) as strategies for operating on this frontier.3
  • Emphasise that the jagged shape creates both opportunities (rapid acceleration of some activities) and constraints (persistent need for human oversight and design), which leaders must explicitly map and manage.2,3,4

In this sense, Mollick functions as a strategy theorist of the jagged edge: he connects the underlying technical phenomenon (uneven capability) with organisational design, skills, and competitive advantage, offering a practical framework for firms deciding where and how to deploy AI.

Biography and relevance to AI strategy

  • Academic role
    Ethan Mollick is an Associate Professor of Management at the Wharton School of the University of Pennsylvania, specialising in entrepreneurship, innovation, and the impact of new technologies on work and organisations.7
    His early research focused on start-ups, crowdfunding and innovation processes, before shifting towards generative AI and its effects on knowledge work, where he now runs some of the most cited field experiments.

  • Research on AI and work
    Mollick has co-authored multiple studies examining how generative AI changes productivity, quality and inequality in real jobs.
    In the “Navigating the Jagged Technological Frontier” experiment, his team placed consultants in realistic tasks with and without AI and showed that:

  • For tasks inside AI’s frontier, consultants using AI were more productive (12.2% more tasks, 25.1% faster) and produced over 40% higher quality output.7

  • For tasks outside the frontier, the benefits were weaker or absent, highlighting the risk of over-reliance where AI is brittle.7
    This empirical demonstration is central to the modern understanding of the jagged edge as a strategic boundary rather than a purely technical curiosity.

  • Public intellectual and practitioner bridge
    Through his “One Useful Thing” publication and executive teaching, Mollick translates these findings into actionable guidance for leaders, including:

  • How to design workflows that align with AI’s jagged profile,

  • How to structure human–AI collaboration modes, and

  • How to build organisational capabilities (training, policies, experimentation) to keep pace as the frontier moves.2,3,4

  • Strategic perspective
    Mollick frames the jagged frontier as a continuously shifting strategic landscape:

  • Companies that map and exploit the protruding “towers” of AI strength can gain significant productivity and innovation advantages.

  • Those that ignore or misread the “recesses” – the weak edges – risk compliance failures, reputational harm, or operational fragility when they automate tasks that still require human judgement.2,4,7

For organisations grappling with the jagged edge of AI, Mollick’s work offers a coherent strategy lens: treat AI not as a monolithic capability but as a jagged, moving frontier; build hybrid systems that respect its limits; and invest in human skills and structures that can adapt as that edge advances and reshapes.

References

1. https://www.salesforce.com/blog/jagged-intelligence/

2. https://www.oneusefulthing.org/p/the-shape-of-ai-jaggedness-bottlenecks

3. https://www.oneusefulthing.org/p/centaurs-and-cyborgs-on-the-jagged

4. https://libguides.okanagan.bc.ca/c.php?g=743006&p=5383248

5. https://edrm.net/2024/10/navigating-the-ai-frontier-balancing-breakthroughs-and-blind-spots/

6. https://drphilippahardman.substack.com/p/defining-and-navigating-the-jagged

7. https://www.hbs.edu/faculty/Pages/item.aspx?num=64700

8. https://daedalusfutures.com/latest/f/life-at-the-jagged-edge-of-ai

Term: Vibe coding

“Vibe coding is an AI-driven software development approach where users describe desired app features in natural language (the “vibe”), and a Large Language Model (LLM) generates the functional code.” – Vibe coding

Vibe coding is an AI-assisted software development technique where developers describe project goals or features in natural language prompts to a large language model (LLM), which generates the source code; the developer then evaluates functionality through testing and iteration without reviewing, editing, or fully understanding the code itself.1,2

This approach, distinct from traditional AI pair programming or code assistants, emphasises “giving in to the vibes” by focusing on outcomes, rapid prototyping, and conversational refinement rather than code structure or correctness.1,3 Developers act as prompters, guides, testers, and refiners, shifting from manual implementation to high-level direction—e.g., instructing an LLM to “create a user login form” for instant code generation.2 It operates on two levels: a tight iterative loop for refining specific code via feedback, and a broader lifecycle from concept to deployed app.2

Key characteristics include:

  • Natural language as input: Builds on the idea that “the hottest new programming language is English,” bypassing syntax knowledge.1
  • No code inspection: Accepting AI output blindly, verified only by execution results—programmer Simon Willison notes that reviewing code makes it mere “LLM as typing assistant,” not true vibe coding.1
  • Applications: Ideal for prototypes (e.g., Andrej Karpathy’s MenuGen), proofs-of-concept, experimentation, and automating repetitive tasks; less suited for production without added review.1,3
  • Comparisons to traditional coding:
Feature | Traditional Programming | Vibe Coding
Code Creation | Manual, line-by-line | AI-generated from prompts2
Developer Role | Architect, implementer, debugger | Prompter, tester, refiner2,3
Expertise Required | High (languages, syntax) | Lower (functional goals)2
Speed | Slower, methodical | Faster for prototypes2
Error Handling | Manual debugging | Conversational feedback2
Maintainability | Relies on skill and practices | Depends on AI quality and testing2,3

Tools supporting vibe coding include Google AI Studio for prompt-to-app prototyping, Firebase Studio for app blueprints, Gemini Code Assist for IDE integration, GitHub Copilot, and Microsoft offerings—lowering barriers for non-experts while boosting pro efficiency.2,3 Critics highlight risks like unmaintainable code or security issues in production, stressing the need for human oversight.3,6
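A minimal sketch of the tight iterative loop described above, assuming a hypothetical `generate_code` function in place of a real LLM client; executing unreviewed model output like this is exactly the production risk critics highlight.

```python
# Minimal sketch of the vibe-coding loop described above: prompt an LLM for
# code, run it, and feed any error back as the next prompt. `generate_code`
# is a hypothetical stand-in for a real LLM client; running unreviewed code
# like this is precisely the production risk critics point to.
import traceback

def generate_code(prompt: str) -> str:
    """Placeholder LLM call; a real client would return model-written code."""
    return "def add(a, b):\n    return a + b\n\nprint(add(2, 3))"

def vibe_loop(goal: str, max_rounds: int = 3) -> None:
    prompt = goal
    for round_no in range(1, max_rounds + 1):
        code = generate_code(prompt)
        try:
            exec(compile(code, "<generated>", "exec"), {})  # run the "vibe"
            print(f"Round {round_no}: code ran without errors.")
            return
        except Exception:
            # Conversational refinement: hand the traceback back to the model.
            prompt = f"{goal}\n\nThe previous code failed with:\n{traceback.format_exc()}"
    print("Gave up after", max_rounds, "rounds.")

if __name__ == "__main__":
    vibe_loop("Write a Python function that adds two numbers and prints add(2, 3).")
```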

Best related strategy theorist: Andrej Karpathy. Karpathy coined “vibe coding” in February 2025 via a widely shared post, describing it as “fully giv[ing] in to the vibes, embrac[ing] exponentials, and forget[ting] that the code even exists”—exemplified by his MenuGen prototype, built entirely via LLM prompts with natural language feedback.1 This built on his 2023 claim that English supplants programming languages due to LLM prowess.1

Born in 1986 in Bratislava, Czechoslovakia (now Slovakia), Karpathy moved to Canada as a teenager and earned a BSc in Computer Science and Physics from the University of Toronto (2009), an MSc from the University of British Columbia (2011), and a PhD from Stanford University under Fei-Fei Li, focusing on connecting computer vision and natural language (image captioning); his char-RNN side project popularised recurrent neural networks for text generation.1 Post-PhD, he was a founding research scientist at OpenAI (2015–2017), then Director of AI at Tesla (2017–2022), leading Autopilot vision and scaling ConvNets to massive video data for self-driving cars. He briefly rejoined OpenAI in 2023 before departing in 2024 to launch Eureka Labs (AI education) and advise AI firms.1,3 Karpathy’s career embodies scaling AI paradigms, making vibe coding a logical evolution: from low-level models to natural language commanding complex software, democratising development while embracing AI’s “exponentials.”1,2,3

References

1. https://en.wikipedia.org/wiki/Vibe_coding

2. https://cloud.google.com/discover/what-is-vibe-coding

3. https://news.microsoft.com/source/features/ai/vibe-coding-and-other-ways-ai-is-changing-who-can-build-apps-and-how/

4. https://www.ibm.com/think/topics/vibe-coding

5. https://aistudio.google.com/vibe-code

6. https://stackoverflow.blog/2026/01/02/a-new-worst-coder-has-entered-the-chat-vibe-coding-without-code-knowledge/

7. https://uxplanet.org/i-tested-5-ai-coding-tools-so-you-dont-have-to-b229d4b1a324

Term: Context engineering

“Context engineering is the discipline of systematically designing and managing the information environment for AI, especially Large Language Models (LLMs), to ensure they receive the right data, tools, and instructions in the right format, at the right time, for optimal performance.” – Context engineering

Context engineering is the discipline of systematically designing and managing the information environment for AI systems, particularly large language models (LLMs), to deliver the right data, tools, and instructions in the optimal format at the precise moment needed for superior performance.1,3,5

Comprehensive Definition

Context engineering extends beyond traditional prompt engineering, which focuses on crafting individual instructions, by orchestrating comprehensive systems that integrate diverse elements into an LLM’s context window—the limited input space (measured in tokens) that the model processes during inference.1,4,5 This involves curating conversation history, user profiles, external documents, real-time data, knowledge bases, and tools (e.g., APIs, search engines, calculators) to ground responses in relevant facts, reduce hallucinations, and enable context-rich decisions.1,2,3

Key components include:

  • Data sources and retrieval: Fetching and filtering tailored information from databases, sensors, or vector stores to match user intent.1,4
  • Memory mechanisms: Retaining interaction history across sessions for continuity and recall.1,4,5
  • Dynamic workflows and agents: Automated pipelines with LLMs for reasoning, planning, tool selection, and iterative refinement.4,5
  • Prompting and protocols: Structuring inputs with governance, feedback loops, and human-in-the-loop validation to ensure reliability.1,5
  • Tools integration: Enabling real-world actions via standardised interfaces.1,3,4

Gartner defines it as “designing and structuring the relevant data, workflows and environment so AI systems can understand intent, make better decisions and deliver contextual, enterprise-aligned outcomes—without relying on manual prompts.”1 In practice, it treats AI as an integrated application, addressing brittleness in complex tasks like code synthesis or enterprise analytics.1
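As an illustration of that orchestration idea, the sketch below packs context elements into a token-budgeted prompt in priority order; the element names are invented and the token count is a crude word-count approximation, not any particular framework’s API.

```python
# Illustrative sketch only: packing context elements into a budgeted prompt,
# in priority order, as context engineering frameworks describe. The token
# count is approximated by whitespace word count; element names are invented.
def approx_tokens(text: str) -> int:
    return len(text.split())

def assemble_context(elements, budget: int = 500) -> str:
    """elements: list of (priority, label, text); lower priority packs first."""
    packed, used = [], 0
    for _, label, text in sorted(elements):
        cost = approx_tokens(text)
        if used + cost > budget:
            continue  # in practice, drop or summarise lower-priority material
        packed.append(f"### {label}\n{text}")
        used += cost
    return "\n\n".join(packed)

elements = [
    (0, "System instructions", "You are a support assistant. Answer from the documents."),
    (1, "Retrieved document", "Refund policy: purchases can be returned within 30 days."),
    (2, "Conversation memory", "User previously asked about order #1234."),
    (3, "Tool description", "search_orders(order_id) -> order status record."),
]

print(assemble_context(elements, budget=60))
```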

The Six Pillars of Context Engineering

As outlined in technical frameworks, these interdependent elements form the core architecture:4

  • Agents: Orchestrate tasks, decisions, and tool usage.
  • Query augmentation: Refine inputs for precision.
  • Retrieval: Connect to external knowledge bases.
  • Prompting: Guide model reasoning.
  • Memory: Preserve history and state.
  • Tools: Facilitate actions beyond generation.

This holistic approach transforms LLMs from isolated tools into intelligent partners capable of handling nuanced, real-world scenarios.1,3
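To make the Retrieval pillar concrete, here is a toy, self-contained stand-in: real systems use learned embeddings and a vector database, whereas this sketch ranks documents with simple bag-of-words cosine similarity.

```python
# Toy illustration of the "Retrieval" pillar: rank documents against a query.
# Real systems use learned embeddings and a vector database; this stand-in
# uses bag-of-words vectors so the example stays self-contained.
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

DOCS = [
    "Refunds are issued within 30 days of purchase.",
    "Shipping takes three to five business days.",
    "Our office is closed on public holidays.",
]

def retrieve(query: str, k: int = 2):
    scored = sorted(DOCS, key=lambda d: cosine(embed(query), embed(d)), reverse=True)
    return scored[:k]

print(retrieve("how long do refunds take"))
```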

Best Related Strategy Theorist: Christian Szegedy

Christian Szegedy, a pioneering deep learning researcher, is the strategist most closely associated with context engineering due to his foundational work on deep neural network architectures, the line of research that led to the attention-based LLMs which dynamically weigh and manage context for optimal inference.1

Biography

Born in Hungary, Szegedy earned a PhD in applied mathematics from the University of Bonn before moving into industrial research and, later, machine learning. At Google Research he advanced deep learning for computer vision, co-authoring the seminal 2014 paper “Going Deeper with Convolutions” (the Inception architecture), which introduced multi-scale processing to capture contextual hierarchies in images, along with influential work on batch normalisation and on adversarial examples (“Intriguing properties of neural networks”).

His follow-up paper “Rethinking the Inception Architecture for Computer Vision” (2015) refined how networks allocate capacity across scales, and his later research turned to autoformalization, using large language models to translate informal mathematics into machine-checkable proofs, work whose success depends directly on supplying models with exactly the right supporting context. (The attention mechanism itself was introduced by Vaswani et al. in “Attention Is All You Need”, 2017.)

Relationship to Context Engineering

Attention-based LLMs underpin context engineering by prioritising “the right information at the right time” within token limits, scaling from static prompts to dynamic systems with retrieval, memory, and tools.3,4,5 In agentic workflows, attention over a carefully curated, evolving context (e.g., filtered agent trajectories) is what makes these systems reliable, as seen in Anthropic’s strategies.5 Szegedy’s architectural research and his autoformalization work, where everything hinges on feeding models precisely the right formal context, exemplify treating context as a “first-class design element”, the stance that has evolved prompt engineering into the systemic discipline now termed context engineering.1 After leaving Google, Szegedy joined the founding team of xAI in 2023 and continues to pursue research on scalable automated reasoning.

References

1. https://intuitionlabs.ai/articles/what-is-context-engineering

2. https://ramp.com/blog/what-is-context-engineering

3. https://www.philschmid.de/context-engineering

4. https://weaviate.io/blog/context-engineering

5. https://www.anthropic.com/engineering/effective-context-engineering-for-ai-agents

6. https://www.llamaindex.ai/blog/context-engineering-what-it-is-and-techniques-to-consider

7. https://blog.langchain.com/context-engineering-for-agents/

Term: Prompt engineering

“Prompt engineering is the practice of designing, refining, and optimizing the instructions (prompts) given to generative AI models to guide them into producing accurate, relevant, and desired outputs.” – Prompt engineering

Prompt engineering is the practice of designing, refining, and optimising instructions—known as prompts—given to generative AI models, particularly large language models (LLMs), to elicit accurate, relevant, and desired outputs.1,2,3,7

This process involves creativity, trial and error, and iterative refinement of phrasing, context, formats, words, and symbols to guide AI behaviour effectively, making applications more efficient, flexible, and capable of handling complex tasks.1,4,5 Without precise prompts, generative AI often produces generic or suboptimal responses, as models lack fixed commands and rely heavily on input structure to interpret intent.3,6

Key Benefits

  • Improved user experience: Users receive coherent, bias-mitigated responses even with minimal input, such as tailored summaries for legal documents versus news articles.1
  • Increased flexibility: Domain-neutral prompts enable reuse across processes, like identifying inefficiencies in business units without context-specific data.1
  • Subject matter expertise: Prompts direct AI to reference correct sources, e.g., generating medical differential diagnoses from symptoms.1
  • Enhanced security: Helps mitigate prompt injection attacks by refining logic in services like chatbots.2

Core Techniques

  • Generated knowledge prompting: AI first generates relevant facts (e.g., deforestation effects like climate change and biodiversity loss) before completing tasks like essay writing.1
  • Contextual refinement: Adding role-playing (e.g., “You are a sales assistant”), location, or specifics to vague queries like “Where to purchase a shirt.”1,5
  • Iterative testing: Trial-and-error to optimise for accuracy, often encapsulated in base prompts for scalable apps.2,5

Prompt engineering bridges end-user inputs with models, acting as a skill for developers and a step in AI workflows, applicable in fields like healthcare, cybersecurity, and customer service.2,5
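As a hedged illustration of two techniques above, role assignment and generated knowledge prompting, the sketch below chains two calls through a placeholder `llm` function; it is not tied to any specific provider’s API.

```python
# Sketch of two techniques described above: role assignment and generated
# knowledge prompting (facts first, then the task). `llm` is a placeholder
# for a real model call and simply echoes canned text here.
def llm(prompt: str) -> str:
    return "(model output for: " + prompt[:40] + "...)"

ROLE = "You are an environmental science tutor."

# Step 1: have the model generate relevant knowledge first.
knowledge = llm(f"{ROLE}\nList three well-established effects of deforestation.")

# Step 2: feed that knowledge back as context for the actual task.
essay_prompt = (
    f"{ROLE}\n"
    f"Using only the facts below, write a 150-word essay on deforestation.\n"
    f"Facts:\n{knowledge}"
)
print(llm(essay_prompt))
```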

Best Related Strategy Theorist: Lilian Weng

Lilian Weng, who led safety research at OpenAI before joining the AI startup Thinking Machines Lab in 2025, stands out as the premier theorist linking prompt engineering to strategic AI deployment. Her widely cited 2023 post “Prompt Engineering” on her Lil’Log blog systematised techniques like chain-of-thought prompting, few-shot learning, and self-consistency, providing a foundational framework that influenced industry practices and guides from AWS to Google Cloud.1,4

Weng’s relationship to the term stems from her role in advancing reliable LLM interactions post-ChatGPT’s 2022 launch. At OpenAI, she pioneered safety-aligned prompting strategies, addressing hallucinations and biases—core challenges in generative AI—making her work indispensable for enterprise-scale optimisation.1,2 Her guide emphasises strategic structuring (e.g., role assignment, step-by-step reasoning) as a “roadmap” for desired outputs, directly shaping modern definitions and techniques like generated knowledge prompting.1,4

Biography: Born in China, Weng earned a PhD from Indiana University Bloomington, where her research focused on information diffusion in social networks. She joined OpenAI in 2018 as a research scientist, rising to lead its safety systems work as the company scaled, and departed in late 2024, subsequently joining Thinking Machines Lab. Her contributions extend to widely read essays on LLM agents, emergent abilities, and AI alignment, with her Lil’Log posts on prompting serving as standard references for practitioners. She continues to shape safety-minded AI strategy, blending theoretical rigour with practical engineering.7

References

1. https://aws.amazon.com/what-is/prompt-engineering/

2. https://www.coursera.org/articles/what-is-prompt-engineering

3. https://uit.stanford.edu/service/techtraining/ai-demystified/prompt-engineering

4. https://cloud.google.com/discover/what-is-prompt-engineering

5. https://www.oracle.com/artificial-intelligence/prompt-engineering/

6. https://genai.byu.edu/prompt-engineering

7. https://en.wikipedia.org/wiki/Prompt_engineering

8. https://www.ibm.com/think/topics/prompt-engineering

9. https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-prompt-engineering

10. https://github.com/resources/articles/what-is-prompt-engineering

Term: Acquihire

“An acquihire (acquisition + hire) is a business strategy where a company buys another, smaller company primarily for its talented employees, rather than its products or technology, often to quickly gain skilled teams.” – Acquihire

An acquihire (a portmanteau of “acquisition” and “hire”) is a business strategy in which a larger company acquires a smaller firm, such as a startup, primarily to recruit its skilled employees or entire teams, rather than for its products, services, technology, or customer base.1,2,3,7 This approach enables rapid talent acquisition, often bypassing traditional hiring processes, while the acquired company’s offerings are typically deprioritised or discontinued post-deal.1,4,7

Key Characteristics and Process

Acquihires emphasise human capital over tangible assets, with the acquiring firm integrating the talent to fill skill gaps, drive innovation, or enhance competitiveness—particularly in tech sectors where specialised expertise like AI or engineering is scarce.1,2,6 The process generally unfolds in structured stages:

  • Identifying needs and targets: The acquirer conducts a skills gap analysis and scouts startups with aligned, high-performing teams via networks or advisors.2,3,6
  • Due diligence and negotiation: Focus shifts to talent assessment, cultural fit, retention incentives, and compensation, rather than product valuation; deals often include retention bonuses.3,6
  • Integration: Acquired employees transition into the larger firm, leveraging its resources for stability and scaled projects, though risks like cultural clashes or talent loss exist.1,3

For startups, acquihires provide an exit amid funding shortages, offering employees better opportunities, while acquirers gain entrepreneurial spirit and eliminate nascent competition.1,7

Strategic Benefits and Drawbacks

Aspect | Benefits for Acquirer | Benefits for Acquired Firm/Team | Potential Drawbacks
Talent Access | Swift onboarding of proven teams, infusing fresh ideas1,2 | Stability, resources, career growth1 | High costs if talent departs post-deal3
Speed | Faster than individual hires4,6 | Liquidity for founders/investors4 | Products often shelved, eroding startup value7
Competition | Neutralises rivals1,7 | Access to larger markets1 | Cultural mismatches3

Acquihires surged in Silicon Valley post-2008, with valuations tied to per-engineer pricing (e.g., $1–2 million per key hire).7

Best Related Strategy Theorist: Mark Zuckerberg

Mark Zuckerberg, CEO of Meta (formerly Facebook), stands out as the preeminent figure linked to acquihiring, having pioneered its strategic deployment to preserve startup agility within a scaling giant.7 His philosophy framed acquihires as dual tools for talent infusion and cultural retention, explicitly stating that “hiring entrepreneurs helped Facebook retain its start-up culture.”7

Biography and Backstory: Born in 1984 in New York, Zuckerberg co-founded Facebook in 2004 from his Harvard dorm, launching a platform that redefined social networking and grew to billions of users.7 By the late 2000s, as Facebook ballooned, it faced talent wars and innovation plateaus amid competition from nimble startups. Zuckerberg championed acquihires as a counter-strategy, masterminding over 50 such deals totalling hundreds of millions—exemplars include:

  • FriendFeed (2009, ~$50 million): Hired founder Bret Taylor (ex-Google) as CTO, injecting search and product expertise.7
  • Chai Labs (2010): Recruited Gokul Rajaram for product innovation.7
  • Beluga (2011, ~$10 million): Its team built Facebook Messenger, launched within months to Facebook’s then 750-million-strong user base.7
  • Others like Drop.io (Sam Lessin) and Rel8tion (Peter Wilson), exceeding $67 million combined.7

These moves exemplified three motives Zuckerberg articulated: strategic (elevating founders to leadership), innovation (rapid feature development), and product enhancement.7 Unlike traditional M&A, his acquihires prioritised “acqui-hiring” founders into high roles, fostering Meta’s entrepreneurial ethos amid explosive growth. Critics note antitrust scrutiny (e.g., Instagram, WhatsApp debates), but Zuckerberg’s playbook influenced tech giants like Google and Apple, cementing acquihiring as a core talent strategy.7 His approach evolved with Meta’s empire-building, blending opportunism with long-term vision.

References

1. https://mightyfinancial.com/glossary/acquihire/

2. https://allegrow.com/acquire-hire-strategies/

3. https://velocityglobal.com/resources/blog/acquihire-process

4. https://visible.vc/blog/acquihire/

5. https://eqvista.com/acqui-hire-an-effective-talent-acquisition-strategy/

6. https://wowremoteteams.com/glossary-term/acqui-hiring/

7. https://en.wikipedia.org/wiki/Acqui-hiring

8. https://a16z.com/the-complete-guide-to-acquihires/

9. https://www.mascience.com/podcast/executing-acquihires

Term: Tensor Processing Unit (TPU)

“A Tensor Processing Unit (TPU) is an application-specific integrated circuit (ASIC) custom-designed by Google to accelerate machine learning (ML) and artificial intelligence (AI) workloads, especially those involving neural networks.” – Tensor Processing Unit (TPU)

A Tensor Processing Unit (TPU) is an application-specific integrated circuit (ASIC) custom-designed by Google to accelerate machine learning (ML) and artificial intelligence (AI) workloads, particularly those involving neural networks and matrix multiplication operations.1,2,4,6

Core Architecture and Functionality

TPUs excel at high-throughput, parallel processing of mathematical tasks such as multiply-accumulate (MAC) operations, which form the backbone of neural network training and inference. Each TPU features a Matrix Multiply Unit (MXU)—a systolic array of arithmetic logic units (ALUs), typically configured as 128×128 or 256×256 grids—that performs thousands of MAC operations per clock cycle using formats like 8-bit integers, BFloat16, or floating-point arithmetic.1,2,5,9 Supporting components include a Vector Processing Unit (VPU) for non-linear activations (e.g., ReLU, sigmoid) and High Bandwidth Memory (HBM) to minimise data bottlenecks by enabling rapid data retrieval and storage.2,5
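To make the MAC arithmetic concrete, the toy sketch below multiplies two small matrices and counts the multiply-accumulate operations; a real MXU performs this in hardware, streaming operands through a 128×128 or 256×256 systolic grid every clock cycle rather than looping in software.

```python
# Conceptual sketch of the multiply-accumulate (MAC) work an MXU performs.
# A real TPU does this in hardware, streaming operands through a 128x128 or
# 256x256 systolic grid each cycle; this scalar Python only shows the maths.
def matmul_mac(A, B):
    n, k, m = len(A), len(B), len(B[0])
    C = [[0] * m for _ in range(n)]
    macs = 0
    for i in range(n):
        for j in range(m):
            acc = 0
            for p in range(k):
                acc += A[i][p] * B[p][j]  # one multiply-accumulate
                macs += 1
            C[i][j] = acc
    return C, macs

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C, macs = matmul_mac(A, B)
print(C, f"({macs} MAC operations)")
```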

Unlike general-purpose CPUs or even GPUs, TPUs are purpose-built for ML models relying on matrix processing, large batch sizes, and extended training periods (e.g., weeks for convolutional neural networks), offering superior efficiency in power consumption and speed for tasks like image recognition, natural language processing, and generative AI.1,3,6 They integrate seamlessly with frameworks such as TensorFlow, JAX, and PyTorch, processing input data as vectors in parallel before outputting results to ML models.1,4

Key Applications and Deployment

  • Cloud Computing: TPUs power Google Cloud Platform (GCP) services for AI workloads, including chatbots, recommendation engines, speech synthesis, computer vision, and products like Google Search, Maps, Photos, and Gemini.1,2,3
  • Edge Computing: Suitable for real-time ML at data sources, such as IoT in factories or autonomous vehicles, where high-throughput matrix operations are needed.1
TPUs support both training (e.g., model development) and inference (e.g., predictions on new data), with pods scaling to thousands of chips for massive workloads.6,7

Development History

Google developed TPUs internally from 2015 for TensorFlow-based neural networks, deploying them in data centres before releasing versions for third-party use via GCP in 2018.1,4 Evolution includes shifts in array sizes (e.g., v1: 256×256 on 8-bit integers; later versions: 128×128 on BFloat16; v6: back to 256×256) and proprietary interconnects for enhanced scalability.5,6

Best Related Strategy Theorist: Norman Foster Ramsey

The strategy theorist linked here to TPU development is Norman Foster Ramsey (1915–2011), a Nobel Prize-winning physicist whose method of separated oscillatory fields, a technique for precisely controlling atomic transitions using microwave pulses separated in space and time, offers a conceptual parallel to the tightly clocked, lockstep data movement of the TPU’s systolic MXU grids, in which thousands of MAC operations advance in unison each cycle.5 The connection is analogical rather than historical: Ramsey’s emphasis on preserving coherence (avoiding information loss) in delicate quantum systems mirrors the TPU’s design goal of minimising data movement to maximise energy efficiency and throughput.

Biography and Relationship to the Term: Born in Washington, D.C., Ramsey earned his PhD from Columbia University in 1940 under I.I. Rabi, focusing on molecular beams and magnetic resonance. During World War II, he contributed to radar research at MIT’s Radiation Laboratory and to the atomic bomb programme. Post-war, as a Harvard professor (1947–1986), he pioneered the Ramsey method of separated oscillatory fields, earning the 1989 Nobel Prize in Physics for work that enabled atomic clocks and later served as a primitive of quantum information science. His relationship to the TPU is best read as an intellectual analogy: the chip’s weight-stationary systolic arrays keep data flowing in precisely timed waves with minimal movement, echoing Ramsey’s insight that careful orchestration of when and where signals interact preserves information and precision. Ramsey lived to 96, authoring over 250 papers and mentoring generations of physicists.1,5

References

1. https://www.techtarget.com/whatis/definition/tensor-processing-unit-TPU

2. https://builtin.com/articles/tensor-processing-unit-tpu

3. https://www.iterate.ai/ai-glossary/what-is-tpu-tensor-processing-unit

4. https://en.wikipedia.org/wiki/Tensor_Processing_Unit

5. https://blog.bytebytego.com/p/how-googles-tensor-processing-unit

6. https://cloud.google.com/tpu

7. https://docs.cloud.google.com/tpu/docs/intro-to-tpu

8. https://www.youtube.com/watch?v=GKQz4-esU5M

9. https://lightning.ai/docs/pytorch/1.6.2/accelerators/tpu.html

Term: Forward Deployed Engineer (FDE)

“An AI Forward Deployed Engineer (FDE) is a technical expert embedded directly within a client’s environment to implement, customise, and operationalize complex AI/ML products, acting as a bridge between core engineering and customer needs.” – Forward Deployed Engineer (FDE)

A Forward Deployed Engineer (FDE) is a highly skilled technical specialist embedded directly within a client’s environment to implement, customise, deploy, and operationalise complex software or AI/ML products, serving as a critical bridge between core engineering teams and customer-specific needs.1,2,5 This hands-on, customer-facing role combines software engineering, solution architecture, and technical consulting to translate business workflows into production-ready solutions, often involving rapid prototyping, integrations with legacy systems (e.g., CRMs, ERPs, HRIS), and troubleshooting in real-world settings.1,2,3

Key Responsibilities

  • Collaborate directly with enterprise customers to understand workflows, scope use cases, and design tailored AI agent or GenAI solutions.1,3,5
  • Lead deployment, integration, and configuration in diverse environments (cloud, on-prem, hybrid), including APIs, OAuth, webhooks, and production-grade interfaces.1,2,4
  • Build end-to-end workflows, operationalise LLM/SLM-based systems (e.g., RAG, vector search, multi-agent orchestration), and iterate for scalability, performance, and user adoption.1,5,6
  • Act as a liaison to product/engineering teams, feeding back insights, proposing features, and influencing roadmaps while conducting workshops, audits, and go-lives.1,3,7
  • Debug live issues, document implementations, and ensure compliance with IT/security requirements like data residency and logging.1,2

Essential Skills and Qualifications

  • Technical Expertise: Proficiency in Python, Node.js, or Java; cloud platforms (AWS, Azure, GCP); REST APIs; and GenAI tools (e.g., LangChain, HuggingFace, DSPy).1,6
  • AI/ML Fluency: Experience with LLMs, agentic workflows, fine-tuning, Text2SQL, and evaluation/optimisation for production.5,6,7
  • Soft Skills: Strong communication for executive presentations, problem-solving in ambiguous settings, and willingness for international travel (e.g., US/Europe).1,2
  • Experience: Typically 10+ years in enterprise software, with exposure to domains like healthcare, finance, or customer service; startup or consulting background preferred.1,7

FDEs differ from traditional support or sales engineering roles by writing production code, owning outcomes like a “hands-on AI startup CTO,” and enabling scalable AI delivery in complex enterprises.2,5,7 In the AI era, they excel as architects of agentic operations, leveraging AI for diagnostics, automation, and pattern identification to accelerate value realisation.7
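As a hedged illustration of the evaluation work implied above, the sketch below shows the kind of minimal pre-go-live harness an FDE might run: canned test cases pushed through the deployed agent and checked for required content. `ask_agent` is a hypothetical stand-in for the client’s actual endpoint.

```python
# Minimal sketch of a pre-go-live evaluation harness an FDE might run:
# canned test cases pushed through the deployed agent and checked for
# required content. `ask_agent` is a hypothetical stand-in for the client's
# actual endpoint.
def ask_agent(question: str) -> str:
    return "Refunds are processed within 30 days."  # stubbed response

TEST_CASES = [
    ("How long do refunds take?", ["30 days"]),
    ("What is the refund window?", ["30 days"]),
]

def run_evals(cases):
    passed = 0
    for question, must_contain in cases:
        answer = ask_agent(question)
        ok = all(phrase.lower() in answer.lower() for phrase in must_contain)
        passed += ok
        print(("PASS" if ok else "FAIL"), question)
    print(f"{passed}/{len(cases)} passed")

run_evals(TEST_CASES)
```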

Best Related Strategy Theorist: Clayton Christensen

The concept of the Forward Deployed Engineer aligns most closely with Clayton Christensen (1952–2020), the Harvard Business School professor renowned for pioneering disruptive innovation theory, which emphasises how customer-embedded adaptation drives technology adoption and market disruption—mirroring the FDE’s role in customising complex AI products for real-world fit.2,7

Biography and Backstory: Born in Salt Lake City, Utah, Christensen earned a BA in economics from Brigham Young University, an MPhil from Oxford as a Rhodes Scholar, and an MBA and DBA from Harvard. After consulting at BCG, he joined the Harvard faculty in 1992 and later co-founded the innovation consultancy Innosight (2000), authoring seminal works like The Innovator’s Dilemma (1997), which argued that incumbents fail by ignoring “disruptive” technologies that initially underperform but evolve to dominate via iterative, customer-proximate improvements.8 His theories stemmed from studying disk drives and steel minimills, revealing how “listening to customers” in sustaining innovation traps firms, while forward-deployed experimentation in niche contexts enables breakthroughs.

Relationship to FDE: Christensen’s framework directly informs the FDE model, popularised by Palantir (inspired by military “forward deployment”) and scaled in AI firms like Scale AI and Databricks.5,6 FDEs embody disruptive deployment: embedded in client environments, they prototype and iterate solutions (e.g., GenAI agents) that bypass headquarters silos, much like disruptors refine products through “jobs to be done” in ambiguous, high-stakes settings.2,5,7 His later work on jobs-to-be-done theory and enterprise transformation (e.g., Competing Against Luck, 2016) underscores FDEs’ strategic pivot: turning customer feedback into product evolution, ensuring AI scales disruptively rather than generically.1,3

References

1. https://avaamo.ai/forward-deployed-engineer/

2. https://futurense.com/blog/fde-forward-deployed-engineers

3. https://theloops.io/career/forward-deployed-ai-engineer/

4. https://scale.com/careers/4593571005

5. https://jobs.lever.co/palantir/636fc05c-d348-4a06-be51-597cb9e07488

6. https://www.databricks.com/company/careers/professional-services-operations/ai-engineer—fde-forward-deployed-engineer-8024010002

7. https://www.rocketlane.com/blogs/forward-deployed-engineer

8. https://thomasotter.substack.com/p/wtf-is-a-forward-deployed-engineer

9. https://www.salesforce.com/blog/forward-deployed-engineer/

Term: Davos

“Davos refers to the annual, invitation-only meeting of global political, business, academic, and civil society leaders held every January in the Swiss Alpine town of Davos-Klosters. It acts as a premier, high-profile platform for discussing pressing global economic, social, and political issues.” – Davos

Davos represents far more than a simple annual conference; it embodies a transformative model of global governance and problem-solving that has evolved significantly since its inception. Held each January in the Swiss Alpine resort town of Davos-Klosters, this invitation-only gathering convenes over 2,500 leaders spanning business, government, civil society, academia, and media to address humanity’s most pressing challenges.1,7

The Evolution and Purpose of Davos

Founded in 1971 by German engineer Klaus Schwab as the European Management Symposium, Davos emerged from a singular vision: that businesses should serve all stakeholders (employees, suppliers, communities, and the broader society) rather than shareholders alone.1 This foundational concept, known as stakeholder theory, remains central to the World Economic Forum’s mission today.1 The organisation formalised this philosophy through the Davos Manifesto in 1973, which was substantially renewed in 2020 to address the challenges of the Fourth Industrial Revolution.1,3

The Forum’s evolution reflects a fundamental shift in how global problems are addressed. Rather than relying solely on traditional nation-state institutions established after the Second World War, such as the International Monetary Fund, World Bank, and United Nations, Davos pioneered what scholars term a “Networked Institution.”2 This model brings together independent parties from civil society, the private sector, government, and individual stakeholders who perceive shared global problems and coordinate their activities to make progress, rather than working competitively in isolation.2

Tangible Impact and Policy Outcomes

Davos has demonstrated concrete influence on global affairs. In 1988, Greece and Türkiye averted armed conflict through an agreement finalised at the meeting.1 The 1990s witnessed a historic handshake that helped end apartheid in South Africa, and the platform served as the venue for announcing the UN Global Compact, calling on companies to align operations with human rights principles.1 More recently, in 2023, the United States announced a new development fund programme at Davos, and global CEOs agreed to support a free trade agreement in Africa.1 The Forum also launched Gavi, the vaccine alliance, in 2000, an initiative that now helps vaccinate nearly half the world’s children and played a crucial role in delivering COVID-19 vaccines to vulnerable countries.6

The Davos Manifesto and Stakeholder Capitalism

The 2020 Davos Manifesto formally established that the World Economic Forum is guided by stakeholder capitalism, a concept positing that corporations should deliver value not only to shareholders but to all stakeholders, including employees, society, and the planet.3 This framework commits businesses to three interconnected responsibilities:

  • Acting as stewards of the environmental and material universe for future generations, protecting the biosphere and championing a circular, shared, and regenerative economy5
  • Responsibly managing near-term, medium-term, and long-term value creation in pursuit of sustainable shareholder returns that do not sacrifice the future for the present5
  • Fulfilling human and societal aspirations as part of the broader social system, measuring performance not only on shareholder returns but also on environmental, social, and governance objectives5

Contemporary Relevance and Structure

The World Economic Forum operates as an international not-for-profit organisation headquartered in Geneva, Switzerland, with formal institutional status granted by the Swiss government.2,3 Its mission is to improve the state of the world through public-private cooperation, guided by core values of integrity, impartiality, independence, respect, and excellence.8 The Forum addresses five interconnected global challenges: Growth, Geopolitics, Technology, People, and Planet.8

Davos functions as the touchstone event within the Forum’s year-round orchestration of leaders from civil society, business, and government.2 Beyond the annual meeting, the organisation maintains continuous engagement through year-round communities spanning industries, regions, and generations, transforming ideas into action through initiatives and dialogues.4 The 2026 meeting, themed “A Spirit Of Dialogue,” emphasises advancing cooperation to address global issues, exploring the impact of innovation and emerging technologies, and promoting inclusive, sustainable approaches to human capital development.7

Klaus Schwab: The Architect of Davos

Klaus Schwab (born 1938) stands as the visionary founder and defining intellectual force behind Davos and the World Economic Forum. A German engineer and economist who earned doctorates from ETH Zurich and the University of Fribourg and a master’s in public administration from Harvard, Schwab possessed an unusual conviction: that business leaders bore responsibility not merely to shareholders but to society writ large. This belief, radical for the early 1970s, crystallised into the founding of the European Management Symposium in 1971.

Schwab’s relationship with Davos transcends institutional leadership; he fundamentally shaped its philosophical architecture. His stakeholder theory challenged the prevailing shareholder primacy model that dominated Western capitalism, proposing instead that corporations exist within complex ecosystems of interdependence. This vision proved prescient, gaining mainstream acceptance only decades later as environmental concerns, social inequality, and governance failures exposed the limitations of pure shareholder capitalism.

Beyond founding the Forum, Schwab authored influential works including “The Fourth Industrial Revolution” (2016), a concept he coined to describe the convergence of digital, biological, and physical technologies reshaping society.1 His intellectual contributions extended the Forum’s reach from a business conference into a comprehensive platform addressing geopolitical tensions, technological disruption, and societal transformation. Schwab’s personal diplomacy, his ability to convene adversaries and facilitate dialogue, became embedded in Davos’s culture, establishing it as a neutral space where competitors and rivals could engage constructively.

Schwab’s legacy reflects a particular European sensibility: the belief that enlightened capitalism, properly structured around stakeholder interests, could serve as a force for global stability and progress. Whether one views this as visionary or naïve, his influence on contemporary governance models and corporate responsibility frameworks remains substantial. The expansion of Davos from a modest gathering of European executives to a global institution addressing humanity’s most complex challenges represents perhaps the most tangible measure of Schwab’s impact on twenty-first-century global affairs.

References

1. https://www.weforum.org/stories/2024/12/davos-annual-meeting-everything-you-need-to-know/

2. https://www.weforum.org/stories/2016/01/the-meaning-of-davos/

3. https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-davos-and-the-world-economic-forum

4. https://www.weforum.org/about/who-we-are/

5. https://en.wikipedia.org/wiki/World_Economic_Forum

6. https://www.zurich.com/media/magazine/2022/what-is-davos-your-guide-to-the-world-economic-forums-annual-meeting

7. https://www.oliverwyman.com/our-expertise/events/world-economic-forum-davos.html

8. https://www.weforum.org/about/world-economic-forum/

Term: Language Processing Unit (LPU)

“A Language Processing Unit (LPU) is a specialized processor designed specifically to accelerate tasks related to natural language processing (NLP) and the inference of large language models (LLMs). It is a purpose-built chip engineered to handle the unique demands of language tasks.” – Language Processing Unit (LPU)

A Language Processing Unit (LPU) is a specialised processor purpose-built to accelerate natural language processing (NLP) tasks, particularly the inference phase of large language models (LLMs), by optimising sequential data handling and memory bandwidth utilisation.1,2,3,4

Core Definition and Purpose

LPUs address the unique computational demands of language-based AI workloads, which involve sequential processing of text data—such as tokenisation, attention mechanisms, sequence modelling, and context handling—rather than the parallel computations suited to graphics processing units (GPUs).1,4,6 Unlike general-purpose CPUs (flexible but slow for deep learning) or GPUs (excellent for matrix operations and training but inefficient for NLP inference), LPUs prioritise low-latency, high-throughput inference for pre-trained LLMs, achieving up to 10x greater energy efficiency and substantially faster speeds.3,6

Key differentiators include:

  • Sequential optimisation: Designed for transformer-based models where data flows predictably, unlike GPUs’ parallel “hub-and-spoke” model that incurs data paging overhead.1,3,4
  • Deterministic execution: Every clock cycle is predictable, eliminating resource contention for compute and bandwidth.3
  • High scalability: Supports seamless chip-to-chip data “conveyor belts” without routers, enabling near-perfect scaling in multi-device systems.2,3
Processor | Key Strengths | Key Weaknesses | Best For6
CPU | Flexible, broadly compatible | Limited parallelism; slow for LLMs | General tasks
GPU | Parallel matrix operations; training support | Inefficient for sequential NLP inference | Broad AI workloads
LPU | Sequential NLP optimisation; fast inference; efficient memory use | Emerging; limited beyond language tasks | LLM inference

Architectural Features

LPUs typically employ a Tensor Streaming Processor (TSP) architecture, featuring software-controlled data pipelines that stream instructions and operands like an assembly line.1,3,7 Notable components include:

  • Local Memory Unit (LMU): Multi-bank register file for high-bandwidth scalar-vector access.2
  • Custom Instruction Set Architecture (ISA): Covers memory access (MEM), compute (COMP), networking (NET), and control instructions, with out-of-order execution for latency reduction.2
  • Expandable synchronisation links: Hide data sync overhead in distributed setups, yielding up to 1.75× speedup when doubling devices.2
  • No external memory like HBM; relies on on-chip SRAM (e.g., 230MB per chip) and massive core integration for billion-parameter models.2

Proprietary implementations, such as those in inference engines, maximise bandwidth utilisation (up to 90%) for high-speed text generation.1,2,3
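A back-of-envelope sketch of why memory bandwidth dominates autoregressive inference (the numbers below are illustrative assumptions, not vendor figures): each generated token must stream roughly the full set of model weights through the processor, so bandwidth caps tokens per second.

```python
# Back-of-envelope sketch (assumed numbers, not vendor figures): during
# autoregressive decode each new token must stream roughly the whole set of
# model weights, so tokens/second is bounded by memory bandwidth.
params = 8e9            # 8B-parameter model (assumption)
bytes_per_param = 2     # 16-bit weights
bandwidth = 80e12       # 80 TB/s aggregate on-chip SRAM bandwidth (assumption)

bytes_per_token = params * bytes_per_param
print(f"Upper bound: {bandwidth / bytes_per_token:,.0f} tokens/s per user")
```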

Best Related Strategy Theorist: Jonathan Ross

The foremost theorist linked to the LPU is Jonathan Ross, founder and CEO of Groq, the company he established in 2016 that invented and commercialised the LPU as a new processor category.1,3,4 Ross’s strategic vision reframed AI hardware strategy around deterministic, assembly-line architectures tailored to LLM inference bottlenecks—compute density and memory bandwidth—shifting from GPU dominance to purpose-built sequential processing.3,5,7

Biography and Relationship to LPU

Ross studied computer science at New York University before joining Google, where he began what became the Tensor Processing Unit (TPU) as a 20% project and helped design the first ASIC Google deployed at scale for ML inference, a chip that influenced hyperscale AI by prioritising efficiency over versatility and that exposed him to the limits of general-purpose accelerators for sequential workloads.

In 2016, Ross left Google to establish Groq, driven by the insight that GPUs were suboptimal for the emerging era of LLMs requiring ultra-low-latency inference.3,7 He strategically positioned the LPU as a “new class of processor,” introducing the Tensor Streaming Processor architecture and later making it broadly available through GroqCloud™, which powers real-time AI applications at speeds GPUs struggle to match.1,3 Ross’s backstory reflects a theorist-practitioner approach: his TPU experience exposed GPU limitations in sequential workloads, leading to the LPU’s conveyor-belt determinism and scalability, which are core to Groq’s market disruption, including partnerships for embedded AI.2,3 Under his leadership, Groq had raised well over $1 billion in cumulative funding by 2025, validating the LPU as a strategic pivot in AI infrastructure.3,4 Ross continues to advocate the LPU’s role in democratising fast, cost-effective inference through publications and public benchmarks.3,7

References

1. https://datanorth.ai/blog/gpu-lpu-npu-architectures

2. https://arxiv.org/html/2408.07326v1

3. https://groq.com/blog/the-groq-lpu-explained

4. https://www.purestorage.com/knowledge/what-is-lpu.html

5. https://www.turingpost.com/p/fod41

6. https://www.geeksforgeeks.org/nlp/what-are-language-processing-units-lpus/

7. https://blog.codingconfessions.com/p/groq-lpu-design

Term: GPU

“A Graphics Processing Unit (GPU) is a specialised processor designed for parallel computing tasks, excelling at handling thousands of threads simultaneously, unlike CPUs which prioritise sequential processing. It is widely used for AI.” – GPU

A Graphics Processing Unit (GPU) is a specialised electronic circuit designed to accelerate graphics rendering, image processing, and parallel mathematical computations by executing thousands of simpler operations simultaneously across numerous cores.1,2,4,6

Core Characteristics and Architecture

GPUs excel at parallel processing, dividing tasks into subsets handled concurrently by hundreds or thousands of smaller, specialised cores, in contrast to CPUs which prioritise sequential execution with fewer, more versatile cores.1,3,5,7 This architecture includes dedicated high-bandwidth memory (e.g., GDDR6) for rapid data access, enabling efficient handling of compute-intensive workloads like matrix multiplications essential for 3D graphics, video editing, and scientific simulations.2,5 Originally developed for rendering realistic 3D scenes in games and films, GPUs have evolved into programmable devices supporting general-purpose computing (GPGPU), where they process vector operations far faster than CPUs for suitable applications.1,6
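A conceptual sketch of that data-parallel model, with one “thread” per output element of a matrix product; plain Python walks the grid sequentially, whereas a GPU launches these element computations as thousands of concurrent hardware threads.

```python
# Conceptual sketch of the GPU programming model: one "thread" computes one
# output element of C = A x B. Python walks the grid sequentially; a GPU
# launches these element computations as thousands of concurrent threads.
def kernel(A, B, C, row, col):
    C[row][col] = sum(A[row][k] * B[k][col] for k in range(len(B)))

def launch(A, B):
    rows, cols = len(A), len(B[0])
    C = [[0] * cols for _ in range(rows)]
    for row in range(rows):          # on a GPU this whole grid runs in parallel
        for col in range(cols):
            kernel(A, B, C, row, col)
    return C

print(launch([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
```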

Historical Evolution and Key Applications

The modern GPU emerged in the 1990s, with Nvidia’s GeForce 256 in 1999 marking the first chip branded as a GPU, transforming fixed-function graphics hardware into flexible processors capable of shaders and custom computations.1,6 Today, GPUs power:

  • Gaming and media: High-resolution rendering and video processing.4,7
  • AI and machine learning: Accelerating neural networks via parallel floating-point operations, outperforming CPUs by orders of magnitude.1,3,5
  • High-performance computing (HPC): Data centres, blockchain, and simulations.1,2

Unlike neural processing units (NPUs), which optimise for low-latency AI with brain-like efficiency, GPUs prioritise raw parallel throughput for graphics and broad compute tasks.1

Best Related Strategy Theorist: Jensen Huang

Jensen Huang, co-founder, president, and CEO of Nvidia Corporation, is the preeminent figure linking GPUs to strategic technological dominance, having pioneered their shift from graphics to AI infrastructure.1

Biography: Born in 1963 in Taiwan, Huang immigrated to the US as a child, earning a BS in electrical engineering from Oregon State University (1984) and an MS from Stanford (1992). In 1993, at age 30, he co-founded Nvidia with Chris Malachowsky and Curtis Priem using $40,000, initially targeting 3D graphics acceleration amid the PC gaming boom. Under his leadership, Nvidia released the GeForce 256 in 1999—the first GPU—revolutionising real-time rendering and establishing market leadership.1,6 Huang’s strategic foresight extended GPUs beyond gaming via CUDA (2006), a platform enabling GPGPU for general computing, unlocking AI applications like deep learning.2,6 By 2026, Nvidia’s GPUs dominate AI training (e.g., via H100/H200 chips), propelling its market capitalisation past $3 trillion and Huang’s net worth beyond $100 billion, placing him among the wealthiest people in the world. His “all-in” bets—pivoting to AI during crypto winters and data centre shifts—exemplify visionary strategy, blending hardware innovation with ecosystem control (e.g., cuDNN libraries).1,5 Huang’s relationship to GPUs is foundational: as Nvidia’s architect, he defined their parallel architecture, foreseeing AI utility decades ahead and positioning GPUs as the “new CPU” for the AI era.3

References

1. https://www.ibm.com/think/topics/gpu

2. https://aws.amazon.com/what-is/gpu/

3. https://kempnerinstitute.harvard.edu/news/graphics-processing-units-and-artificial-intelligence/

4. https://www.arm.com/glossary/gpus

5. https://www.min.io/learn/graphics-processing-units

6. https://en.wikipedia.org/wiki/Graphics_processing_unit

7. https://www.supermicro.com/en/glossary/gpu

8. https://www.intel.com/content/www/us/en/products/docs/processors/what-is-a-gpu.html

read more
Term: K-shaped economy

Term: K-shaped economy

“A “K-shaped economy” describes a recovery or economic state where different segments of the population, industries, or wealth levels diverge drastically, resembling the letter ‘K’ on a graph: one part shoots up (wealthy, tech, capital owners), while another stagnates.” – K-shaped economy –

A K-shaped economy describes an uneven economic recovery or state following a downturn, where different segments—such as high-income earners, tech sectors, large corporations, and asset owners—experience strong growth (the upward arm of the ‘K’), while low-income groups, small businesses, low-skilled workers, younger generations, and debt-burdened households stagnate or decline (the downward arm).1,2,3,4

Key Characteristics

This divergence manifests across multiple dimensions:

  • Income and wealth levels: Higher-income individuals (top 10-20%) drive over 50% of consumption, benefiting from rising asset prices (e.g., stocks, real estate), while lower-income households face stagnating wages, unemployment, and delinquencies.3,4,6,7
  • Industries and sectors: Tech giants (e.g., ‘Magnificent 7’), AI infrastructure, and video conferencing boom, whereas tourism, small businesses, and labour-intensive sectors struggle due to high borrowing costs and weak demand.2,5,8
  • Generational and geographic splits: Younger consumers with debt face financial strain, contrasting with older, wealthier groups; urban tech hubs thrive while others lag.1,3
  • Policy influences: Post-2008 quantitative easing and pandemic fiscal measures favoured asset owners over broad growth, exacerbating inequality; central banks like the Federal Reserve face challenges from misleading unemployment data and uneven inflation.3,5

The pattern, prominent after the COVID-19 recession, contrasts with V-shaped (swift, even rebound) or U-shaped (gradual) recoveries, complicating stimulus efforts.2,4

Historical Context and Examples

  • Originated in discussions during the 2020 pandemic, popularised on social media and by economists such as Lisa D. Cook (later a Federal Reserve Governor).4
  • Reinforced by events like the 2008 financial crisis, where liquidity flooded assets without proportional wage growth.5
  • In 2025, it persists with AI-driven stock gains for the wealthy, minimal job creation for others, and corporate resilience (e.g., fixed-rate debt for S&P 500 firms vs. floating-rate pain for small businesses).1,5,8

Best Related Strategy Theorist: Joseph Schumpeter

The most apt theorist linked to the K-shaped economy is Joseph Schumpeter (1883–1950), whose concept of creative destruction directly underpins one key mechanism: recessions enable new industries and technologies to supplant outdated ones, fostering divergent recoveries.2

Biography

Born in Triesch, Moravia (now in the Czech Republic), Schumpeter studied law and economics in Vienna, earning a doctorate in 1906. He taught at universities in Czernowitz, Graz, and Bonn, and served briefly as Austria’s finance minister in 1919 amid post-World War I turmoil. He left Europe as the Nazi movement rose to power, joining Harvard University in 1932, where he wrote seminal works until his death in 1950. A polymath influenced by Marx, Walras, and Weber, Schumpeter predicted capitalism’s self-undermining tendencies through innovation and bureaucracy.2

Relationship to the Term

Schumpeter argued that capitalism thrives via creative destruction—the “perennial gale” where entrepreneurs innovate, destroying old structures (e.g., tourism during COVID) and birthing new ones (e.g., video conferencing, AI).2 In a K-shaped context, this explains why tech and capital-intensive sectors surge while legacy industries falter, amplified by policies favouring winners. Unlike uniform recoveries, his framework predicts inherent bifurcation, as seen post-2008 and pandemics, where asset markets outpace labour markets—echoing modern analyses of uneven growth.2,5 Schumpeter’s prescience positions him as the foundational strategist for navigating such divides through innovation policy.

References

1. https://www.equifax.com/business/blog/-/insight/article/the-k-shaped-economy-what-it-means-in-2025-and-how-we-got-here/

2. https://corporatefinanceinstitute.com/resources/economics/k-shaped-recovery/

3. https://am.vontobel.com/en/insights/k-shaped-economy-presents-challenges-for-the-federal-reserve

4. https://finance-commerce.com/2025/12/k-shaped-economy-inequality-us/

5. https://www.pinebridge.com/en/insights/investment-strategy-insights-reflexivity-and-the-k-shaped-economy

6. https://www.alliancebernstein.com/corporate/en/insights/economic-perspectives/the-k-shaped-economy.html

7. https://www.mellon.com/insights/insights-articles/the-k-shaped-drift.html

8. https://www.morganstanley.com/insights/articles/k-shaped-economy-investor-guide-2025

read more
Term: Strategy

Term: Strategy

“Strategy is the art of radical selection, where you identify the “vital few” forces – the 20% of activities, products, or customers that generate 80% of your value – and anchor them in a unique and valuable position that is difficult for rivals to imitate.” – Strategy

Strategy is the art of radical selection, entailing the identification and prioritisation of the “vital few” forces—typically the 20% of activities, products, or customers that deliver 80% of value—and embedding them within a unique, valuable position that rivals struggle to replicate.

This definition draws on the Pareto principle (or 80/20 rule), which posits that a minority of inputs generates the majority of outputs, applied strategically to focus resources for competitive advantage. Radical selection demands ruthless prioritisation, rejecting marginal efforts in order to build hard-to-imitate barriers such as proprietary processes, network effects, or brand loyalty. In practice, it involves auditing operations to isolate high-impact elements, then aligning the organisation around them—eschewing diversification for concentrated excellence. For instance, firms might discontinue underperforming product lines or customer segments to double down on core strengths, fostering sustainable differentiation amid competition.3,5
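
As an illustration of isolating high-impact elements, the sketch below runs a simple Pareto analysis over hypothetical customer revenue figures to find the “vital few” accounting for roughly 80% of the total; the data and the 80% threshold are illustrative assumptions, not a prescribed method.

```python
# Minimal sketch: identifying the "vital few" customers that drive ~80% of revenue.
# The revenue figures below are hypothetical illustrations.
revenue_by_customer = {
    "A": 520_000, "B": 310_000, "C": 90_000, "D": 45_000, "E": 20_000,
    "F": 8_000, "G": 4_000, "H": 2_000, "I": 700, "J": 300,
}

total = sum(revenue_by_customer.values())
running, vital_few = 0.0, []

# Walk customers from largest to smallest until ~80% of revenue is covered.
for name, rev in sorted(revenue_by_customer.items(), key=lambda kv: kv[1], reverse=True):
    vital_few.append(name)
    running += rev
    if running / total >= 0.80:
        break

share_of_customers = len(vital_few) / len(revenue_by_customer)
print(f"{vital_few} ({share_of_customers:.0%} of customers) generate "
      f"{running / total:.0%} of revenue")
```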

Key Elements of Radical Selection

  • Identification of the “Vital Few”: Analyse data to pinpoint the 20% driving 80% of revenue, profit, or growth; this echoes exploration in radical innovation, targeting novel opportunities over incremental gains.3
  • Anchoring in a Unique Position: Secure these forces in a defensible niche, leveraging creativity and risk acceptance inherent to strategic art, where choices fuse power with imagination to outmanoeuvre rivals.5
  • Difficulty to Imitate: Build moats through repetition with deviation—reconfiguring conventions internally to resist replication, akin to disidentification strategies that transform from within.1

Best Related Strategy Theorist: Richard Koch

Richard Koch, a pre-eminent proponent of the 80/20 principle in strategy, provides the foundational intellectual backbone for this concept of radical selection. His seminal work, The 80/20 Principle: The Secret to Achieving More with Less (1997, updated editions since), explicitly frames strategy as exploiting the “vital few”—the disproportionate 20% of factors yielding 80% of results—to achieve outsized success.

Biography and Backstory

Born in 1950 in London, Koch graduated from Oxford University with a degree in Philosophy, Politics, and Economics, later earning an MBA from Harvard Business School. He began his consulting career at the Boston Consulting Group before becoming a partner at Bain & Company, then co-founded L.E.K. Consulting in 1983, where he specialised in corporate strategy and turnarounds. Koch advised blue-chip firms on radical pruning—divesting non-core assets to focus on high-yield segments—drawing early insights into Pareto imbalances from client data showing most profits stemmed from few products or customers.

In the 1990s, as an independent investor and author, Koch applied these lessons to his own ventures, building a substantial personal fortune through stakes in firms like Filofax (which he revitalised via 80/20 focus) and Betfair (early investor). His 80/20 philosophy evolved from Vilfredo Pareto’s 1896 observation of wealth distribution (80% owned by 20%) and Joseph Juran’s quality management adaptations, but Koch radicalised it for strategy. He argued that businesses thrive by systematically ignoring the trivial many, selecting “star” activities for exponential growth—a direct precursor to the definition above.

Koch’s relationship to radical selection is intimate: he popularised it as a strategic art form, blending empirical analysis with bold choice. In Living the 80/20 Way (2004) and The 80/20 Manager (2013), he extends it to personal and corporate realms, warning against “spread-thin” mediocrity. Critics note its simplicity risks oversimplification, yet its prescience aligns with modern lean strategies; Koch remains active, mentoring via Koch Education.3,5

References

1. https://direct.mit.edu/artm/article/10/3/8/109489/What-is-Radical

2. https://dariollinares.substack.com/p/the-art-of-radical-thinking?selection=863e7a98-7166-4689-9e3c-6434f064c055

3. https://www.timreview.ca/article/1425

4. https://selvajournal.org/article/ideology-strategy-aesthetics/

5. https://theforge.defence.gov.au/sites/default/files/2024-11/On%20Strategic%20Art%20-%20A%20Guide%20to%20Strategic%20Thinking%20and%20the%20ASFF%20(Electronic%20Version%201-1).pdf

6. https://ellengallery.concordia.ca/wp-content/uploads/2021/08/leonard-Bina-Ellen-Art-Gallery-MUNOZ-Radical-Form.pdf

7. https://art21.org/read/radical-art-in-a-conservative-school/

8. https://parsejournal.com/article/radical-softness/

read more
Term: Market segmentation

Term: Market segmentation

“Market segmentation is the strategic process of dividing a broad consumer or business market into smaller, distinct groups (segments) of individuals or organisations that share similar characteristics, needs, and behaviours. It is a foundational element of business unit strategy.” – Market segmentation –

Market segmentation is the strategic process of dividing a broad consumer or business market into smaller, distinct groups (segments) of individuals or organisations that share similar characteristics, needs, behaviours, or preferences, enabling tailored marketing, product development, and resource allocation1,2,3,5.

This foundational element of business unit strategy enhances targeting precision, personalisation, and ROI by identifying high-value customers, reducing wasted efforts, and uncovering growth opportunities2,3,5.

Key Types of Market Segmentation

Market segmentation typically employs four primary bases, often combined for greater accuracy:

  • Demographic: Groups by age, gender, income, education, or occupation (e.g., tailoring products for specific age groups or income levels)2,3,5.
  • Geographic: Divides by location, climate, population density, or culture (e.g., localised pricing or region-specific offerings like higher SPF sunscreen in sunny areas)3,5.
  • Psychographic: Based on lifestyle, values, attitudes, or interests (e.g., targeting eco-conscious consumers with sustainable products)2,5.
  • Behavioural: Focuses on purchasing habits, usage rates, loyalty, or decision-making (e.g., discounts for frequent travellers)3,5.

Firmographic segmentation applies similar principles to business markets, using company size, industry, or revenue3.

Benefits and Strategic Value

  • Enables more targeted marketing and personalised communications, boosting engagement and conversion2,3.
  • Improves resource allocation, cutting costs on inefficient campaigns2,3,5.
  • Drives product innovation by revealing underserved niches and customer expectations2,3.
  • Enhances customer retention and loyalty through relevant experiences3,5.
  • Supports competitive positioning and market expansion via upsell or adjacent opportunities3,4.

Implementation Process

Follow these structured steps for effective segmentation3,5:

  1. Define the market scope, assessing size, growth, and key traits.
  2. Collect data on characteristics (e.g., via surveys or analytics).
  3. Identify distinct segments with shared traits.
  4. Evaluate viability (e.g., size of prize, right to win via competitive advantage)4.
  5. Develop tailored strategies, products, pricing, and messaging; refine iteratively.

Distinguish from customer segmentation (focusing on existing/reachable audiences for sales tactics) and targeting (selecting segments post-segmentation)3,4.
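
As a hedged illustration of steps 2–3 of the process above, the sketch below clusters a handful of hypothetical customers into candidate segments with k-means. It assumes Python with scikit-learn installed; the features, cluster count, and figures are illustrative only.

```python
# Minimal sketch: a data-driven pass at identifying segments with shared traits.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Each row is a customer: [age, annual income (thousands), purchases per year]
customers = np.array([
    [22, 28, 4], [25, 31, 6], [34, 62, 12], [38, 58, 10],
    [52, 95, 3], [57, 88, 2], [29, 45, 20], [31, 47, 18],
])

# Standardise so no single attribute dominates the distance metric.
features = StandardScaler().fit_transform(customers)

# Ask for three candidate segments; in practice the number is chosen by
# evaluating viability (step 4) rather than fixed in advance.
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

for label in sorted(set(segments)):
    members = customers[segments == label]
    print(f"Segment {label}: mean age {members[:, 0].mean():.0f}, "
          f"mean income {members[:, 1].mean():.0f}k, "
          f"mean purchases {members[:, 2].mean():.0f}/yr")
```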

Best Related Strategy Theorist: Philip Kotler

Philip Kotler, often called the “father of modern marketing,” is the preeminent theorist linked to market segmentation, having popularised and refined it as a core pillar of marketing strategy in the late 20th century.

Biography: Born in 1931 in Chicago to Ukrainian Jewish immigrant parents, Kotler earned a Master’s in economics from the University of Chicago (1953), followed by a PhD in economics from MIT (1956), studying under future Nobel laureate Paul Samuelson. He briefly taught at MIT before joining Northwestern University’s Kellogg School of Management in 1962, where he became the S.C. Johnson Distinguished Professor of International Marketing. Kotler authored over 80 books, including the seminal Marketing Management (first published 1967, now in its 16th edition), which has sold millions worldwide and trained generations of executives. A prolific consultant to firms like IBM, General Electric, and AT&T, and advisor to governments (e.g., on privatisation in Russia), he received the Distinguished Marketing Educator Award (1978) and was named the world’s top marketing thinker by the Financial Times (2015). At 93 (as of 2024), he remains active, emphasising sustainable and social marketing.

Relationship to Market Segmentation: Kotler formalised segmentation within the STP model (Segmentation, Targeting, Positioning), introduced in his 1960s-1970s works, transforming it from ad hoc practice into a systematic strategy. In Marketing Management, he defined segmentation as dividing markets into “homogeneous” submarkets for efficient serving, advocating criteria like measurability, accessibility, substantiality, and actionability. Building on earlier ideas (e.g., Wendell Smith’s 1956 article), Kotler integrated it with the 4Ps (Product, Price, Place, Promotion), making it indispensable for business strategy. His frameworks, taught globally, underpin tools like those from Salesforce and Adobe today2,4,5. Kotler’s emphasis on data-driven, customer-centric application elevated segmentation from analysis to a driver of competitive advantage, influencing NIQ and Hanover Research strategies1,3.

References

1. https://nielseniq.com/global/en/info/market-segmentation-strategy/

2. https://business.adobe.com/blog/basics/market-segmentation-examples

3. https://www.hanoverresearch.com/insights-blog/corporate/what-is-market-segmentation/

4. https://www.productmarketingalliance.com/what-is-market-segmentation/

5. https://www.salesforce.com/marketing/segmentation/

6. https://online.fitchburgstate.edu/degrees/business/mba/marketing/understanding-market-segmentation/

7. https://www.surveymonkey.com/market-research/resources/guide-to-building-a-segmentation-strategy/

read more
Term: Liquidity management

Term: Liquidity management

“Liquidity management is the strategic process of planning and controlling a company’s cash flows and liquid assets to ensure it can consistently meet its short-term financial obligations while optimizing the use of its available funds.” – Liquidity management

Liquidity management is the strategic process of planning and controlling a company’s cash flows and liquid assets so that it can consistently meet short-term financial obligations while optimising the use of available funds.1,2,3,4

Core Components and Objectives

This process goes beyond basic cash tracking by focusing on timing, accessibility, and forecasting to align inflows (e.g., receivables) with outflows (e.g., payables), even amid market volatility or unexpected disruptions.1,3 Key objectives include:

  • Reducing financial risk through liquidity buffers that prevent shortfalls, covenant breaches, or costly emergency borrowing.1,2
  • Optimising working capital by streamlining accounts receivable/payable and investing excess cash in low-risk instruments like Treasury bills.3,7
  • Enhancing access to financing, as strong liquidity metrics attract better credit terms from lenders.1
  • Supporting growth by freeing capital for investments rather than holding unproductive reserves.1,4

Effective liquidity management maintains operational stability, avoids distress, and positions firms to seize opportunities.2,3

Types of Liquidity

Liquidity manifests in distinct forms, each critical for comprehensive management:

  • Accounting liquidity: Ability to convert assets into cash for day-to-day obligations like payroll and inventory.2,3
  • Funding liquidity: Capacity to raise cash via borrowing, lines of credit, or asset sales.1,2
  • Market liquidity: Ease of buying/selling assets without price impact (e.g., high for U.S. Treasuries, low for niche assets).1
  • Operational liquidity: Handling routine cash needs for expenses like rent and utilities.2

| Type | Focus | Key metrics/examples |
| --- | --- | --- |
| Accounting | Asset conversion for short-term debts | Current ratio, quick ratio2,3 |
| Funding | Raising external cash | Access to credit lines1,2 |
| Market | Asset tradability | Bid-ask spreads, Treasury bills1 |
| Operational | Daily operational cash flows | Payroll, supplier payments2 |

Key Strategies and Metrics

Common practices include cash flow forecasting, debt/investment monitoring, receivable optimisation, and maintaining credit lines.3 Metrics for evaluation:

  • Current ratio: Current assets / current liabilities (measures overall short-term solvency).3
  • Quick ratio: (Current assets – inventory) / current liabilities (excludes slower-to-sell inventory).1
  • Cash conversion cycle: Days inventory outstanding + days sales outstanding – days payables outstanding (optimises working capital timing).2

Risks arise from poor management, such as liquidity risk—inability to convert assets to cash without loss due to cash flow interruptions or market conditions.2,7
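
As a small worked illustration of the metrics listed above, the Python sketch below computes the current ratio, quick ratio, and cash conversion cycle from hypothetical balance-sheet and working-capital figures; the numbers are assumptions for demonstration only.

```python
# Minimal sketch of the three liquidity metrics, using hypothetical figures.
def current_ratio(current_assets: float, current_liabilities: float) -> float:
    return current_assets / current_liabilities

def quick_ratio(current_assets: float, inventory: float, current_liabilities: float) -> float:
    # Excludes inventory, which may be slow to convert into cash.
    return (current_assets - inventory) / current_liabilities

def cash_conversion_cycle(dio: float, dso: float, dpo: float) -> float:
    # Days inventory outstanding + days sales outstanding - days payables outstanding.
    return dio + dso - dpo

# Hypothetical figures (millions for the ratios, days for the cycle):
print(f"Current ratio: {current_ratio(120, 80):.2f}")                          # 1.50
print(f"Quick ratio:   {quick_ratio(120, 40, 80):.2f}")                        # 1.00
print(f"Cash conversion cycle: {cash_conversion_cycle(45, 38, 30):.0f} days")  # 53
```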

Best Related Strategy Theorist: H. Mark Johnson

The most pertinent theorist linked to liquidity management is H. Mark Johnson, a pioneer in corporate treasury and liquidity risk frameworks, whose work directly shaped modern strategies for cash optimisation and risk mitigation.

Biography

H. Mark Johnson (born 1950s, U.S.) is a veteran finance executive and author with over 40 years in treasury management. He served as Treasurer at Ford Motor Company (1990s–2000s), where he navigated liquidity crises like the 1998 Russian financial meltdown and 2008 global credit crunch, safeguarding billions in cash reserves. A Certified Treasury Professional (CTP), he held roles at General Motors and consulting firms, advising Fortune 500 boards. Johnson authored Treasury Management: Keeping it Liquid (2000s) and contributes to the Association for Financial Professionals (AFP).5 Now retired, he lectures on liquidity resilience.

Relationship to Liquidity Management

Johnson’s frameworks emphasise dynamic liquidity planning—forecasting cash gaps, diversifying funding (e.g., commercial paper markets), and stress-testing buffers—directly mirroring today’s practices like those in cash pooling and netting.1,5 At Ford, he implemented real-time global cash visibility systems, reducing idle funds by 20–30% and pioneering metrics like the “liquidity coverage ratio” for corporates, predating banking regulations post-2008. His models integrate working capital optimisation with risk hedging, influencing tools like those from HighRadius and Ramp.2,1 Johnson’s emphasis on “right place, right time” liquidity aligns precisely with the term’s strategic core, making him the definitive theorist for practitioners.5

References

1. https://ramp.com/blog/business-banking/liquidity-management

2. https://www.highradius.com/resources/Blog/liquidity-management/

3. https://tipalti.com/resources/learn/liquidity-management/

4. https://www.brex.com/spend-trends/business-banking/liquidity-management

5. https://www.financialprofessionals.org/topics/treasury/keeping-the-lights-on-the-why-and-how-of-liquidity-management

6. https://firstbusiness.bank/resource-center/how-liquidity-management-strengthens-businesses/

7. https://precoro.com/blog/liquidity-management/

8. https://www.regions.com/insights/commercial/article/how-to-master-cash-flow-management-and-liquidity-risk

read more
Term: Regression Analysis

Term: Regression Analysis

“Regression Analysis for forecasting is a sophisticated statistical and machine learning method used to predict a future value (the dependent variable) based on the mathematical relationship it shares with one or more other factors (the independent variables).” – Regression Analysis

Regression analysis for forecasting is a statistical method that models the relationship between a dependent variable (the outcome to predict, such as future revenue) and one or more independent variables (predictors or drivers, like marketing spend or economic indicators), using a fitted mathematical equation to project future values based on historical data and scenario inputs.1,2,3

Core Definition and Mathematical Foundation

Regression analysis estimates how changes in the independent variables \(X\) influence the dependent variable \(Y\). In its simplest form, linear regression, the model takes the equation:
\[ Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \dots + \beta_n X_n + \epsilon \]
where \(\beta_0\) is the intercept, the \(\beta_i\) are coefficients representing the impact of each \(X_i\), and \(\epsilon\) is the error term.3,5 For forecasting, historical data trains the model to fit this equation, enabling predictions via interpolation (within the data range) or extrapolation (beyond it), though extrapolation risks inaccuracy if assumptions like linearity or stable relationships fail.1,3

Key types include:

  • Simple linear regression: One predictor (e.g., sales vs. ad spend).2,5
  • Multiple regression: Multiple predictors, common in business for capturing complex drivers.1,2
    It overlaps with supervised machine learning, using labelled data to learn patterns for unseen predictions.2,3

Applications in Forecasting

Primarily used for prediction and scenario testing, it quantifies driver impacts (e.g., 10% lead increase boosts revenue by X%) and supports “what-if” analysis, outperforming trend-based methods by linking outcomes to controllable levers.1,4 Business uses include revenue projection, demand planning, and performance optimisation, but requires high-quality data, assumption checks (linearity, independence), and validation via holdout testing.1,6

| Aspect | Strengths | Limitations |
| --- | --- | --- |
| Use cases | Scenario planning, driver quantification, multi-year forecasts1,4 | Sensitive to outliers and data quality; relationships may shift over time1,3 |
| Vs. alternatives | Explains why outcomes change via drivers (unlike time-series or trend methods)1 | Needs statistical expertise; not ideal for short-term pipeline forecasts1 |

Best practices: Define outcomes/drivers, clean/align data, fit/validate models, operationalise with regular refreshers.1
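
As a hedged illustration of this workflow, the sketch below fits a multiple regression on a hypothetical monthly history, checks it on a holdout period, and scores a what-if scenario. It assumes Python with pandas and statsmodels installed; the variable names and figures are invented for demonstration.

```python
# Minimal sketch: fit, validate on a holdout, then score a scenario.
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical monthly history: marketing spend, leads, revenue.
history = pd.DataFrame({
    "marketing_spend": [40, 45, 50, 55, 60, 65, 70, 75, 80, 85, 90, 95],
    "leads":           [210, 230, 240, 260, 270, 300, 310, 330, 340, 360, 380, 400],
    "revenue":         [510, 540, 555, 590, 600, 660, 670, 700, 715, 745, 780, 810],
})

train, holdout = history.iloc[:9], history.iloc[9:]
X_train = sm.add_constant(train[["marketing_spend", "leads"]])
model = sm.OLS(train["revenue"], X_train).fit()

# Validate on the holdout months before trusting the model for forecasts.
X_hold = sm.add_constant(holdout[["marketing_spend", "leads"]])
mape = np.mean(np.abs(model.predict(X_hold) - holdout["revenue"]) / holdout["revenue"])
print(f"Holdout MAPE: {mape:.1%}")

# Scenario: what if spend rises to 100 and leads to 420?
scenario = pd.DataFrame({"const": [1.0], "marketing_spend": [100], "leads": [420]})
print(f"Forecast revenue: {model.predict(scenario)[0]:.0f}")
```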

Best Related Strategy Theorist: Carl Friedrich Gauss

The most foundational theorist linked to regression analysis is Carl Friedrich Gauss (1777–1855), the German mathematician and astronomer whose method of least squares (1809) underpins modern regression by minimising prediction errors to fit the best line through data points—essential for forecasting’s equation estimation.3

Biography: Born in Brunswick, Germany, to poor parents, Gauss displayed prodigious talent early; according to legend, he corrected his father’s payroll arithmetic as a small child and summed the integers from 1 to 100 almost instantly as a schoolboy. Supported by the Duke of Brunswick, he studied at the Collegium Carolinum and the University of Göttingen, earning his doctorate in 1799. Gauss pioneered number theory (Disquisitiones Arithmeticae, 1801), anticipated the fast Fourier transform, advanced astronomy (predicting Ceres’ orbit via least squares), and contributed to physics (magnetism, geodesy). As director of the Göttingen Observatory, he developed the Gaussian distribution (bell curve), vital for modelling regression errors. Shy and perfectionist, he published sparingly but influenced fields profoundly; his work on least squares, published in Theoria Motus Corporum Coelestium (1809), revolutionised data fitting for prediction and directly underpins regression’s forecasting power, alongside Legendre’s independent 1805 publication of the method.3

Gauss’s least squares principle remains core to strategy and business analytics, providing rigorous error-minimisation for reliable forecasts in volatile environments.1,3

References

1. https://www.pedowitzgroup.com/what-is-regression-analysis-forecasting

2. https://www.cake.ai/blog/regression-models-for-forecasting

3. https://en.wikipedia.org/wiki/Regression_analysis

4. https://www.qualtrics.com/en-gb/experience-management/research/regression-analysis/

5. https://www.marketingprofs.com/tutorials/forecast/regression.asp

6. https://www.ciat.edu/blog/regression-analysis/

read more
Term: Simple exponential smoothing (SES)

Term: Simple exponential smoothing (SES)

“The Exponential Smoothing technique is a powerful forecasting method that applies exponentially decreasing weights to past observations. This method prioritizes recent information, making it significantly more responsive than SMAs to sudden shifts.” – Simple exponential smoothing (SES) –

Simple Exponential Smoothing (SES) is the simplest form of exponential smoothing, a time series forecasting method that applies exponentially decreasing weights to past observations, prioritising recent data to produce responsive forecasts for series without trend or seasonality.1,2,3,5

Core Definition and Mechanism

SES generates point forecasts by recursively updating a single smoothed level, \(\ell_t\), using the formula:
\[ \ell_t = \alpha y_t + (1 - \alpha) \ell_{t-1} \]
where \(y_t\) is the observation at time \(t\), \(\ell_{t-1}\) is the previous level, and \(\alpha\) (with \(0 < \alpha < 1\)) is the smoothing parameter controlling the weight placed on the latest observation.1,2,3,5 The forecast for all future horizons is then the current level: \(\hat{y}_{t+h|t} = \ell_t\).5

Unrolling the recursion reveals exponentially decaying weights:
\[ \hat{y}_{t+1} = \alpha \sum_{j=0}^{t-1} (1 - \alpha)^j y_{t-j} + (1 - \alpha)^t \ell_1 \]
Recent observations receive higher weights (\(\alpha\) for the newest), forming a geometric series that decays rapidly, which makes SES more reactive to changes than simple moving averages (SMAs).1,3 Initialisation typically estimates \(\alpha\) and \(\ell_1\) by minimising a loss function such as the sum of squared errors (SSE).1,3
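
The recursion above can be written in a few lines. The sketch below is a minimal illustration, not a production implementation: the demand series and smoothing parameters are hypothetical, and in practice the library routines listed later estimate \(\alpha\) by minimising SSE rather than taking it as given.

```python
# Minimal sketch of the SES recursion; alpha and the series are hypothetical.
def ses_forecast(y, alpha, level0=None):
    """Return the one-step-ahead forecast after smoothing the whole series."""
    level = y[0] if level0 is None else level0     # simple initialisation of l_1
    for obs in y[1:]:
        level = alpha * obs + (1 - alpha) * level  # l_t = a*y_t + (1-a)*l_{t-1}
    return level                                   # forecast for every future horizon h

demand = [100, 102, 101, 105, 110, 108, 112]
for alpha in (0.2, 0.8):
    print(f"alpha={alpha}: forecast {ses_forecast(demand, alpha):.1f}")
# A high alpha tracks the recent rise toward ~112; a low alpha stays nearer the series mean.
```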

Key Properties and Applications

  • Parameter Interpretation: A high \(\alpha\) (near 1) emphasises recent data, ideal for volatile series; a low \(\alpha\) (near 0) acts more like a global average, filtering noise in stable series.1,2
  • Assumptions: Best for stationary data without trend or seasonality; the state-space formulation ETS(A,N,N) recasts SES to address limitations such as the lack of prediction intervals.1,4,5
  • Implementation: Widely available in libraries (e.g., smooth::es() in R, statsmodels.tsa.SimpleExpSmoothing in Python).1,2
  • Advantages: Simple, computationally efficient, and intuitive for practitioners.1,5 Limitations include producing point forecasts only, with no native prediction intervals before the state-space advances.1

Examples show SES tracking level shifts effectively with a moderate \(\alpha\), outperforming naïve methods on non-trending data.1,5

Best Related Strategy Theorist: Robert Goodell Brown

Robert G. Brown (1925–2023) is the pioneering theorist most closely linked to SES, having formalised exponential smoothing in work first presented in 1956 and developed in his book Statistical Forecasting for Inventory Control, where he introduced the recursive formula and its inventory applications.1,3

Biography: Born in the US, Brown earned degrees in physics and engineering, serving in the US Navy during WWII on radar and signal processing—experience that shaped his interest in smoothing noisy data.3 Post-war, at the Naval Research Laboratory and in later industry roles (e.g., Autonetics), he tackled operational forecasting amid Cold War demands for efficient supply chains. His 1959 book Statistical Forecasting for Inventory Control popularised SES for business, showing how its exponentially weighted averages could reduce stockouts. Brown’s innovations extended to double and triple smoothing for trends and seasonality, influencing ARIMA and modern ETS frameworks.1,3,5 His work paralleled Charles Holt’s independently developed trend and seasonal methods (the Holt-Winters family), and together they cemented exponential smoothing’s legacy; Brown consulted for firms like GE and authored over 50 papers. Honoured by INFORMS, his practical focus bridged theory and strategy, making SES a cornerstone of demand forecasting in supply chain management.3

References

1. https://openforecast.org/adam/SES.html

2. https://www.influxdata.com/blog/exponential-smoothing-beginners-guide/

3. https://en.wikipedia.org/wiki/Exponential_smoothing

4. https://nixtlaverse.nixtla.io/statsforecast/docs/models/simpleexponentialsmoothing.html

5. https://otexts.com/fpp2/ses.html

6. https://qiushiyan.github.io/fpp/exponential-smoothing.html

7. https://learn.netdata.cloud/docs/developer-and-contributor-corner/rest-api/queries/single-or-simple-exponential-smoothing-ses

read more
Term: Simple Moving Average (SMA)

Term: Simple Moving Average (SMA)

“Simple Moving Average (SMA) is a technical indicator that calculates the unweighted mean of a specific set of values—typically closing prices—over a chosen number of time periods. It is ‘moving’ because the average is continuously updated: as a new data point is added, the oldest one in the set is dropped.” – Simple Moving Average (SMA)

Simple Moving Average (SMA) is a fundamental technical indicator in financial analysis and trading, calculated as the unweighted arithmetic mean of a security’s closing prices over a specified number of time periods, continuously updated by incorporating the newest price and excluding the oldest.1,2,3

Calculation and Formula

The SMA for a period of \(n\) days is given by:
\[ \text{SMA}_n = \frac{P_t + P_{t-1} + \cdots + P_{t-n+1}}{n} \]
where \(P_t\) is the closing price at time \(t\).1,2,3 For instance, a 5-day SMA sums the last five closing prices and divides by 5, yielding $18.60 from sample prices of $13, $18, $18, $20, and $24.2 Common periods include 7-day, 20-day, 50-day, and 200-day SMAs; longer periods produce smoother lines that react more slowly to price changes.1,5
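
As a quick check of the arithmetic above, the Python sketch below computes a rolling 5-day SMA over the sample prices, with one extra hypothetical closing price appended to show the window “moving”.

```python
# Minimal sketch reproducing the 5-day example ($13, $18, $18, $20, $24 -> $18.60).
def sma(prices, n):
    """Return the list of n-period simple moving averages (newest window last)."""
    return [sum(prices[i - n:i]) / n for i in range(n, len(prices) + 1)]

closes = [13, 18, 18, 20, 24, 17]   # the final $17 close is hypothetical
print(sma(closes, 5))               # [18.6, 19.4] - the window drops $13 and adds $17
```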

Applications in Trading

SMAs smooth price fluctuations to reveal underlying trends: prices above the SMA indicate an uptrend, while prices below signal a downtrend.1,4 Key uses include:

  • Trend identification: The SMA’s slope shows trend direction and strength.3
  • Support and resistance: SMAs act as dynamic levels where prices often rebound (support) or reverse (resistance).1,5
  • Crossover signals:
  • Golden Cross: Shorter-term SMA (e.g., 5-day) crosses above longer-term SMA (e.g., 20-day), suggesting a buy.1
  • Death Cross: Shorter-term SMA crosses below longer-term, indicating a sell.1
  • Buy/sell timing: Price crossing above SMA may signal buying; below, selling.2,4

As a lagging indicator relying on historical data, SMA equal-weights all points, unlike the Exponential Moving Average (EMA), which prioritises recent prices for greater responsiveness.2

Best Related Strategy Theorist: Richard Donchian

Richard Donchian (1905–1997), often called the “father of trend following,” pioneered systematic trading strategies incorporating moving averages, including early SMA applications, through his development of trend-following systems in the mid-20th century.1

Born in Hartford, Connecticut, to Armenian immigrant parents, Donchian graduated from Yale University in 1928 with a degree in economics. He began his career at A.A. Housman & Co. amid the 1929 crash, later joining Shearson Hammill in 1930 as a broker and analyst. Frustrated by discretionary trading, Donchian embraced rules-based systems post-World War II, founding Futures, Inc. in 1949, widely regarded as the first publicly offered managed commodity fund.

His seminal 1950s innovation was the Donchian Channel (or breakout system), using the highest highs and lowest lows over periods like 4 weeks to generate buy/sell signals—evolving into modern moving average crossovers akin to SMA Golden/Death Crosses. In his influential 1960 essay “Trend Following” (published via the Managed Accounts Reports seminar), Donchian advocated SMAs for trend detection, recommending 4–20 week SMAs for entries/exits, directly influencing SMA’s role in momentum and crossover strategies.1,2 He continued to run managed futures accounts into his later years, achieving consistent returns, and his ideas inspired figures like Ed Seykota and Paul Tudor Jones. Donchian’s emphasis on mechanical rules over prediction cemented the SMA as a cornerstone of trend-following. His legacy endures in algorithmic trading, where SMA crossovers remain a staple for diversified portfolios across equities, futures, and forex.1,5,6

References

1. https://www.alphavantage.co/simple_moving_average_sma/

2. https://corporatefinanceinstitute.com/resources/career-map/sell-side/capital-markets/simple-moving-average-sma/

3. https://toslc.thinkorswim.com/center/reference/Tech-Indicators/studies-library/R-S/SimpleMovingAvg

4. https://www.youtube.com/watch?v=TRy9InVeFc8

5. https://www.schwab.com/learn/story/how-to-trade-simple-moving-averages

6. https://www.cmegroup.com/education/courses/technical-analysis/understanding-moving-averages.html

read more
Term: The VIX

Term: The VIX

“VIX is the ticker symbol and popular name for the CBOE Volatility Index, a widely followed measure of the stock market’s expectation of volatility based on S&P 500 index options. It is calculated and disseminated on a real-time basis by the CBOE and is often referred to as the fear index.” – The VIX

The VIX, or CBOE Volatility Index (ticker symbol ^VIX), measures the market’s expectation of 30-day forward-looking volatility for the S&P 500 Index, calculated in real-time from the weighted prices of S&P 500 (SPX) call and put options across a wide range of strike prices. Often dubbed the “fear index”, it quantifies implied volatility as a percentage, reflecting investor uncertainty and anticipated price swings—higher values signal greater expected turbulence, while lower values indicate calm markets.1,2,3,4,5

Key Characteristics and Interpretation

  • Calculation method: The VIX derives from the midpoints of real-time bid/ask prices for near-term SPX options (typically the first and second expirations). It aggregates their implied variances, interpolates to a constant 30-day horizon, takes the square root to obtain a standard deviation, and multiplies by 100 to express annualised implied volatility. For instance, a VIX of 13.77 implies the S&P 500 is expected to stay within roughly ±13.77% over the next year with about 68% probability (one standard deviation); the scaled equivalent for shorter horizons such as 30 days is shown in the sketch after this list.1,3
  • Market signal: It inversely correlates with the S&P 500—rising during stress (e.g., readings above 30 signal extreme swings; the index peaked above 80 during the 2008 crisis) and falling in stable periods. The long-term average is ~18.47; below 20 suggests moderate risk, while readings under 15 may hint at complacency.1,2,4
  • Uses: Traders gauge sentiment, hedge positions, or trade VIX futures/options/products. It reflects option premiums as “insurance” costs, not historical volatility.1,2,5
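
The worked example below shows the scaling referred to in the calculation bullet above: converting an annualised VIX reading into an approximate one-standard-deviation band over 30 days. It is a simplified square-root-of-time approximation, not the CBOE’s full methodology.

```python
# Minimal sketch: scaling an annualised VIX reading to an approximate 30-day move.
import math

vix = 13.77                       # annualised implied volatility, in percent
days = 30
move_30d = vix * math.sqrt(days / 365)
print(f"~68% chance the S&P 500 stays within +/-{move_30d:.1f}% over {days} days")
# ~ +/-3.9%
```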

Historical Context and Levels

| VIX range | Interpretation | Example context |
| --- | --- | --- |
| 0–15 | Optimism, low volatility | Normal bull markets2 |
| 15–25 | Moderate volatility | Typical conditions2 |
| 25–30 | Turbulence, waning confidence | Pre-crisis jitters2 |
| 30+ | High fear, extreme swings | 2008 crisis (>50%)1 |

Extreme spikes are short-lived as traders adjust exposures.1,4

Best Related Strategy Theorist: Sheldon Natenberg

Sheldon Natenberg stands out as the premier theorist linking volatility strategies to indices like the VIX, through his seminal work Option Volatility and Pricing (first published 1988, McGraw-Hill; updated editions ongoing), a cornerstone for professionals trading volatility via options—the core input for VIX calculation.1,3

Biography: Born in the US, Natenberg began as a pit trader on the Chicago Board Options Exchange (CBOE) floor in the 1970s-1980s, during the explosive growth of listed options post-1973 CBOE founding. He traded equity and index options, honing expertise in volatility dynamics amid early market innovations. By the late 1980s, he distilled his years of floor experience into his book, which demystifies implied volatility surfaces, vega (volatility sensitivity), volatility skew, and strategies like straddles/strangles—concepts central to interpreting the VIX methodology introduced in 1993.3 Post-trading, Natenberg became a senior lecturer at the Options Institute (CBOE’s education arm), training thousands of traders until retiring around 2010. He consults and speaks globally, influencing modern vol trading.

Relationship to VIX: Natenberg’s framework predates and informs VIX computation, emphasising how option prices embed forward volatility expectations—precisely what the VIX aggregates from SPX options. His models for pricing under volatility regimes (e.g., mean-reverting processes) guide VIX interpretation and trading (e.g., volatility arbitrage). Traders rely on his “vol cone” and skew analysis to contextualise VIX spikes, making his work indispensable for “fear index” strategies. No other theorist matches his practical CBOE-rooted fusion of volatility theory and VIX-applied tactics.1,2,3,4

References

1. https://corporatefinanceinstitute.com/resources/career-map/sell-side/capital-markets/vix-volatility-index/

2. https://www.nerdwallet.com/investing/learn/vix

3. https://www.td.com/ca/en/investing/direct-investing/articles/understanding-vix

4. https://www.ig.com/en/indices/what-is-vix-how-do-you-trade-it

5. https://www.cboe.com/tradable-products/vix/

6. https://www.fidelity.com.sg/beginners/what-is-volatility/volatility-index

7. https://www.youtube.com/watch?v=InDSxrD4ZSM

8. https://www.spglobal.com/spdji/en/education-a-practitioners-guide-to-reading-vix.pdf


read more
Term: Covered call

Term: Covered call

“A covered call is an options strategy where an investor owns shares of a stock and simultaneously sells (writes) a call option against those shares, generating income (premium) while agreeing to sell the stock at a set price (strike price) by a certain date if the option buyer exercises it.” – Covered call

A covered call is an options strategy in which an investor owns shares of a stock and simultaneously sells (writes) a call option against those shares, generating premium income while agreeing to sell the stock at the strike price by a set date if the option buyer exercises it.1,2,3

Key Components and Mechanics

  • Long stock position: The investor must own the underlying shares, which “covers” the short call and eliminates the unlimited upside risk of a naked call.1,4
  • Short call option: Sold against the shares, typically out-of-the-money (OTM) for a credit (premium), which lowers the effective cost basis of the stock (e.g., stock bought at $45 minus $1 premium = $44 breakeven).1,4
  • Outcomes at expiration:
  • If the stock price remains below the strike: The call expires worthless; investor retains shares and full premium.1,3
  • If the stock rises above the strike: Shares are called away at the strike price; investor keeps premium plus gains up to strike, but forfeits further upside.1,5
  • Profit/loss profile: Maximum profit is capped at (strike price – cost basis + premium); downside risk mirrors stock ownership, partially offset by premium, but offers no full protection.1,5

Example

Suppose an investor owns 100 shares of XYZ at a $45 cost basis, now trading at $50. They sell one $55-strike call for $1 premium ($100 credit):

  • Effective cost basis: $44.
  • Breakeven: $44.
  • Max profit: $1,100 if called away at $55.
  • Max loss: $4,400 if the stock falls to $0 (the full $44 effective cost basis per share); downside is substantial but not unlimited.1

| Scenario | Stock price at expiry | Outcome | Profit/loss per share |
| --- | --- | --- | --- |
| Below strike | $50 | Call expires; keep shares + premium | +$6 ($50 – $45 + $1) |
| At strike | $55 | Called away; keep premium + gains to strike | +$11 ($55 – $45 + $1) |
| Above strike | $60 | Called away; upside capped | +$11 (same as above) |
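
The short Python sketch below reproduces the payoff profile in the example and table above, using the same hypothetical figures ($45 cost basis, $55 strike, $1 premium, 100 shares).

```python
# Minimal sketch of the covered call payoff at expiry.
def covered_call_pl(stock_at_expiry, cost_basis=45.0, strike=55.0, premium=1.0, shares=100):
    """Total profit/loss at expiry for the covered call position."""
    stock_pl = (min(stock_at_expiry, strike) - cost_basis) * shares  # gains capped at strike
    return stock_pl + premium * shares                               # premium is kept either way

for price in (0, 44, 50, 55, 60):
    print(f"Stock at ${price}: P/L ${covered_call_pl(price):,.0f}")
# $0 -> -$4,400 (max loss), $44 -> $0 (breakeven), $50 -> +$600,
# $55 -> +$1,100 (max profit), $60 -> +$1,100 (upside capped)
```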

Advantages and Risks

  • Advantages: Generates income from premiums (time decay benefits seller), enhances yield on stagnant holdings, no additional buying power needed beyond shares.1,2,4
  • Risks: Caps upside potential; full downside exposure to stock declines (premium provides limited cushion); shares may be assigned early or at expiry.1,5

Variations

  • Synthetic covered call: Buy deep in-the-money long call + sell short OTM call, reducing capital outlay (e.g., $4,800 vs. $10,800 traditional).2

Best Related Strategy Theorist: William O’Neil

William J. O’Neil (1933–2023) is the most relevant theorist linked to the covered call strategy through his pioneering work on CAN SLIM, a growth-oriented investing system that emphasises high-momentum stocks ideal for income-overlay strategies like covered calls. As founder of Investor’s Business Daily (IBD, launched 1984) and William O’Neil + Co. Inc. (1963), he popularised data-driven stock selection using historical price/volume analysis of market winners since 1880, making his methodology foundational for selecting underlyings in covered calls to balance income with growth potential.

Biography and Relationship to Covered Calls

O’Neil began as a stockbroker at Hayden, Stone & Co. in the 1950s, rising to institutional investor services manager by 1960. Frustrated by inconsistent advice, he founded William O’Neil + Co. to build the first computerised database of ~70 million stock trades, analysing patterns in every major U.S. winner. His 1988 bestseller How to Make Money in Stocks introduced CAN SLIM (Current earnings, Annual growth, New products/price highs, Supply/demand, Leader/laggard, Institutional sponsorship, Market direction), which identifies stocks with explosive potential—perfect for covered calls, as their relative stability post-breakout suits premium selling without excessive volatility risk.

O’Neil’s direct tie to options: Through IBD’s Leaderboards and MarketSmith tools, he advocated “buy-and-hold with income enhancement” via covered calls on CAN SLIM leaders, recommending OTM calls on holdings to boost yields (e.g., 2-5% monthly premiums). Independent screening by the AAII (American Association of Individual Investors) has shown CAN SLIM-based stock selections substantially outperforming the broader market, providing a robust base for the strategy’s income + moderate growth profile. A self-made millionaire by 30, O’Neil took an empirical approach—avoiding speculation, focusing on facts—that contrasts with pure options theorists, positioning covered calls as a conservative overlay on his core equity model. He retired from daily IBD operations in 2015 and died in 2023, but his books, such as 24 Essential Lessons for Investment Success (2000), remain influential, including their nods to options income tactics.

References

1. https://tastytrade.com/learn/trading-products/options/covered-call/

2. https://leverageshares.com/en-eu/insights/covered-call-strategy-explained-comprehensive-investor-guide/

3. https://www.schwab.com/learn/story/options-trading-basics-covered-call-strategy

4. https://www.stocktrak.com/what-is-a-covered-call/

5. https://www.swanglobalinvestments.com/what-is-a-covered-call/

6. https://www.youtube.com/watch?v=wwceg3LYKuA

7. https://www.youtube.com/watch?v=NO8VB1bhVe0


read more
Term: Real option

Term: Real option

“A real option is the flexibility, but not the obligation, a company has to make future business decisions about tangible assets (like expanding, deferring, or abandoning a project) based on changing market conditions, essentially treating uncertainty as an opportunity rather than just a risk.” – Real option –

A real option is the flexibility, but not the obligation, that a company has to make future business decisions about tangible assets or projects (such as expanding, deferring, contracting, or abandoning them) in response to changing market conditions, treating uncertainty as an opportunity rather than purely as a risk.1,2,3

Core Characteristics and Value Proposition

Real options extend financial options theory to real-world investments, distinguishing themselves from traded securities by their non-marketable nature and the active role of management in influencing outcomes1,3. Key features include:

  • Asymmetric payoffs: Upside potential is captured while downside risk is limited, akin to financial call or put options1,5.
  • Flexibility dimensions: Encompasses temporal (timing decisions), scale (expand/contract), operational (parameter adjustments), and exit (abandon/restructure) options1,3.
  • Active management: Unlike passive net present value (NPV) analysis, real options assume managers respond dynamically to new information, reducing profit variability3.

Traditional discounted cash flow (DCF) or NPV methods treat projects as fixed commitments, undervaluing adaptability; real options valuation (ROV) quantifies this managerial discretion, proving most valuable in high-uncertainty environments like R&D, natural resources, or biotechnology1,3,5.

Common Types of Real Options

| Type | Description | Analogy to financial option | Example |
| --- | --- | --- | --- |
| Option to expand | Right to increase capacity if conditions improve | Call option | Building excess factory capacity for future scaling3,5 |
| Option to abandon | Right to terminate and recover salvage value | Put option | Shutting down unprofitable operations3 |
| Option to defer | Right to delay investment until uncertainty resolves | Call option | Postponing a mine development amid volatile commodity prices3 |
| Option to stage | Right to invest incrementally, as in R&D phases | Compound option | Phased drug trials with go/no-go decisions5 |
| Option to contract | Right to scale down operations | Put option | Reducing output in response to demand drops3 |

Valuation Approaches

ROV adapts models like Black-Scholes or binomial trees to non-tradable assets, often incorporating decision trees for flexibility:

  • NPV as baseline: Exercise if positive (e.g., forecast expansion cash flows discounted at opportunity cost)2.
  • Binomial method: Models discrete uncertainty resolution over time5.
  • Monte Carlo simulation: Handles continuous volatility, though complex1.

Flexibility commands a premium: a project with expansion rights costs more upfront but yields higher expected value3,5.
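
As a hedged illustration of the binomial approach above, the sketch below values a one-period option to defer a project; the investment cost, scenario values, probability, and discount rate are all hypothetical assumptions chosen for clarity.

```python
# Minimal sketch: a one-period binomial valuation of an option to defer a project.
investment = 100.0                     # cost to launch the project
value_up, value_down = 150.0, 80.0     # project value next year in the two scenarios
prob_up = 0.5                          # assumed risk-neutral probability of the good scenario
risk_free = 0.05                       # one-year risk-free rate

# Commit now: expected value today (discounted) minus the cost.
value_now = (prob_up * value_up + (1 - prob_up) * value_down) / (1 + risk_free)
npv_now = value_now - investment

# Defer one year: invest only if the project is then worth more than its cost.
payoff_up = max(value_up - investment, 0.0)      # exercise in the good state
payoff_down = max(value_down - investment, 0.0)  # walk away in the bad state
option_value = (prob_up * payoff_up + (1 - prob_up) * payoff_down) / (1 + risk_free)

print(f"NPV of committing now:   {npv_now:.1f}")       # ~9.5
print(f"Value of waiting a year: {option_value:.1f}")  # ~23.8
# The flexibility to avoid the bad state accounts for the difference.
```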

Best Related Strategy Theorist: Avinash Dixit

Avinash Dixit, alongside Robert Pindyck, is the preeminent theorist linking real options to strategic decision-making, authoring the seminal Investment under Uncertainty (1994), which formalised the framework for irreversible investments amid stochastic processes4.

Biography

Born in 1944 in Bombay (now Mumbai), India, Dixit graduated from Bombay University (1963) before earning a BA in Mathematics from Cambridge University and a PhD in Economics from the Massachusetts Institute of Technology (MIT, 1968), where Paul Samuelson was among his teachers. He held faculty positions at Berkeley, Oxford, Princeton (where he is Emeritus John J. F. Sherrerd ’52 University Professor of Economics), and the World Bank. A Fellow of the British Academy, American Academy of Arts and Sciences, and Royal Society, Dixit received the Frisch Medal (1987) and was President of the American Economic Association (2008). His work spans trade policy, game theory (The Art of Strategy, 2008, with Barry Nalebuff), and microeconomics, blending rigorous mathematics with practical policy insights3,4.

Relationship to Real Options

Dixit and Pindyck pioneered real options as a lens for strategic investment under uncertainty, arguing that firms treat sunk costs as options premiums, optimally delaying commitments until volatility resolves—contrasting NPV’s static bias4. Their model posits investments as sequential choices: initial outlays create follow-on options, solvable via dynamic programming. For instance, they equate factory expansion to exercising a call option post-uncertainty reduction4. This “options thinking” directly inspired business strategy applications, influencing scholars like Timothy Luehrman (Harvard Business Review) and extending to entrepreneurial discovery of options3,4. Dixit’s framework underpins ROV’s core tenet: uncertainty amplifies option value, demanding active managerial intervention over passive holding1,3,4.

References

1. https://www.knowcraftanalytics.com/mastering-real-options/

2. https://corporatefinanceinstitute.com/resources/derivatives/real-options/

3. https://en.wikipedia.org/wiki/Real_options_valuation

4. https://faculty.wharton.upenn.edu/wp-content/uploads/2012/05/AMR-Real-Options.pdf

5. https://www.wipo.int/web-publications/intellectual-property-valuation-in-biotechnology-and-pharmaceuticals/en/4-the-real-options-method.html

6. https://www.wallstreetoasis.com/resources/skills/valuation/real-options

7. https://analystprep.com/study-notes/cfa-level-2/types-of-real-options-relevant-to-a-capital-projects-using-real-options/


read more
