
Global Advisors | Quantified Strategy Consulting

Quote: Pitchbook

“In an effort to satisfy their investors’ thirst for distributions, some [PE] fund managers are selling their crown jewels now, even if it means giving up potential returns.” – PitchBook

Private equity (PE) fund managers are increasingly selling high-value “crown jewel” assets prematurely to meet investor demands for cash distributions amid a prolonged liquidity crunch, potentially sacrificing long-term upside.1,2

Context of the Quote

This observation from PitchBook captures a core tension in the PE landscape as of late 2025, where general partners (GPs) face mounting pressure from limited partners (LPs) to return capital after years of subdued exits. Deal values reached $2.3 trillion by November 2025, on pace for the strongest year since 2021, yet distributions remain in a four-year drought extending into 2026.1,2 GPs are resorting to tools like continuation vehicles (CVs)—which now account for at least 20% of distributions as LPs opt to sell rather than roll—secondary sales, NAV lending, and portfolio stake sales to manufacture liquidity.1,2,3 High-quality assets command premiums, skewing transaction statistics upward, but GPs accept 11-20% discounts on long-held holdings to facilitate sales, especially for lower-quality or earlier investments retained post-2021.4 This “distribution drought” stems from a backlog of long-hold companies, valuation gaps, leverage constraints, and competition from patient capital like sovereign wealth funds and family offices, forcing even top assets out the door despite growth potential.3,4,6,7

Dry powder stands at $880 billion (US PE) to over $2.5 trillion globally, but deployment favors creative structures like carve-outs, take-privates, and evergreens—projected to hold 20% of private market capital within a decade—over traditional buyouts.1,3,6 Exits via IPOs and M&A are rebounding (volumes up 43% YoY), but remain muted relative to net asset values, with GPs prioritizing LP satisfaction over holding for peak returns.4,5 Middle-market firms, in particular, adopt cautious risk appetites, extending diligence and avoiding overpayment in a sellers’ market for quality deals.6

Backstory on PitchBook

PitchBook, the source of this quote, is a leading data and research provider on private capital markets, founded in 2007 and acquired by Morningstar in 2016. It tracks over 3 million companies, 2 million funds, and trillions in deal flow, offering benchmarks, valuations, and investor insights drawn from proprietary databases. Known for its rigorous analysis of PE trends—like liquidity pressures and GP-LP dynamics—PitchBook influences institutional allocators and GPs through its reports. The quote likely emerges from its 2025-2026 market commentary, aligning with surveys showing GPs willing to discount assets to unlock cash amid LP impatience.4

Leading Theorists on PE Liquidity and Distributions

The quote ties into foundational and contemporary theories on agency problems in PE (misaligned incentives between GPs and LPs) and liquidity transformation in illiquid assets. Key figures include:

Quote: Associated Press – On AI shopping

“Google, OpenAI and Amazon all are racing to create tools that would allow for seamless AI-powered shopping.” – Associated Press

When the Associated Press observes that “Google, OpenAI and Amazon all are racing to create tools that would allow for seamless AI-powered shopping”, it is capturing a pivotal moment in the evolution of retail and of the internet itself. The quote sits at the intersection of several long-running trends: the shift from search to conversation, from static websites to intelligent agents, and from one-size-fits-all retail to deeply personalised, data-driven commerce.

Behind this single sentence lies a complex story of technological breakthroughs, strategic rivalry between the world’s largest technology platforms, and a reimagining of how people discover, evaluate and buy what they need. It also reflects the culmination of decades of research in artificial intelligence, recommendation systems, human-computer interaction and digital economics.

The immediate context: AI agents meet the shopping basket

The Associated Press line comes against the backdrop of a wave of partnerships between AI platforms and major retailers. Google has been integrating its Gemini AI assistant with large retail partners such as Walmart and Sam’s Club, allowing users to move from a conversational query directly to tailored product recommendations and frictionless checkout.

Instead of typing a product name into a search bar, a shopper can describe a situation or a goal, such as planning a camping trip or furnishing a first flat. Gemini then uses natural language understanding and retailer catalogues to surface relevant items, combine them into coherent baskets and arrange rapid delivery, in some cases within hours.1,3 The experience is meant to feel less like using a website and more like speaking to a highly knowledgeable personal shopper.

Walmart leaders have described this shift as a move from traditional search-based ecommerce to what they call “agent-led commerce” – shopping journeys mediated not by menus and filters but by AI agents that understand intent, context and personal history.1,2,3 For Google, this integration is both a way to showcase the capabilities of its Gemini models and a strategic response to OpenAI’s work with retailers like Walmart, Etsy and a wide range of Shopify merchants through tools such as Instant Checkout.2,3

OpenAI, in parallel, has enabled users to browse and buy directly within ChatGPT, turning the chatbot into a commercial surface as well as an information tool.2,3 Amazon, for its part, has been weaving generative AI into its core marketplace, logistics and voice assistant, using AI models to improve product discovery, summarise reviews, optimise pricing and automate seller operations. Each company is betting that the next era of retail will be shaped by AI agents that can orchestrate entire end-to-end journeys from inspiration to doorstep.

From web search to agentic commerce

The core idea behind “seamless AI-powered shopping” is the replacement of fragmented, multi-step customer journeys with coherent, adaptive experiences guided by AI agents. Historically, online shopping has been built around search boxes, category trees and static product pages. The burden has been on the consumer to know what they want, translate that into search terms, sift through results and manually assemble baskets.

Agentic commerce reverses this burden. The AI system becomes an active participant: interpreting vague goals, proposing options, remembering preferences, coordinating logistics and handling payments, often across multiple merchants. Google and OpenAI have both underpinned their efforts with new open protocols designed to let AI agents communicate with a wide ecosystem of retailers, payment providers and loyalty systems.3,5

Google refers to its initiative as a Universal Commerce Protocol and describes it as a new standard that allows agents and systems to talk to each other across each step of the shopping journey.3,5 OpenAI, in turn, introduced the Agentic Commerce Protocol in partnership with Stripe, enabling ChatGPT and other agents to complete purchases from Etsy and millions of Shopify merchants.3 The technical details differ, but the strategic goal is shared: create an infrastructure layer that allows any capable AI agent to act as a universal shopping front end.

In practice, this means that a single conversation might involve discovering a new product, joining a retailer’s loyalty scheme, receiving personalised offers, adding related items and completing payment – without ever visiting a conventional website or app. The Associated Press quote calls out the intensity of the competition between the major platforms to control this new terrain.

The Associated Press as observer and interpreter

The Associated Press (AP), the attributed source of the quote, has a distinctive role in this story. Founded in 1846, AP is one of the world’s oldest and most widely used news agencies. It operates as a non-profit cooperative, producing reporting that is syndicated globally and used as a baseline for coverage by broadcasters, newspapers and digital platforms.

AP has long been known for its emphasis on factual, neutral reporting, and over the past decade it has also become notable for its early adoption of AI in news production. It has experimented with automated generation of corporate earnings summaries, sports briefs and other data-heavy stories, while also engaging in partnerships with technology companies around synthetic media and content labelling.

By framing the competition between Google, OpenAI and Amazon as a “race” to build seamless AI shopping, AP is doing more than simply documenting product launches. It is drawing attention to the structural stakes: the question of who will mediate the everyday economic decisions of billions of people. AP’s wording underscores both the speed of innovation and the concentration of power in a handful of technology giants.

AP’s technology and business correspondents, in covering this domain, typically triangulate between company announcements, analyst commentary and academic work on AI and markets. The quote reflects that blend: it is rooted in concrete developments such as the integration of Gemini with major retailers and the emergence of new commerce protocols, but it also hints at broader theoretical debates about platforms, data and consumer autonomy.

Intellectual roots: from recommendation engines to intelligent agents

The idea of seamless, AI-mediated shopping is the visible tip of an intellectual iceberg that stretches back decades. Several overlapping fields contribute to the current moment: information retrieval, recommender systems, multi-sided platforms, behavioural economics and conversational AI. The leading theorists in these areas laid the groundwork for the systems now shaping retail.

Search and information retrieval

Long before conversational agents, the central challenge of online commerce was helping people find relevant items within vast catalogues. Researchers in information retrieval, such as Gerard Salton in the 1960s and 1970s, developed foundational models for document ranking and term weighting that later underpinned web search.

In the context of commerce, the key innovation was the integration of relevance ranking with commercial signals such as click-through rates, purchase behaviour and sponsored listings. Google’s original PageRank algorithm, developed by Larry Page and Sergey Brin, revolutionised how information was organised on the web and provided the basis for search advertising – itself a driver of modern retail. As search became the dominant gateway to online shopping, the line between information retrieval and marketing blurred.

The move to AI-powered shopping agents extends this lineage. Instead of ranking static pages, large language models interpret natural language queries, generate synthetic descriptions and orchestrate actions such as adding items to a basket. The theoretical challenge shifts from simply retrieving documents to modelling context, intent and dialogue.

Recommender systems and personalisation

Much of seamless AI-powered shopping depends on the ability to personalise offers and predict what a particular consumer is likely to want. This traces back to work on recommender systems in the 1990s and 2000s. Pioneers such as John Riedl and Joseph Konstan developed early collaborative filtering systems that analysed user ratings to make personalised suggestions.

The famous Netflix Prize in the mid-2000s catalysed work on matrix factorisation and latent factor models, with researchers like Yehuda Koren demonstrating how to predict preferences from sparse interaction data. Amazon itself became synonymous with recommender systems, popularising the idea that “customers who bought this also bought” could drive significant incremental revenue.

Over time, recommendation theory has expanded to consider not just accuracy but diversity, serendipity and fairness. Work by researchers such as Gediminas Adomavicius and Alexander Tuzhilin analysed trade-offs between competing objectives in recommender systems, while others explored issues of filter bubbles and echo chambers.

In AI-powered shopping, these theoretical concerns are amplified. When a single conversational agent mediates choices across many domains, its recommendation logic effectively becomes a form of personalised market design. It can nudge users towards particular brands, balance commercial incentives with user welfare, and shape long-term consumption habits. The underlying theories of collaborative filtering, contextual bandits and reinforcement learning now operate in a more visible, consequential arena.
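
As a toy illustration of the latent-factor idea behind these systems, the short NumPy sketch below factorises a tiny, entirely synthetic ratings matrix into user and item factors and combines them into predicted preferences. The data, factor count and learning rates are illustrative assumptions only, not any production recommender.

```python
import numpy as np

rng = np.random.default_rng(42)

# Tiny, synthetic ratings matrix: 4 users x 5 products, 0 = unobserved
R = np.array([
    [5, 3, 0, 1, 0],
    [4, 0, 0, 1, 1],
    [1, 1, 0, 5, 4],
    [0, 1, 5, 4, 0],
], dtype=float)

k = 2  # number of latent factors
U = rng.normal(scale=0.1, size=(R.shape[0], k))  # user factors
V = rng.normal(scale=0.1, size=(R.shape[1], k))  # item factors
mask = R > 0  # only fit observed ratings

# Full-batch gradient descent on squared error over observed entries,
# with light L2 regularisation (the Netflix-Prize-style latent factor objective)
for _ in range(2000):
    err = (R - U @ V.T) * mask
    U += 0.01 * (err @ V - 0.02 * U)
    V += 0.01 * (err.T @ U - 0.02 * V)

predictions = U @ V.T  # predicted preference for every user-product pair
print(np.round(predictions, 1))
```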

Multi-sided platforms and the economics of marketplaces

The race between Google, OpenAI and Amazon is also a contest between different platform models. Economists such as Jean-Charles Rochet and Jean Tirole provided the canonical analysis of multi-sided platforms – markets where intermediaries connect distinct groups of users, such as buyers and sellers, advertisers and viewers.

The theory of platform competition explains why network effects and data accumulation can produce powerful incumbents, and why controlling the interface through which users access multiple services confers strategic advantages. Amazon Marketplace, Google Shopping and ad networks, and now AI agents embedded in operating systems or browsers, can all be seen through this lens.

Further work by David Evans, Andrei Hagiu and others explored platform governance, pricing structures and the strategic choice between being a neutral intermediary or a competitor to one’s own participants. These ideas are highly relevant when AI agents choose which merchants or products to recommend and on what terms.

Seamless AI shopping turns the agent itself into a platform. It connects consumers, retailers, payment services, logistics providers and loyalty schemes through a conversational interface. The Universal Commerce Protocol and the Agentic Commerce Protocol can be understood as attempts to standardise interactions within this multi-sided ecosystem.3,5 The underlying tensions – between openness and control, neutrality and self-preferencing – are illuminated by platform economics.

Behavioural economics, choice architecture and digital nudging

While traditional economics often assumes rational agents and transparent markets, the reality of digital commerce has always been shaped by design: the ordering of search results, the framing of options, the use of defaults, and the timing of prompts. Behavioural economists like Daniel Kahneman, Amos Tversky and Richard Thaler have demonstrated how real-world decision-making deviates from rational models and how “choice architecture” can influence outcomes.

In online retail, this has manifested as a rich literature on digital nudging: subtle interface choices that steer behaviour. Researchers in human-computer interaction and behavioural science have documented how factors such as social proof, scarcity cues and personalised messaging affect conversion.

AI-powered shopping agents add another layer. Instead of static designs, the conversation itself becomes the choice architecture. The way an AI agent frames options, in what order it presents them, how it responds to hesitation and how it explains trade-offs, all shape consumer welfare. Theorists working at the intersection of AI and behavioural economics are now grappling with questions of transparency, autonomy and manipulation in agentic environments.

Conversational AI and human-computer interaction

The ability to shop by talking to an AI depends on advances in natural language processing, dialogue modelling and user-centred design. The early work of Joseph Weizenbaum (ELIZA) and the subsequent development of chatbots provided the conceptual foundations, but the major leap came with deep learning and large language models.

Researchers such as Yoshua Bengio, Geoffrey Hinton and Yann LeCun advanced the neural network architectures that underpin today’s generative models. Within natural language processing, work by many teams on sequence-to-sequence learning, attention mechanisms and transformer architectures led to systems capable of understanding and generating human-like text.

OpenAI popularised the transformer-based large language model with the GPT series, while Google researchers contributed foundational work on transformers and later developed models like BERT and its successors. These advances turned language interfaces from novelties into robust tools capable of handling complex, multi-turn interactions.

Human-computer interaction specialists, meanwhile, studied how people form mental models of conversational agents, how trust is built or undermined, and how to design dialogues that feel helpful rather than intrusive. The combination of technical capability and design insight has made it plausible for people to rely on an AI agent to curate shopping choices.

Autonomous agents and “agentic” AI

The term “agentic commerce” used by Walmart and Google points to a broader intellectual shift: viewing AI systems not just as passive tools but as agents capable of planning and executing sequences of actions.1,5 In classical AI, agent theory has its roots in work on autonomous systems, reinforcement learning and decision-making under uncertainty.

Reinforcement learning theorists such as Richard Sutton and Andrew Barto formalised the idea of an agent learning to act in an environment to maximise reward. In ecommerce, this can translate into systems that learn how best to present options, when to offer discounts or how to balance immediate sales with long-term customer satisfaction.

Recent research on tool-using agents goes further, allowing language models to call external APIs, interact with databases and coordinate services. In commerce settings, that means an AI can check inventory, query shipping options, apply loyalty benefits and complete payments – all within a unified reasoning loop. Google’s and OpenAI’s protocols effectively define the “environment” in which such agents operate and the “tools” they can use.3,5
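
To make the idea of a unified reasoning loop more tangible, here is a deliberately simplified, hypothetical Python sketch. The tool names, guardrail and ordering are illustrative assumptions only; they do not represent Google’s or OpenAI’s actual protocols or APIs.

```python
# Hypothetical merchant-side tools an agent might be permitted to call.
# All names and behaviours here are illustrative stubs, not a real protocol.
def check_inventory(item: str) -> bool:
    return True  # stubbed: pretend the item is in stock

def get_shipping_options(item: str) -> list:
    return ["next-day", "standard"]

def apply_loyalty_discount(price: float) -> float:
    return round(price * 0.95, 2)  # stubbed 5% member discount

def complete_payment(item: str, price: float, shipping: str) -> str:
    return f"order confirmed: {item} ({shipping}) at ${price}"

def shopping_agent(item: str, list_price: float, spend_limit: float = 100.0) -> str:
    """One pass of the loop described above: check stock, pick shipping,
    apply benefits, then pay -- with a guardrail before money moves."""
    if not check_inventory(item):
        return "item unavailable; propose alternatives to the user"
    shipping = get_shipping_options(item)[0]
    price = apply_loyalty_discount(list_price)
    if price > spend_limit:  # example guardrail: require explicit user approval
        return "awaiting user confirmation before purchase"
    return complete_payment(item, price, shipping)

print(shopping_agent("camping stove", 49.99))
```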

The theoretical questions now concern safety, alignment and control: how to ensure that commercially motivated agents act in ways that are consistent with user interests and regulatory frameworks, and how to audit their behaviour when their decision-making is both data-driven and opaque.

Corporate protagonists: Google, OpenAI and Amazon

The Associated Press quote names three central actors, each with a distinct history and strategic posture.

Google: from search to Gemini-powered commerce

Google built its business on organising the world’s information and selling targeted advertising against search queries. Its dominance in web search made it the default starting point for many online shopping journeys. As user behaviour has shifted towards conversational interfaces and specialised shopping experiences, Google has sought to extend its role from search engine to AI companion.

Gemini, Google’s family of large language models and AI assistants, sits at the heart of this effort. By integrating Gemini into retail scenarios, Google is attempting to ensure that when people ask an AI for help – planning a project, solving a problem or buying a product – it is their agent, not a competitor’s, that orchestrates the journey.1,3,5

Partnerships with retailers such as Walmart, Target, Shopify, Wayfair and others, combined with the Universal Commerce Protocol, are strategic levers in this competition.1,3,4,5 They allow Google to showcase Gemini as a shopping concierge while making it easier for merchants to plug into the ecosystem without bespoke integrations for each AI platform.

OpenAI: from research lab to commerce gateway

OpenAI began as a research-focused organisation with a mission to ensure that artificial general intelligence benefits humanity. Over time, it has commercialised its work through APIs and flagship products such as ChatGPT, which rapidly became one of the fastest-growing consumer applications in history.

As users started to rely on ChatGPT not just for information but for planning and decision-making, the platform became an attractive entry point for commerce. OpenAI’s Instant Checkout feature and the Agentic Commerce Protocol reflect an attempt to formalise this role. By enabling users to buy directly within ChatGPT from merchants on platforms like Shopify and Etsy, OpenAI is turning its assistant into a transactional hub.2,3

In this model, the AI agent can browse catalogues, compare options and present distilled choices, collapsing the distance between advice and action. The underlying theory draws on both conversational AI and platform economics: OpenAI positions itself as a neutral interface layer connecting consumers and merchants, while also shaping how information and offers are presented.

Amazon: marketplace, infrastructure and the invisible AI layer

While the provided context focuses more explicitly on Google and OpenAI, Amazon is an equally significant player in AI-powered shopping. Its marketplace already acts as a giant, data-rich environment where search, recommendation and advertising interact.

Amazon has deployed AI across its operations: in demand forecasting, warehouse robotics, delivery routing, pricing optimisation and its Alexa voice assistant. It has also invested heavily in generative AI to enhance product search, summarise reviews and assist sellers with content creation.

From a theoretical standpoint, Amazon exemplifies the vertically integrated platform: it operates the marketplace, offers its own branded products, controls logistics and, increasingly, provides the AI services that mediate discovery. Its approach to AI shopping is therefore as much about improving internal efficiency and customer experience as about creating open protocols.

In the race described by AP, Amazon’s strengths lie in its end-to-end control of the commerce stack and its granular data on real-world purchasing behaviour. As conversational and agentic interfaces become more common, Amazon is well placed to embed them deeply into its existing shopping flows.

Retailers as co-architects of AI shopping

Although the quote highlights technology companies, retailers such as Walmart, Target and others are not passive recipients of AI tools. They are actively shaping how agentic commerce unfolds. Walmart, for example, has worked with both OpenAI and Google, enabling Instant Checkout in ChatGPT and integrating its catalogue and fulfilment options into Gemini.1,2,3

Walmart executives have spoken about “rewriting the retail playbook” and closing the gap between “I want it” and “I have it” using AI.2 The company has also launched its own AI assistant, Sparky, within its app, and has been candid about how AI will transform roles across its workforce.2

These moves reflect a broader theoretical insight from platform economics: large retailers must navigate their relationships with powerful technology platforms carefully, balancing the benefits of reach and innovation against the risk of ceding too much control over customer relationships. By participating in open protocols and engaging multiple AI partners, retailers seek to maintain some leverage and avoid lock-in.

Other retailers and adjacent companies are exploring similar paths. Home Depot, for instance, has adopted Gemini-based agents to provide project planning and aisle-level guidance in stores, while industrial partners like Honeywell are using AI to turn physical spaces into intelligent, sensor-rich environments.5 These developments blur the line between online and offline shopping, extending the idea of seamless AI-powered commerce into bricks-and-mortar settings.

The emerging theory of AI-mediated markets

As AI agents become more entwined with commerce, several theoretical threads are converging into what might be called the theory of AI-mediated markets:

  • Information symmetry and asymmetry: AI agents can, in principle, reduce information overload and help consumers navigate complex choices. But they also create new asymmetries, as platform owners may know far more about aggregate behaviour than individual users.
  • Algorithmic transparency and accountability: When an AI agent chooses which products to recommend, the criteria may include relevance, profit margins, sponsorship and long-term engagement. Understanding and governing these priorities is an active area of research and regulation.
  • Competition and interoperability: The existence of multiple commerce protocols and agent ecosystems raises questions about interoperability, switching costs and the potential for AI-mediated markets to become more or less competitive than their predecessors.
  • Personalisation versus autonomy: Enhanced personalisation can make shopping more efficient and enjoyable but may also narrow exposure to alternatives or gently steer behaviour in ways that users do not fully perceive.
  • Labour and organisational change: As AI takes on more of the cognitive labour of retail – from customer service to merchandising – the roles of human workers evolve. The theoretical work on technology and labour markets gains a new frontier in AI-augmented retail operations.

Researchers from economics, computer science, law and sociology are increasingly studying these dynamics, building on the earlier theories of platforms, recommendations and behavioural biases but extending them into a world where the primary interface to the market is itself an intelligent agent.

Why this moment matters

The Associated Press quote distils a complex, multi-layered transformation into a single observation: the most powerful technology firms are in a race to define how we shop in an age of AI. The endpoint of that race is not just faster checkout or more targeted ads. It is a restructuring of the basic relationship between consumers, merchants and the digital intermediaries that connect them.

Search boxes and product grids are giving way to conversations. Static ecommerce sites are being replaced or overlaid by agents that can understand context, remember preferences and act on our behalf. The theories of information retrieval, recommendation, platforms and behavioural economics that once described separate facets of digital commerce are converging in these agents.

Understanding the backstory of this quote – the intellectual currents, corporate strategies and emerging protocols behind it – is essential for grasping the stakes of AI-powered shopping. It is not merely a technological upgrade; it is a shift in who designs, controls and benefits from the everyday journeys that connect intention to action in the digital economy.

References

1. https://pulse2.com/walmart-and-google-turn-ai-discovery-into-effortless-shopping-experiences/

2. https://www.thefinance360.com/walmart-partners-with-googles-gemini-to-offer-ai-shopping-assistant-to-shoppers/

3. https://www.businessinsider.com/gemini-chatgpt-openai-google-competition-walmart-deal-2026-1

4. https://retail-insider.com/retail-insider/2026/01/google-expands-ai-shopping-with-walmart-shopify-wayfair/

5. https://cloud.google.com/transform/a-new-era-agentic-commerce-retail-ai

6. https://winningwithwalmart.com/walmart-teams-up-with-google-gemini-what-it-means-for-shoppers-and-suppliers/

Term: Simple exponential smoothing (SES)

“The Exponential Smoothing technique is a powerful forecasting method that applies exponentially decreasing weights to past observations. This method prioritizes recent information, making it significantly more responsive than SMAs to sudden shifts.” – Simple exponential smoothing (SES)

Simple Exponential Smoothing (SES) is the simplest form of exponential smoothing, a time series forecasting method that applies exponentially decreasing weights to past observations, prioritising recent data to produce responsive forecasts for series without trend or seasonality.1,2,3,5

Core Definition and Mechanism

SES generates point forecasts by recursively updating a single smoothed level value, \( \ell_t \), using the formula:

\[ \ell_t = \alpha y_t + (1 - \alpha)\,\ell_{t-1} \]

where \( y_t \) is the observation at time \( t \), \( \ell_{t-1} \) is the previous level, and \( \alpha \) (with \( 0 < \alpha < 1 \)) is the smoothing parameter controlling the weight on the latest observation.1,2,3,5 The forecast for all future periods is then the current level: \( \hat{y}_{t+h|t} = \ell_t \).5

Unrolling the recursion reveals exponentially decaying weights:

\[ \hat{y}_{t+1|t} = \alpha \sum_{j=0}^{t-1} (1 - \alpha)^j\, y_{t-j} + (1 - \alpha)^t\, \ell_1 \]

Recent observations receive higher weights (\( \alpha \) for the newest), forming a geometric series that decays rapidly, making SES more reactive to changes than simple moving averages (SMAs).1,3 Initialisation typically estimates \( \alpha \) and \( \ell_1 \) by minimising loss functions like SSE.1,3
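
To make the recursion concrete, the following minimal Python sketch applies the level update and returns the flat forecast; the series, \( \alpha \) value and initialisation are illustrative assumptions, not taken from the cited sources.

```python
import numpy as np

def ses(y, alpha, initial_level=None):
    """Minimal simple exponential smoothing: returns the fitted levels and the
    flat forecast (the final level), which is used for every horizon h."""
    y = np.asarray(y, dtype=float)
    level = y[0] if initial_level is None else initial_level  # simple initialisation choice
    levels = []
    for obs in y:
        level = alpha * obs + (1 - alpha) * level  # recursive level update
        levels.append(level)
    return np.array(levels), level

# Illustrative series with a level shift around the middle
y = [10, 11, 10, 12, 11, 18, 19, 18, 20, 19]
fitted, forecast = ses(y, alpha=0.3)
print(round(forecast, 2))  # the same value serves as the forecast for t+1, t+2, ...
```

Because the forecast is simply the latest level, the forecast function is flat, consistent with SES’s assumption of no trend or seasonality.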

Key Properties and Applications

  • Parameter Interpretation: High \( \alpha \) (near 1) emphasises recent data, ideal for volatile series; low \( \alpha \) (near 0) acts like a global average, filtering noise in stable series.1,2
  • Assumptions: Best for stationary data without trend or seasonality; extensions like ETS(A,N,N) address limitations via state-space models.1,4,5
  • Implementation: Widely available in libraries (e.g., smooth::es() in R, statsmodels.tsa.api.SimpleExpSmoothing in Python).1,2
  • Advantages: Simple, computationally efficient, intuitive for practitioners.1,5 Limitations include point forecasts only (no native intervals pre-state-space advances).1

Examples show SES tracking level shifts effectively with moderate \( \alpha \), outperforming naïve methods on non-trending data.1,5
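
In practice the smoothing parameter is usually estimated rather than hand-picked. A brief sketch using the statsmodels implementation mentioned above might look as follows, assuming a recent statsmodels version and an illustrative series:

```python
import numpy as np
from statsmodels.tsa.api import SimpleExpSmoothing

# Illustrative, non-trending series with a level shift
y = np.array([10, 11, 10, 12, 11, 18, 19, 18, 20, 19], dtype=float)

fit = SimpleExpSmoothing(y, initialization_method="estimated").fit()  # alpha and initial level chosen by minimising SSE
print(fit.params["smoothing_level"])  # estimated alpha
print(fit.forecast(3))                # flat three-step-ahead forecast
```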

Best Related Strategy Theorist: Robert Goodell Brown

Robert G. Brown (1925–2023) is the pioneering theorist most closely linked to SES, having formalised exponential smoothing in the mid-1950s and codified it in his seminal book Statistical Forecasting for Inventory Control, where he introduced the recursive formula and its inventory applications.1,3

Biography: Born in the US, Brown earned degrees in physics and engineering, serving in the US Navy during WWII on radar and signal processing—experience that shaped his interest in smoothing noisy data.3 Post-war, at the Naval Research Laboratory and later in industry roles (e.g., at Autonetics), he tackled operational forecasting amid Cold War demands for efficient supply chains. His 1959 book Statistical Forecasting for Inventory Control popularised SES for business, showing how exponentially weighted averages could reduce stockouts. Brown’s innovations extended to double and triple smoothing for trends/seasonality, influencing ARIMA and modern ETS frameworks.1,3,5 Collaborations with Charles Holt (Holt-Winters) cemented his legacy; he consulted for firms like GE, authoring over 50 papers. Honoured by INFORMS, Brown’s practical focus bridged theory and strategy, making SES a cornerstone of demand forecasting in supply chain management.3

References

1. https://openforecast.org/adam/SES.html

2. https://www.influxdata.com/blog/exponential-smoothing-beginners-guide/

3. https://en.wikipedia.org/wiki/Exponential_smoothing

4. https://nixtlaverse.nixtla.io/statsforecast/docs/models/simpleexponentialsmoothing.html

5. https://otexts.com/fpp2/ses.html

6. https://qiushiyan.github.io/fpp/exponential-smoothing.html

7. https://learn.netdata.cloud/docs/developer-and-contributor-corner/rest-api/queries/single-or-simple-exponential-smoothing-ses

Quote: Pitchbook

“Much of the market continues to find it difficult to raise venture capital funding. Non-AI companies have accounted for just 35% of deal value through Q3 2025, while representing more than 60% of completed deals.” – Pitchbook

PitchBook’s data through Q3 2025 reveals a stark disparity in venture capital (VC) funding, where non-AI companies captured just 35% of total deal value despite comprising over 60% of deals, underscoring investor preference for AI-driven opportunities amid market caution.1,4,5

Context of the Quote

This statistic, sourced from PitchBook’s Q3 2025 Venture Monitor (in collaboration with the National Venture Capital Association), highlights the “flight to quality” trend dominating VC dealmaking. Through the first nine months of 2025, overall deal counts reached 3,990 in Q1 alone (up 11% quarter-over-quarter), with total value hitting $91.5 billion—a post-2022 high driven largely by AI sectors.4,5 However, smaller and earlier-stage non-AI startups received only 36% of total value, the decade’s lowest share, as investors prioritized larger, AI-focused rounds amid uncertainties like tariffs, market volatility, and subdued consumer sentiment.3,4

Fundraising for VC funds also plummeted, with Q1 2025 seeing just 87 vehicles close at $10 billion—the lowest activity in over a decade—and dry powder nearing $300 billion but deploying slowly.4 Exit activity hinted at recovery ($56 billion in Q1 from 385 deals) but faltered due to paused IPOs (e.g., Klarna, StubHub) and reliance on outliers like CoreWeave’s IPO, which accounted for nearly 40% of value.4 PitchBook’s H1 2025 VC Tech Survey of 32 investors confirmed this shift: 52% see AI disrupting fintech (up from 32% in H2 2024), with healthcare, enterprise tech, and cybersecurity following suit, while VC outlooks soured (only 38% expect rising funding, down from 58%).1 The quote encapsulates a market where volume persists but value concentrates in AI, leaving non-AI firms struggling for capital in a selective environment.

Backstory on PitchBook

PitchBook, founded in 2007 by John Gabbert in Seattle, emerged as a leading data provider for private capital markets from humble origins as a simple Excel-based tool for tracking VC and private equity deals. Acquired by Morningstar in 2016 for $225 million, it has grown into an authoritative platform aggregating data on over 3 million companies, 1.5 million funds, and millions of deals worldwide, powering reports like the PitchBook-NVCA Venture Monitor.3,4,5 Its Q3 2025 analysis draws from proprietary datasets as of late 2025, offering granular insights into deal counts, values, sector breakdowns, and fundraising—essential for investors navigating post-2022 VC normalization. PitchBook’s influence stems from its real-time tracking and predictive modeling, cited across industry reports for benchmarking trends like AI dominance and liquidity pressures.1,2,4

Leading Theorists on VC Market Dynamics and AI Concentration

The quote aligns with foundational theories on VC cycles, power laws, and technological disruption. Key thinkers include:

  • Bill Janeway (author of Doing Capitalism in the Innovation Economy, 2012): A veteran VC at Warburg Pincus, Janeway theorized VC as a “three-legged stool” of government R&D, entrepreneurial risk-taking, and financial engineering. He predicted funding concentration in breakthrough tech like AI during downturns, as investors seek “moonshots” amid capital scarcity—mirroring 2025’s non-AI value drought.1,4

  • Peter Thiel (co-founder of PayPal, Founders Fund; Zero to One, 2014): Thiel’s “definite optimism” framework argues VCs favor monopolistic, tech-dominant firms (e.g., AI) over competitive commoditized ones, enforcing power-law distributions where 80-90% of returns come from 1-2% of deals. This explains non-AI firms’ deal volume without value, as Thiel warns against “indefinite optimism” in crowded sectors.4

  • Andy Kessler (author of Venture Capital Deals, 1986; Wall Street Journal columnist): Kessler formalized the VC “spray and pray” model evolving into selective bets during liquidity crunches, predicting AI-like waves would eclipse legacy sectors—evident in 2025’s fintech AI disruption forecasts.1

  • Scott Kupor (a16z managing partner; Secrets of Sand Hill Road, 2019): Kupor analyzes LP-VC dynamics, noting how dry powder buildup (nearing $300B in 2025) leads to extended fund timelines and AI favoritism, as LPs demand outsized returns amid low distributions.1,2,4

  • Diane Mulcahy (former Providence Equity; The New World of Entrepreneurship, 2013): Mulcahy critiqued VC overfunding bubbles, advocating “patient capital” for non-hyped sectors; her warnings resonate in 2025’s fundraising cliff and non-AI funding gaps.4

These theorists collectively frame 2025’s trends as a power-law amplification of AI amid cyclical caution, building on historical VC patterns from the dot-com bust to post-2008 recovery.

References

1. https://www.foley.com/insights/publications/2025/06/investor-insights-overview-pitchbook-h1-2025-vc-tech-survey/

2. https://www.sganalytics.com/blog/us-venture-capital-outlook-2025/

3. https://www.deloitte.com/us/en/services/audit-assurance/articles/trends-in-venture-capital.html

4. https://www.junipersquare.com/blog/vc-q1-2025

5. https://nvca.org/wp-content/uploads/2025/10/Q3-2025-PitchBook-NVCA-Venture-Monitor.pdf

Quote: Nathaniel Whittemore – AI Daily Brief

“If you want to get a preview of what everyone else is going to be dealing with six months from now, there’s basically not much better you can do than watching what developers are talking about right now.” – Nathaniel Whittemore – AI Daily Brief – On: Tailwind CSS and AI disruption

This observation captures a pattern that has repeated itself through every major technology wave of the past half-century. The people who live closest to the tools – the engineers, open source maintainers and framework authors – are usually the first to encounter both the power and the problems that the rest of the world will later experience at scale. In the current artificial intelligence cycle, that dynamic is especially clear: developers are experimenting with new models, agents and workflows months before they become mainstream in business, design and everyday work.

Nathaniel Whittemore and the AI Daily Brief

The quote comes from Nathaniel Whittemore, better known in technology circles as NLW, the host of The AI Daily Brief: Artificial Intelligence News and Analysis (formerly The AI Breakdown).4,7,9 His show has emerged as a daily digest and analytical lens on the rapid cascade of AI announcements, research papers, open source projects and enterprise case studies. Rather than purely cataloguing news, Whittemore focuses on how AI is reshaping business models, labour, creative work and the broader economy.4

Whittemore has built a reputation as an interpreter between worlds: the fast-moving communities of AI engineers and builders on the one hand, and executives, policymakers and non-technical leaders on the other. Episodes range from detailed walkthroughs of specific tools and models to long-read analyses of how organisations are actually deploying AI in the field.1,5 His recurring argument is that the most important AI stories are not just technical; they are about context, incentives and the way capabilities diffuse into real workflows.1,4

On his show and in talks, Whittemore frequently returns to the idea that AI is best understood through its users: the people who push tools to their limits, improvise around their weaknesses and discover entirely new categories of use. In recent years, that has meant tracking developers who integrate AI into code editors, build autonomous agents, or restructure internal systems around AI-native processes.3,8 The quote about watching developers is, in effect, a mental model for anyone trying to see around the next corner.

Tailwind CSS as the context for the quote

The quote lands inside a very specific story: Tailwind CSS as a case study in AI-enabled demand with AI-damaged monetisation.

Tailwind is an open-source, utility-first CSS framework that became foundational to modern front-end development. It is widely adopted by developers and heavily used by AI coding tools. Tailwind’s commercial model, however, depends on a familiar open-source pattern: the core framework is free, and revenue comes from paid add-ons (the “Plus” tier). Critically, the primary channel to market for those paid offerings was the documentation.

AI broke that channel.

As AI coding tools improved, many developers stopped visiting documentation pages. Instead, they asked the model and got the answer immediately—often derived from scraped docs and community content. Usage of Tailwind continued to grow, but the discovery path for paid offerings weakened because humans no longer needed to read the docs. In plain terms: the product stayed popular, but the funnel collapsed.

That is why this story resonated beyond CSS. It shows a broader pattern: AI can remove the need for the interface you monetise—even while it increases underlying adoption. For any business that relies on “users visit our site, then convert,” Tailwind is not a niche developer drama. It is a preview.

Tailwind’s episode makes the broader lesson uncomfortably clear: the very popularity that AI helped accelerate became harder to monetise, because fewer documentation visits meant fewer conversions to the paid “Plus” tier that funded the framework’s maintenance.

AI Disruption Seen from the Builder Front Line

In the AI era, this pattern is amplified. AI capabilities roll out as research models, APIs and open source libraries long before they are wrapped in polished consumer interfaces. Developers are often the first group to:

  • Benchmark new models, probing their strengths and failure modes.
  • Integrate them into code editors, data pipelines, content tools and internal dashboards.
  • Build specialised agents tuned to niche workflows or industry-specific tasks.6,8
  • Stress-test the economics of running models at scale and find where they can genuinely replace or augment existing systems.3,5

Whittemore’s work sits precisely at this frontier. Episodes dissect the emergence of coding agents, the economics of inference, the rise of AI-enabled “tiny teams”, and the way reasoning models are changing expectations around what software can autonomously do.3,8 He tracks how new agentic capabilities go from developer experiments to production deployments in enterprises, often in less than a year.3,5

His quote reframes this not as a curiosity but as a practical strategy: if you want to understand what your organisation or industry will be wrestling with in six to twelve months – from new productivity plateaus to unfamiliar risks – you should look closely at what AI engineers and open source maintainers are building and debating now.

Developers as Lead Users: Theoretical Roots

Behind Whittemore’s intuition sits a substantial body of innovation research. Long before AI, scholars studied why certain groups seemed to anticipate the needs and behaviours of the wider market. Several theoretical strands help explain why watching developers is so powerful.

Eric von Hippel and Lead User Theory

MIT innovation scholar Eric von Hippel developed lead user theory to describe how some users experience needs earlier and more intensely than the general market. These lead users frequently innovate on their own, building or modifying products to solve their specific problems. Over time, their solutions diffuse and shape commercial offerings.

Developers often fit this lead user profile in technology markets. They are:

  • Confronted with cutting-edge problems first – scaling systems, integrating new protocols, or handling novel data types.
  • Motivated to create tools and workflows that relieve their own bottlenecks.
  • Embedded in communities where ideas, snippets and early projects can spread quickly and be iterated upon.

Tailwind CSS itself reflects this: it emerged as a developer-centric solution to recurring front-end pain points, then radiated outward to reshape how teams approach design systems. In AI, developer-built tooling often precedes large commercial platforms, as seen with early AI coding assistants, monitoring tools and evaluation frameworks.3,8

Everett Rogers and the Diffusion of Innovations

Everett Rogers’ classic work on the diffusion of innovations describes how new ideas spread through populations in phases: innovators, early adopters, early majority, late majority and laggards. Developers often occupy the innovator or early adopter categories for digital technologies.

Rogers stressed that watching these early groups offers a glimpse of future mainstream adoption. Their experiments reveal not only whether a technology is technically possible, but how it will be framed, understood and integrated into social systems. In AI, the debates developers have about safety, guardrails, interpretability and tooling are precursors to the regulatory, ethical and organisational questions that follow at scale.4,5

Clayton Christensen and Disruptive Innovation

Clayton Christensen’s theory of disruptive innovation emphasises how new technologies often begin in niches that incumbents overlook. Early adopters tolerate rough edges because they value new attributes – lower cost, flexibility, or a different performance dimension – that established customers do not yet prioritise.

AI tools and frameworks frequently begin life like this: half-finished interfaces wrapped around powerful primitives, attractive primarily to technical users who can work around their limitations. Developers discover where these tools are genuinely good enough, and in doing so, they map the path by which a once-nascent capability becomes a serious competitive threat.

Open Source Communities and Collective Foresight

Another important line of thinking comes from research on open source software and user-driven innovation. Scholars such as Steven Weber and Yochai Benkler have explored how distributed communities coordinate to build complex systems without traditional firm structures.

These communities act as collective sensing networks. Bug reports, pull requests, issue threads and design discussions form a live laboratory where emerging practices are tested and refined. In AI, this is visible in the rapid evolution of open weights models, fine-tuning techniques, evaluation harnesses and orchestration frameworks. The tempo of progress in these spaces often sets the expectations which commercial vendors then have to match or exceed.6,8

AI-Specific Perspectives: From Labs to Production

Beyond general innovation theory, several contemporary AI thinkers and practitioners shed light on why developer conversations are such powerful predictors.

Andrej Karpathy and the Software 2.0 Vision

Former Tesla AI director Andrej Karpathy popularised the term “Software 2.0” to describe a shift from hand-written rules to learned neural networks. In this paradigm, developers focus less on explicit logic and more on data curation, model selection and feedback loops.

Under a Software 2.0 lens, developers are again early indicators. They experiment with prompt engineering, fine-tuning, retrieval-augmented generation and multi-agent systems. Their day-to-day struggles – with context windows, hallucinations, latency and cost-performance trade-offs – foreshadow the operational questions businesses later face when they automate processes or embed AI in products.

Ian Goodfellow, Yoshua Bengio and Deep Learning Pioneers

Deep learning pioneers such as Ian Goodfellow, Yoshua Bengio and Geoffrey Hinton illustrated how research breakthroughs travel from lab settings into practical systems. What began as improvements on benchmark datasets and academic competitions became, within a few years, the foundation for translation services, recommendation engines, speech recognition and image analysis.

Developers building on these techniques were the bridge between research and industry. They discovered how to deploy models at scale, handle real-world data, and integrate AI into existing stacks. In today’s generative AI landscape, the same dynamic holds: frontier models and architectures are translated into frameworks, SDKs and reference implementations by developer communities, and only then absorbed into mainstream tools.

AI Engineers and the Rise of Agents

Recent work at the intersection of AI and software engineering has focused on agents: AI systems that can plan, call tools, write and execute code, and iteratively refine their own outputs. Industry reports summarised on The AI Daily Brief highlight how executives are beginning to grasp the impact of these agents on workflows and organisational design.5

Yet developers have been living with these systems for longer. They are the ones:

  • Embedding agents into CI/CD pipelines and testing regimes.
  • Using them to generate and refactor large codebases.3,6
  • Designing guardrails and permissions to keep them within acceptable bounds.
  • Developing evaluation harnesses to measure quality, robustness and reliability.8

Their experiments and post-mortems provide an unvarnished account of both the promise and the fragility of agentic systems. When Whittemore advises watching what developers are talking about, this is part of what he means: the real-world friction points that will later surface as board-level concerns.

Context, Memory and Business Adoption

Whittemore has also emphasised how advances in context and memory – the ability of AI systems to integrate and recall large bodies of information – are changing what is possible in the enterprise.1 He highlights features such as:

  • Tools that allow models to access internal documents, code repositories and communication platforms securely, enabling organisation-specific reasoning.1
  • Modular context systems that let AI draw on different knowledge packs depending on the task.1
  • Emerging expectations that AI should “remember” ongoing projects, preferences and constraints rather than treating each interaction as isolated.1

Once again, developers are at the forefront. They are wiring these systems into data warehouses, knowledge graphs and production applications. They see early where context systems break, where privacy models need strengthening, and where the productivity gains are real rather than speculative.

From there, insights filter into broader business discourse: about data governance, AI strategy, vendor selection and the design of AI-native workflows. The lag between developer experience and executive recognition is, in Whittemore’s estimate, often measured in months – hence his six-month framing.

From Developer Talk to Strategic Foresight

The core message behind the quote is a practical discipline for anyone thinking about AI and software-driven change:

  • Follow where developers invest their time. Tools that inspire side projects, plugin ecosystems and community events often signal deeper shifts in how work will be done.
  • Listen to what frustrates them. Complaints about context limits, flaky APIs or insufficient observability reveal where new infrastructure, standards or governance will be needed.
  • Pay attention to what they take for granted. When a capability stops being exciting and becomes expected – instant code search, semantic retrieval, AI-assisted refactoring – it is often a sign that broader expectations in the market will soon adjust.
  • Watch the crossovers. When developer patterns show up in no-code tools, productivity suites or design platforms, the wave is moving from early adopters to the early majority.

Nathaniel Whittemore’s work with The AI Daily Brief is, in many ways, a structured practice of this approach. By curating, analysing and contextualising what builders are doing and saying in real time, he offers a way for non-technical leaders to see the outlines of the future before it is evenly distributed.4,7,9 The Tailwind CSS example is one case; the ongoing wave of AI disruption is another. The constant, across both, is that if you want to know what is coming next, you start by watching the people building it.

 

References

1. https://pod.wave.co/podcast/the-ai-daily-brief-formerly-the-ai-breakdown-artificial-intelligence-news-and-analysis/ai-context-gets-a-major-upgrade

2. https://www.youtube.com/watch?v=MdfYA3xv8jw

3. https://www.youtube.com/watch?v=0EDdQchuWsA

4. https://podcasts.apple.com/us/podcast/the-ai-daily-brief-artificial-intelligence-news/id1680633614

5. https://www.youtube.com/watch?v=nDDWWCqnR60

6. https://www.youtube.com/watch?v=f34QFs7tVjg

7. https://open.spotify.com/show/7gKwwMLFLc6RmjmRpbMtEO

8. https://podcasts.apple.com/us/podcast/the-biggest-trends-from-the-ai-engineer-worlds-fair/id1680633614?i=1000711906377

9. https://www.audible.com/podcast/The-AI-Breakdown-Daily-Artificial-Intelligence-News-and-Discussions/B0C3Q4BG17

 

Term: Simple Moving Average (SMA)

“Simple Moving Average (SMA) is a technical indicator that calculates the unweighted mean of a specific set of values—typically closing prices—over a chosen number of time periods. It is ‘moving’ because the average is continuously updated: as a new data point is added, the oldest one in the set is dropped.” – Simple Moving Average (SMA)

Simple Moving Average (SMA) is a fundamental technical indicator in financial analysis and trading, calculated as the unweighted arithmetic mean of a security’s closing prices over a specified number of time periods, continuously updated by incorporating the newest price and excluding the oldest.1,2,3

Calculation and Formula

The SMA for a period of \( n \) days is given by:

\[ \text{SMA}_n = \frac{P_t + P_{t-1} + \cdots + P_{t-n+1}}{n} \]

where \( P_t \) represents the closing price at time \( t \).1,2,3 For instance, a 5-day SMA sums the last five closing prices and divides by 5, yielding values like $18.60 from sample prices of $13, $18, $18, $20, and $24.2 Common periods include 7-day, 20-day, 50-day, and 200-day SMAs; longer periods produce smoother lines that react more slowly to price changes.1,5
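
As a quick sanity check on the arithmetic, a minimal pandas sketch using the sample prices from the example above reproduces the 5-day value:

```python
import pandas as pd

prices = pd.Series([13, 18, 18, 20, 24], dtype=float)  # sample closing prices
sma_5 = prices.rolling(window=5).mean()                # unweighted mean of the last five closes
print(sma_5.iloc[-1])  # 18.6, matching the worked example
```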

Applications in Trading

SMAs smooth price fluctuations to reveal underlying trends: prices above the SMA indicate an uptrend, while prices below signal a downtrend.1,4 Key uses include:

  • Trend identification: The SMA’s slope shows trend direction and strength.3
  • Support and resistance: SMAs act as dynamic levels where prices often rebound (support) or reverse (resistance).1,5
  • Crossover signals (illustrated in the sketch below): a Golden Cross occurs when a shorter-term SMA (e.g., 5-day) crosses above a longer-term SMA (e.g., 20-day), suggesting a buy; a Death Cross occurs when the shorter-term SMA crosses below the longer-term SMA, indicating a sell.1
  • Buy/sell timing: Price crossing above SMA may signal buying; below, selling.2,4

As a lagging indicator relying on historical data, SMA equal-weights all points, unlike the Exponential Moving Average (EMA), which prioritises recent prices for greater responsiveness.2
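
As an illustration of the crossover signals described above, a short pandas sketch can flag golden and death crosses; the prices here are synthetic and the 5-period and 20-period windows are illustrative choices only.

```python
import numpy as np
import pandas as pd

# Synthetic closing prices for illustration only
rng = np.random.default_rng(0)
close = pd.Series(100 + np.cumsum(rng.normal(0.2, 1.0, 120)))

sma_short = close.rolling(window=5).mean()   # shorter-term SMA
sma_long = close.rolling(window=20).mean()   # longer-term SMA

above = sma_short > sma_long
prev_above = above.shift(1, fill_value=False).astype(bool)
golden_cross = above & ~prev_above   # short SMA crosses above long SMA
death_cross = ~above & prev_above    # short SMA crosses below long SMA

print("golden crosses at:", list(close.index[golden_cross]))
print("death crosses at:", list(close.index[death_cross]))
```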

Best Related Strategy Theorist: Richard Donchian

Richard Donchian (1905–1997), often called the “father of trend following,” pioneered systematic trading strategies incorporating moving averages, including early SMA applications, through his development of trend-following systems in the mid-20th century.

Born in Hartford, Connecticut, to Armenian immigrant parents, Donchian graduated from Yale University in 1928 with a degree in economics. He began his career at A.A. Housman & Co. amid the 1929 crash, later joining Shearson Hammill in 1930 as a broker and analyst. Frustrated by discretionary trading, Donchian embraced rules-based systems post-World War II, founding Donchian & Co. in 1949 as the first commodity trading fund manager.

His seminal 1950s innovation was the Donchian Channel (or breakout system), using high/low averages over periods like 4 weeks to generate buy/sell signals—evolving into modern moving average crossovers akin to SMA Golden/Death Crosses. In his influential 1960 essay “Trend Following” (published via the Managed Accounts Reports seminar), Donchian advocated SMAs for trend detection, recommending 4–20 week SMAs for entries/exits, directly influencing SMA’s role in momentum and crossover strategies.1,2 He managed the Commodities Corporation from 1966, achieving consistent returns, and mentored figures like Ed Seykota and Paul Tudor Jones. Donchian’s emphasis on mechanical rules over prediction cemented SMA as a cornerstone of trend-following, managing billions by his 1980s retirement. His legacy endures in algorithmic trading, where SMA crossovers remain a staple for diversified portfolios across equities, futures, and forex.1,5,6

References

1. https://www.alphavantage.co/simple_moving_average_sma/

2. https://corporatefinanceinstitute.com/resources/career-map/sell-side/capital-markets/simple-moving-average-sma/

3. https://toslc.thinkorswim.com/center/reference/Tech-Indicators/studies-library/R-S/SimpleMovingAvg

4. https://www.youtube.com/watch?v=TRy9InVeFc8

5. https://www.schwab.com/learn/story/how-to-trade-simple-moving-averages

6. https://www.cmegroup.com/education/courses/technical-analysis/understanding-moving-averages.html

read more
Quote: Blackrock

Quote: Blackrock

“AI’s buildout is also happening at a potentially unprecedented speed and scale. This shift to capital-intensive growth from capital-light, is profoundly changing the investment environment – and pushing limits on multiple fronts, physical, financial and socio-political.” – Blackrock

The quote highlights BlackRock’s observation that artificial intelligence (AI) infrastructure development is advancing at an extraordinary pace and magnitude, shifting economic growth models from low-capital-intensity (e.g., software-driven scalability) to high-capital demands, while straining physical infrastructure like power grids, financial systems through massive leverage needs, and socio-political frameworks amid geopolitical tensions.1,2

Context of the Quote

This statement emerges from BlackRock’s 2026 Investment Outlook, published by the BlackRock Investment Institute (BII), the firm’s research arm focused on macro trends and portfolio strategy. It encapsulates discussions from BlackRock’s internal 2026 Outlook Forum in late 2025, where AI’s “buildout”—encompassing data centers, chips, and energy infrastructure—dominated debates among portfolio managers.2 Key concerns included front-loaded capital expenditures (capex) estimated at $5-8 trillion globally through 2030, creating a “financing hump” as revenues lag behind spending, potentially requiring increased leverage in an already vulnerable financial system.1,3,5 Physical limits like compute capacity, materials, and especially U.S. power grid strain were highlighted, with AI data centers projected to drive massive electricity demand amid U.S.-China strategic competition.2 Socio-politically, it ties into “mega forces” like geopolitical fragmentation, blurring public-private boundaries (e.g., via stablecoins), and policy shifts from inflation control to neutral stances, fostering market dispersion where only select AI beneficiaries thrive.2,4 BlackRock remains pro-risk, overweighting U.S. AI-exposed stocks, active strategies, private credit, and infrastructure while underweighting long-term Treasuries.1,5

BlackRock and the Quoted Perspective

BlackRock, the world’s largest asset manager with nearly $14 trillion in assets under management as of late 2025, issues annual outlooks to guide institutional and retail investors.3 The quote aligns with BII’s framework of “mega forces”—structural shifts like AI, geopolitics, and financial evolution—launched years prior to frame investments in a fragmented macro environment.2 Key voices include Rick Rieder, BlackRock’s Chief Investment Officer of Fixed Income, who in related 2026 insights emphasized AI as a “cost and margin story,” potentially slashing labor costs (55% of business expenses) by 5%, unlocking $1.2 trillion in annual U.S. savings and $82 trillion in present-value corporate profits.4 BII analysts note AI’s speed surpasses prior tech waves, with capex ambitions making “micro macro,” though uncertainties persist on revenue capture by tech giants versus broader dispersion.1,3

Backstory on Leading Theorists of AI’s Economic Transformation

The quote draws on decades of economic theory about technological revolutions, capital intensity, and growth limits, pioneered by thinkers who analyzed how innovations like electrification and computing reshaped productivity, investment, and society.

  • Robert Gordon (The Rise and Fall of American Growth, 2016): Gordon, an NBER economist, argues U.S. productivity growth has stagnated since 1970 (averaging ~2% annually over 150 years) due to diminishing returns from past innovations like electricity and sanitation, contrasting AI’s potential but warning of “hump”-like front-loaded costs without guaranteed back-loaded gains—mirroring BlackRock’s financing concerns.3,4

  • Erik Brynjolfsson and Andrew McAfee (The Second Machine Age, 2014; Machine, Platform, Crowd, 2017): MIT scholars at the Initiative on the Digital Economy posit AI enables exponential productivity via automation of cognitive tasks, shifting from capital-light digital scaling to infrastructure-heavy buildouts (e.g., data centers), but predict “recombination” winners amid labor displacement and inequality—echoing BlackRock’s dispersion and socio-political strains.4

  • Daron Acemoglu and Simon Johnson (Power and Progress, 2023): MIT economists critique tech optimism, asserting AI’s direction depends on institutional choices; undirected buildouts risk elite capture and gridlock (physical/financial limits), not broad prosperity, aligning with BlackRock’s U.S.-China rivalry and policy debates.2

  • Nicholas Crafts (historical productivity scholar): Building on 20th-century analyses, Crafts documented electrification’s 1920s-1930s “productivity paradox”—decades of heavy capex before payoffs—paralleling AI’s current phase, where investments outpace adoption.1

  • Jensen Huang (NVIDIA CEO, practitioner-theorist): While not academic, Huang’s 2024-2025 forecasts of $1 trillion+ annual AI capex by 2030 popularized the “buildout” narrative, influencing BlackRock’s scale estimates and energy focus.3,5

These theorists underscore AI as a capital-intensive pivot akin to the Second Industrial Revolution, but accelerated, with BlackRock synthesizing their ideas into actionable investment amid 2025-2026 market highs (e.g., Nasdaq peaks) and volatility (e.g., tech routs).2,3

References

1. https://www.blackrock.com/americas-offshore/en/insights/blackrock-investment-institute/outlook

2. https://www.medirect.com.mt/updates/news/all-news/blackrock-commentary-ai-front-and-center-at-our-2026-forum/

3. https://www.youtube.com/watch?v=Ww7Zy3MAWAs

4. https://www.blackrock.com/us/financial-professionals/insights/investing-in-2026

5. https://www.blackrock.com/us/financial-professionals/insights/ai-stocks-alternatives-and-the-new-market-playbook-for-2026

6. https://www.blackrock.com/corporate/insights/blackrock-investment-institute/publications/outlook

read more
Term: The VIX

Term: The VIX

VIX is the ticker symbol and popular name for the CBOE Volatility Index, a popular measure of the stock market’s expectation of volatility based on S&P 500 index options. It is calculated and disseminated on a real-time basis by the CBOE, and is often referred to as the fear index. – The VIX

**The VIX, or CBOE Volatility Index (ticker symbol ^VIX), measures the market’s expectation of *30-day forward-looking volatility* for the S&P 500 Index, calculated in real-time from the weighted prices of S&P 500 (SPX) call and put options across a wide range of strike prices.** Often dubbed the “fear index”, it quantifies implied volatility as a percentage, reflecting investor uncertainty and anticipated price swings—higher values signal greater expected turbulence, while lower values indicate calm markets.1,2,3,4,5

Key Characteristics and Interpretation

  • Calculation method: The VIX derives from the midpoints of real-time bid/ask prices for near-term SPX options (typically the first and second expirations). It aggregates implied variances across strikes, interpolates to a constant 30-day horizon, takes the square root to obtain a standard deviation, and multiplies by 100 to express annualised implied volatility. At a 68% (one-standard-deviation) confidence level, a VIX of 13.77 implies an expected move of roughly ±13.77% over the next year, or a scaled-down equivalent over shorter horizons such as 30 days (see the sketch after this list).1,3
  • Market signal: It inversely correlates with the S&P 500—rising during stress (e.g., readings above 30 signal expectations of extreme swings; the index peaked near 85 in the 2008 crisis) and falling in stability. The long-term average is ~18.47; below 20 suggests moderate risk, while readings under 15 may hint at complacency.1,2,4
  • Uses: Traders gauge sentiment, hedge positions, or trade VIX futures, options, and related products. It reflects option premiums as “insurance” costs, not historical volatility.1,2,5
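
As a rough illustration of the interpretation in the first bullet above, the sketch below converts an annualised VIX reading into an approximate one-standard-deviation move over a shorter horizon using square-root-of-time scaling. It is an intuition aid only, not the Cboe's full variance-interpolation methodology.

```python
import math

def expected_move(vix_level, days, calendar_days=365):
    """Approximate one-standard-deviation percentage move implied by an
    annualised VIX reading over `days`, via square-root-of-time scaling."""
    return vix_level * math.sqrt(days / calendar_days)

# A VIX of 13.77 implies roughly a +/-3.9% one-sigma move over 30 days
print(round(expected_move(13.77, 30), 2))  # ~3.95
```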

Historical Context and Levels

| VIX Range | Interpretation | Example Context |
|---|---|---|
| 0–15 | Optimism, low volatility | Normal bull markets2 |
| 15–25 | Moderate volatility | Typical conditions2 |
| 25–30 | Turbulence, waning confidence | Pre-crisis jitters2 |
| 30+ | High fear, extreme swings | 2008 crisis (>50%)1 |

Extreme spikes are short-lived as traders adjust exposures.1,4

Best Related Strategy Theorist: Sheldon Natenberg

Sheldon Natenberg stands out as the premier theorist linking volatility strategies to indices like the VIX, through his seminal work Option Volatility and Pricing (first published 1988, McGraw-Hill; updated editions ongoing), a cornerstone for professionals trading volatility via options—the core input for VIX calculation.1,3

Biography: Born in the US, Natenberg began as a pit trader on the Chicago Board Options Exchange (CBOE) floor in the 1970s-1980s, during the explosive growth of listed options post-1973 CBOE founding. He traded equity and index options, honing expertise in volatility dynamics amid early market innovations. By the late 1980s, he distilled decades of floor experience into his book, which demystifies implied volatility surfaces, vega (volatility sensitivity), volatility skew, and strategies like straddles/strangles—directly underpinning VIX methodology introduced in 1993.3 Post-trading, Natenberg became a senior lecturer at the Options Institute (CBOE’s education arm), training thousands of traders until retiring around 2010. He consults and speaks globally, influencing modern vol trading.

Relationship to VIX: Natenberg’s framework predates and informs VIX computation, emphasising how option prices embed forward volatility expectations—precisely what the VIX aggregates from SPX options. His models for pricing under volatility regimes (e.g., mean-reverting processes) guide VIX interpretation and trading (e.g., volatility arbitrage). Traders rely on his “vol cone” and skew analysis to contextualise VIX spikes, making his work indispensable for “fear index” strategies. No other theorist matches his practical CBOE-rooted fusion of volatility theory and VIX-applied tactics.1,2,3,4

References

1. https://corporatefinanceinstitute.com/resources/career-map/sell-side/capital-markets/vix-volatility-index/

2. https://www.nerdwallet.com/investing/learn/vix

3. https://www.td.com/ca/en/investing/direct-investing/articles/understanding-vix

4. https://www.ig.com/en/indices/what-is-vix-how-do-you-trade-it

5. https://www.cboe.com/tradable-products/vix/

6. https://www.fidelity.com.sg/beginners/what-is-volatility/volatility-index

7. https://www.youtube.com/watch?v=InDSxrD4ZSM

8. https://www.spglobal.com/spdji/en/education-a-practitioners-guide-to-reading-vix.pdf


read more
Quote: Blackrock

Quote: Blackrock

“AI is not only an innovation itself but has the potential to accelerate other innovation.” – Blackrock

This quote originates from BlackRock’s 2026 Investment Outlook published by its Investment Institute, emphasizing AI’s dual role as a transformative technology and a catalyst for broader innovation across sectors like connectivity, security, and physical automation.6 BlackRock positions AI as a “mega force” driving digital disruption, with potential to automate tasks, enhance productivity, and unlock economic growth by enabling faster advancements in other fields.5,6

Context of the Quote

The statement reflects BlackRock’s strategic focus on AI as a cornerstone of long-term investment opportunities amid rapid technological evolution. In the 2026 Investment Outlook, BlackRock highlights AI’s capacity to go beyond task automation, fostering an “intelligence revolution” that amplifies innovation in interconnected technologies.1,6 This aligns with BlackRock’s actions, including launching active ETFs like the iShares A.I. Innovation and Tech Active ETF (BAI), which targets 20-40 global AI companies across infrastructure, models, and applications to capture growth in the AI stack.1,8 Tony Kim, head of BlackRock’s fundamental equities technology group, described this as seizing “outsized and overlooked investment opportunities across the full stack of AI and advanced technologies.”1 Similarly, the firm views active ETFs as the “next frontier in investment innovation,” expanding access to AI-driven returns.1

BlackRock’s commitment extends to massive infrastructure investments. In 2024, it co-founded the Global AI Infrastructure Investment Partnership (GAIIP, later AIP) with Global Infrastructure Partners (GIP), Microsoft, and MGX, aiming to mobilize up to $100 billion for U.S.-focused data centers and power infrastructure to support AI scaling.2,3,9 Larry Fink, BlackRock’s Chairman and CEO, stated these investments “will help power economic growth, create jobs, and drive AI technology innovation,” underscoring AI’s role in revitalizing economies.2 By 2025, NVIDIA and xAI joined AIP, reinforcing its open-architecture approach to accelerate AI factories and supply chains.3 BlackRock executives like Alex Brazier argue AI investments face no bubble risk; instead, capacity constraints in computing power and data centers demand more capital.4

BlackRock’s Backstory and Leadership

BlackRock, the world’s largest asset manager with $11.5 trillion in assets, evolved from a fixed-income specialist founded in 1988 by Larry Fink and partners at Blackstone into a global powerhouse after its 1994 spin-off and 2009 Barclays acquisition.2 Under Fink’s leadership since inception, BlackRock pioneered ETFs via iShares (acquired 2009) and Aladdin risk-management software, managing $32 billion in U.S. active ETFs.1 Its AI strategy integrates proprietary insights from the BlackRock Investment Institute, which identifies AI as interplaying with other “mega forces” like geopolitics and sustainability.5,6 Fink, a mortgage-backed securities innovator during the 1980s savings-and-loan crisis, has championed infrastructure and tech since steering BlackRock public in 1999; his AIP comments frame AI as a multi-trillion-dollar opportunity.2,3

Leading Theorists on AI as an Innovation Accelerator

The idea of AI accelerating other innovations traces to foundational thinkers in technology diffusion, general-purpose technologies (GPTs), and computational economics:

  • Erik Brynjolfsson and Andrew McAfee (MIT): In The Second Machine Age (2014) and subsequent works, they argue AI as a GPT—like electricity—initially boosts productivity slowly but then accelerates innovation across industries by enabling data-driven decisions and automation.5,6 Their research quantifies AI’s “exponential” complementarity, where it amplifies human ingenuity in fields like biotech and materials science.

  • Bengt Holmström and Paul Milgrom (Nobel 2019): Their principal-agent theories underpin AI’s role in aligning incentives for innovation; AI reduces information asymmetries, speeding R&D in multi-agent systems like supply chains.2

  • Jensen Huang (NVIDIA CEO): A practitioner-theorist, Huang describes accelerated computing and generative AI as powering the “next industrial revolution,” converting data into intelligence to propel every industry—echoed in his AIP role.2,3

  • Satya Nadella (Microsoft CEO): Frames AI as driving “growth across every sector,” with infrastructure as the enabler for breakthroughs, aligning with BlackRock’s partnerships.2

  • Historical roots: Building on Solow’s productivity paradox (1987)—why computers took decades to boost growth—theorists like Robert Gordon contrast narrow tech impacts with AI’s potential for broad acceleration, as BlackRock’s outlook affirms.6

These perspectives inform BlackRock’s view: AI isn’t isolated but a multiplier, demanding infrastructure to realize its full accelerative power.1,2,6

References

1. https://www.investmentnews.com/etfs/blackrock-broadens-active-etf-shelf-with-ai-and-tech-funds/257815

2. https://news.microsoft.com/source/2024/09/17/blackrock-global-infrastructure-partners-microsoft-and-mgx-launch-new-ai-partnership-to-invest-in-data-centers-and-supporting-power-infrastructure/

3. https://ir.blackrock.com/news-and-events/press-releases/press-releases-details/2025/BlackRock-Global-Infrastructure-Partners-Microsoft-and-MGX-Welcome-NVIDIA-and-xAI-to-the-AI-Infrastructure-Partnership-to-Drive-Investment-in-Data-Centers-and-Enabling-Infrastructure/default.aspx

4. https://getcoai.com/news/blackrock-exec-says-ai-investments-arent-in-a-bubble-capacity-is-the-real-problem/

5. https://www.blackrock.com/corporate/insights/blackrock-investment-institute/publications/mega-forces/artificial-intelligence

6. https://www.blackrock.com/corporate/insights/blackrock-investment-institute/publications/outlook

7. https://www.blackrock.com/uk/individual/products/339936/blackrock-ai-innovation-fund

8. https://www.blackrock.com/us/financial-professionals/products/339081/ishares-a-i-innovation-and-tech-active-etf

9. https://www.global-infra.com/news/mgx-blackrock-global-infrastructure-partners-and-microsoft-welcome-kuwait-investment-authority-kia-to-the-ai-infrastructure-partnership/

read more
Term: Covered call

Term: Covered call

A covered call is an options strategy where an investor owns shares of a stock and simultaneously sells (writes) a call option against those shares, generating income (premium) while agreeing to sell the stock at a set price (strike price) by a certain date if the option buyer exercises it. – Covered call

**A covered call pairs a long stock position with a short call option written against those shares: the investor collects the option premium as income in exchange for agreeing to sell the stock at the strike price if the option is exercised.**1,2,3

Key Components and Mechanics

  • Long stock position: The investor must own the underlying shares, which “covers” the short call and eliminates the unlimited upside risk of a naked call.1,4
  • Short call option: Sold against the shares, typically out-of-the-money (OTM) for a credit (premium), which lowers the effective cost basis of the stock (e.g., stock bought at $45 minus $1 premium = $44 breakeven).1,4
  • Outcomes at expiration:
    • If the stock price remains below the strike: the call expires worthless; the investor retains the shares and the full premium.1,3
    • If the stock rises above the strike: the shares are called away at the strike price; the investor keeps the premium plus gains up to the strike but forfeits further upside.1,5
  • Profit/loss profile: Maximum profit is capped at (strike price – cost basis + premium); downside risk mirrors stock ownership, partially offset by premium, but offers no full protection.1,5

Example

Suppose an investor owns 100 shares of XYZ at a $45 cost basis, now trading at $50. They sell one $55-strike call for $1 premium ($100 credit):

  • Effective cost basis: $44.
  • Breakeven: $44.
  • Max profit: $1,100 if called away at $55.
  • Max loss: $4,400 if the stock falls to $0—downside mirrors stock ownership and is substantial, though not unlimited, with the $1 premium providing only a small cushion.1
| Scenario | Stock Price at Expiry | Outcome | Profit/Loss per Share |
|---|---|---|---|
| Below strike | $50 | Call expires; keep shares and premium | +$1 premium (plus unrealised gain on retained shares) |
| At strike | $55 | Called away; keep premium plus gains to strike | +$11 ($55 − $45 + $1) |
| Above strike | $60 | Called away; upside capped at strike | +$11 (same as above) |
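
The example and table above can be reproduced with a short payoff calculator. This is a minimal sketch assuming the same $45 cost basis, $55 strike, and $1 premium, ignoring commissions, dividends, and early assignment; figures are per share and include unrealised gains on shares that are not called away.

```python
def covered_call_pl(stock_at_expiry, cost_basis=45.0, strike=55.0, premium=1.0):
    """Total per-share profit/loss of a covered call at expiration:
    long stock at cost_basis, short call at strike, premium received."""
    stock_pl = stock_at_expiry - cost_basis                 # gain/loss on the shares
    call_pl = premium - max(stock_at_expiry - strike, 0.0)  # premium kept minus payout owed
    return stock_pl + call_pl

for price in (0.0, 44.0, 50.0, 55.0, 60.0):
    print(f"{price:>5.0f}: {covered_call_pl(price):+.2f}")
# 0: -44 (max loss), 44: 0 (breakeven), 50: +6, 55: +11 (max profit), 60: +11 (capped)
```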

Advantages and Risks

  • Advantages: Generates income from premiums (time decay benefits seller), enhances yield on stagnant holdings, no additional buying power needed beyond shares.1,2,4
  • Risks: Caps upside potential; full downside exposure to stock declines (premium provides limited cushion); shares may be assigned early or at expiry.1,5

Variations

  • Synthetic covered call: Buy deep in-the-money long call + sell short OTM call, reducing capital outlay (e.g., $4,800 vs. $10,800 traditional).2

Best Related Strategy Theorist: William O’Neil

William J. O’Neil (1933–2023) is the most relevant theorist linked to the covered call strategy through his pioneering work on CAN SLIM, a growth-oriented investing system that emphasises high-momentum stocks ideal for income-overlay strategies like covered calls. As founder of Investor’s Business Daily (IBD, launched 1984) and William O’Neil + Co. Inc. (1963), he popularised data-driven stock selection using historical price/volume analysis of market winners since 1880, making his methodology foundational for selecting underlyings in covered calls that balance income with growth potential.

Biography and Relationship to Covered Calls

O’Neil began as a stockbroker at Hayden, Stone & Co. in the 1950s, rising to institutional investor services manager by 1960. Frustrated by inconsistent advice, he founded William O’Neil + Co. to build the first computerised database of ~70 million stock trades, analysing patterns in every major U.S. winner. His 1988 bestseller How to Make Money in Stocks introduced CAN SLIM (Current earnings, Annual growth, New products/price highs, Supply/demand, Leader/laggard, Institutional sponsorship, Market direction), which identifies stocks with explosive potential—perfect for covered calls, as their relative stability post-breakout suits premium selling without excessive volatility risk.

O’Neil’s direct tie to options: through IBD’s Leaderboard and MarketSmith tools, he advocated “buy-and-hold with income enhancement” via covered calls on CAN SLIM leaders, recommending OTM calls on holdings to boost yields (e.g., 2–5% monthly premiums). AAII (American Association of Individual Investors) tracking of CAN SLIM-style screens has shown roughly threefold outperformance of the market, providing a robust base for the strategy’s income-plus-moderate-growth profile. A self-made millionaire by 30 (via an early Xerox investment), O’Neil built an empirical approach—avoiding speculation, focusing on facts—that contrasts with pure options theorists, positioning covered calls as a conservative overlay on his core equity model. He retired from daily IBD operations in 2015, and books such as 24 Essential Lessons for Investment Success (2000), which nods to options income tactics, keep his influence alive.

References

1. https://tastytrade.com/learn/trading-products/options/covered-call/

2. https://leverageshares.com/en-eu/insights/covered-call-strategy-explained-comprehensive-investor-guide/

3. https://www.schwab.com/learn/story/options-trading-basics-covered-call-strategy

4. https://www.stocktrak.com/what-is-a-covered-call/

5. https://www.swanglobalinvestments.com/what-is-a-covered-call/

6. https://www.youtube.com/watch?v=wwceg3LYKuA

7. https://www.youtube.com/watch?v=NO8VB1bhVe0


read more
Quote: Kaoutar El Maghraoui

Quote: Kaoutar El Maghraoui

“We can’t keep scaling compute, so the industry must scale efficiency instead.” – Kaoutar El Maghraoui – IBM Principal Research Scientist


This quote underscores a pivotal shift in AI development: as raw computational power reaches physical and economic limits, the focus must pivot to efficiency through optimized hardware, software co-design, and novel architectures like analog in-memory computing.1,2

Backstory and Context of Kaoutar El Maghraoui

Dr. Kaoutar El Maghraoui is a Principal Research Scientist at IBM’s T.J. Watson Research Center in Yorktown Heights, NY, where she leads the AI testbed at the IBM Research AI Hardware Center—a global hub advancing next-generation accelerators and systems for AI workloads.1,2 Her work centers on the intersection of systems research and artificial intelligence, including distributed systems, high-performance computing (HPC), and AI hardware-software co-design. She drives open-source development and cloud experiences for IBM’s digital and analog AI accelerators, emphasizing operationalization of AI in hybrid cloud environments.1,2

El Maghraoui’s career trajectory reflects deep expertise in scalable systems. She earned her PhD in Computer Science from Rensselaer Polytechnic Institute (RPI) in 2007, following a Master’s in Computer Networks (2001) and Bachelor’s in General Engineering from Al Akhawayn University, Morocco. Early roles included lecturing at Al Akhawayn and research on IBM’s AIX operating system—covering performance tuning, multi-core scheduling, Flash SSD storage, and OS diagnostics using IBM Watson cognitive tech.2,6 In 2017, she co-led IBM’s Global Technology Outlook, shaping the company’s AI leadership vision across labs and units.1,2

The quote emerges from her lectures and research on efficient AI deployment, such as “Powering the Future of Efficient AI through Approximate and Analog In-Memory Computing,” which addresses performance bottlenecks in deep neural networks (DNNs), and “Platform for Next-Generation Analog AI Hardware Acceleration,” highlighting Analog In-Memory Computing (AIMC) to reduce energy losses in DNN inference and training.1 It aligns with her 2026 co-authored paper “STARC: Selective Token Access with Remapping and Clustering for Efficient LLM Decoding on PIM Systems” (ASPLOS 2026), targeting efficiency in large language models via processing-in-memory (PIM).2 With more than 2,000 citations on Google Scholar, her contributions span AI hardware optimization and performance.8

Beyond research, El Maghraoui is an ACM Distinguished Member and Speaker, Senior IEEE Member, and adjunct professor at Columbia University. She holds awards like the 2021 Best of IBM, IBM Eminence and Excellence for advancing women in tech, 2021 IEEE TCSVC Women in Service Computing, and 2022 IBM Technical Corporate Award. Leadership roles include global vice-chair of Arab Women in Computing (ArabWIC), co-chair of IBM Research Watson Women Network (2019-2021), and program/general co-chair for Grace Hopper Celebration (2015-2016).1,2

Leading Theorists in AI Efficiency and Compute Scaling Limits

The quote resonates with foundational theories on compute scaling limits and efficiency paradigms, pioneered by key figures challenging Moore’s Law extensions in AI hardware.

| Theorist | Key Contributions | Relevance to Quote |
|---|---|---|
| Cliff Young and colleagues (Google) | Co-authored the in-datacenter TPU performance analysis and the MLPerf benchmarks; advanced hardware-aware neural architecture search (NAS) for DNN optimization on edge devices.1 | Demonstrates efficiency gains via NAS, directly echoing El Maghraoui’s lectures on hardware-specific DNN design to bypass compute scaling.1 |
| Bill Dally (NVIDIA) | Pioneer of energy-efficient architectures, including processing-in-memory (PIM) and sparsity techniques, amid the “end of Dennard scaling” (post-2000s power-density limits).2 | Warns against endless compute scaling; promotes PIM and sparsity, aligning with El Maghraoui’s STARC paper and analog accelerators.2 |
| Jeff Dean (Google) | Co-developed TensorFlow and TPUs for efficient training and inference; prominent advocate of compute-optimal scaling, as in DeepMind’s 2022 Chinchilla results showing compute is best balanced between parameters and data.2 | Highlights diminishing returns of pure compute scaling, urging efficiency in training/inference—core to IBM’s AI Hardware Center focus.1,2 |
| Hadi Esmaeilzadeh (Georgia Tech) | Quantified AI’s “memory wall” and von Neumann bottlenecks; advanced approximate computing and in-memory acceleration concepts underpinning analog in-memory computing (AIMC).1 | Foundational for El Maghraoui’s AIMC advocacy, showing analog methods can boost DNN efficiency by 10–100x over digital compute scaling.1 |
| Song Han (MIT) | Developed pruning, quantization, and NAS techniques (e.g., Deep Compression, TinyML/MCUNet), showing 90%+ parameter reduction without accuracy loss.1 | Enables “scaling efficiency” for real-world deployment, as in El Maghraoui’s “Optimizing Deep Learning for Real-World Deployment” lecture.1 |

These theorists collectively established that post-Moore’s Law (transistor density doubling every ~2 years, slowing since 2010s), AI progress demands efficiency multipliers: sparsity, analog compute, co-design, and beyond-von Neumann architectures. El Maghraoui’s work operationalizes these at IBM scale, from cloud-native DL platforms to PIM for LLMs.1,2,6

References

1. https://speakers.acm.org/speakers/el_maghraoui_19271

2. https://research.ibm.com/people/kaoutar-el-maghraoui

3. https://github.com/kaoutar55

4. https://orcid.org/0000-0002-1967-8749

5. https://www.sharjah.ac.ae/-/media/project/uos/sites/uos/research/conferences/wirf2025/webinars/dr-kaoutar-el-maghraoui-_webinar.pdf

6. https://s3.us.cloud-object-storage.appdomain.cloud/res-files/1843-Kaoutar_ElMaghraoui_CV_Dec2022.pdf

7. https://www.womentech.net/speaker/all/all/69100

8. https://scholar.google.com/citations?user=yDp6rbcAAAAJ&hl=en


read more
Term: Real option

Term: Real option

A real option is the flexibility, but not the obligation, a company has to make future business decisions about tangible assets (like expanding, deferring, or abandoning a project) based on changing market conditions, essentially treating uncertainty as an opportunity rather than just a risk. – Real option –

Real Option

**A real option is the right, but not the obligation, to take a future business action on a real (non-financial) asset—such as deferring, expanding, contracting, or abandoning a project—so that uncertainty becomes a source of value rather than purely a risk.**1,2,3

Core Characteristics and Value Proposition

Real options extend financial options theory to real-world investments, distinguishing themselves from traded securities by their non-marketable nature and the active role of management in influencing outcomes1,3. Key features include:

  • Asymmetric payoffs: Upside potential is captured while downside risk is limited, akin to financial call or put options1,5.
  • Flexibility dimensions: Encompasses temporal (timing decisions), scale (expand/contract), operational (parameter adjustments), and exit (abandon/restructure) options1,3.
  • Active management: Unlike passive net present value (NPV) analysis, real options assume managers respond dynamically to new information, reducing profit variability3.

Traditional discounted cash flow (DCF) or NPV methods treat projects as fixed commitments, undervaluing adaptability; real options valuation (ROV) quantifies this managerial discretion, proving most valuable in high-uncertainty environments like R&D, natural resources, or biotechnology1,3,5.

Common Types of Real Options

| Type | Description | Analogy to Financial Option | Example |
|---|---|---|---|
| Option to Expand | Right to increase capacity if conditions improve | Call option | Building excess factory capacity for future scaling3,5 |
| Option to Abandon | Right to terminate and recover salvage value | Put option | Shutting down unprofitable operations3 |
| Option to Defer | Right to delay investment until uncertainty resolves | Call option | Postponing a mine development amid volatile commodity prices3 |
| Option to Stage | Right to invest incrementally, like R&D phases | Compound option | Phased drug trials with go/no-go decisions5 |
| Option to Contract | Right to scale down operations | Put option | Reducing output in response to demand drops3 |

Valuation Approaches

ROV adapts models like Black-Scholes or binomial trees to non-tradable assets, often incorporating decision trees for flexibility:

  • NPV as baseline: Exercise if positive (e.g., forecast expansion cash flows discounted at opportunity cost)2.
  • Binomial method: Models discrete uncertainty resolution over time5.
  • Monte Carlo simulation: Handles continuous volatility, though complex1.

Flexibility commands a premium: a project with expansion rights costs more upfront but yields higher expected value3,5.
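
As an illustrative sketch of the binomial method described above (all figures hypothetical), the snippet below values the option to defer an irreversible investment on a two-step lattice, exercising only where the project's NPV is positive and discounting back with risk-neutral probabilities.

```python
def defer_option_value(v0, investment, up, down, risk_free, steps):
    """Value the flexibility to defer an irreversible investment.
    Project value follows a binomial lattice; at each terminal node the firm
    invests only if value exceeds cost (payoff = max(V - I, 0)), and the
    option is discounted back using risk-neutral probabilities."""
    p = (1 + risk_free - down) / (up - down)   # risk-neutral up probability
    # terminal project values and exercise payoffs (j = number of up-moves)
    values = [max(v0 * up**j * down**(steps - j) - investment, 0.0)
              for j in range(steps + 1)]
    # backward induction through the lattice
    for _ in range(steps):
        values = [(p * values[j + 1] + (1 - p) * values[j]) / (1 + risk_free)
                  for j in range(len(values) - 1)]
    return values[0]

# Hypothetical project: worth 100 today, costs 105 to build, value moves +/-20% a year
static_npv = 100 - 105                       # negative: reject under plain NPV
option_value = defer_option_value(100, 105, up=1.2, down=0.8, risk_free=0.05, steps=2)
print(static_npv, round(option_value, 2))    # -5 vs ~13.82: deferral flexibility has value
```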

Best Related Strategy Theorist: Avinash Dixit

Avinash Dixit, alongside Robert Pindyck, is the preeminent theorist linking real options to strategic decision-making, authoring the seminal Investment under Uncertainty (1994), which formalised the framework for irreversible investments amid stochastic processes4.

Biography

Born in 1944 in Bombay (now Mumbai), India, Dixit took his first degree at Bombay University before earning a BA in Mathematics from Cambridge University (1965) and a PhD in Economics from the Massachusetts Institute of Technology (MIT) in 1968. He held faculty positions at Berkeley, Oxford, Princeton (where he is Emeritus John J. F. Sherrerd ’52 University Professor of Economics), and the World Bank. A Fellow of the British Academy, American Academy of Arts and Sciences, and Royal Society, Dixit received the inaugural Frisch Medal (1987) and was President of the American Economic Association (2008). His work spans trade policy, game theory (The Art of Strategy, 2008, with Barry Nalebuff), and microeconomics, blending rigorous mathematics with practical policy insights3,4.

Relationship to Real Options

Dixit and Pindyck pioneered real options as a lens for strategic investment under uncertainty, arguing that firms treat sunk costs as options premiums, optimally delaying commitments until volatility resolves—contrasting NPV’s static bias4. Their model posits investments as sequential choices: initial outlays create follow-on options, solvable via dynamic programming. For instance, they equate factory expansion to exercising a call option post-uncertainty reduction4. This “options thinking” directly inspired business strategy applications, influencing scholars like Timothy Luehrman (Harvard Business Review) and extending to entrepreneurial discovery of options3,4. Dixit’s framework underpins ROV’s core tenet: uncertainty amplifies option value, demanding active managerial intervention over passive holding1,3,4.

References

1. https://www.knowcraftanalytics.com/mastering-real-options/

2. https://corporatefinanceinstitute.com/resources/derivatives/real-options/

3. https://en.wikipedia.org/wiki/Real_options_valuation

4. https://faculty.wharton.upenn.edu/wp-content/uploads/2012/05/AMR-Real-Options.pdf

5. https://www.wipo.int/web-publications/intellectual-property-valuation-in-biotechnology-and-pharmaceuticals/en/4-the-real-options-method.html

6. https://www.wallstreetoasis.com/resources/skills/valuation/real-options

7. https://analystprep.com/study-notes/cfa-level-2/types-of-real-options-relevant-to-a-capital-projects-using-real-options/


read more
Quote: Andrew Yeung

Quote: Andrew Yeung

“The first explicitly anti-AI social network will emerge. No AI-generated posts, no bots, no synthetic engagement, and proof-of-person required. People are already revolting against AI ‘slop’” – Andrew Yeung – Tech investor

Andrew Yeung: Tech Investor and Community Builder

Andrew Yeung is a prominent tech investor, entrepreneur, and events host known as the “Gatsby of Silicon Alley” by Business Insider for curating exclusive tech gatherings that draw founders, CEOs, investors, and operators.1,2,4 After 20 years in China, he moved to the U.S., leading products at Facebook and Google before pivoting to startups, investments, and community-building.2 As a partner at Next Wave NYC—a pre-seed venture fund backed by Flybridge—he has invested in over 20 early-stage companies, including Hill.com (real estate tech), Superpower (health tech), Othership (wellness), Carry (logistics), and AI-focused ventures like Natura (naturaumana.ai), Ruli (ruli.ai), Otis AI (meetotis.com), and Key (key.ai).2

Yeung hosts high-profile events through Fibe, his events company and 50,000+ member tech community, including Andrew’s Mixers (1,000+ person rooftop parties), The Junto Series (C-suite dinners), and Lumos House (multi-day mansion experiences across 8 cities like NYC, LA, Toronto, and San Francisco).1,2,4 Over 50,000 attendees, including billion-dollar founders, media figures, and Olympic athletes, have participated, with sponsors like Fidelity, J.P. Morgan, Perplexity, Silicon Valley Bank, Techstars, and Notion.2,4 His platform reaches 120,000+ tech leaders monthly and 1M+ people, aiding hundreds of founders in fundraising, hiring, and scaling.1,2 Yeung writes for Business Insider, his blog (andrew.today with 30,000+ readers), and has spoken at Princeton, Columbia Business School, SXSW, AdWeek, and Jason Calacanis’ This Week in Startups podcast on tech careers, networking, and entrepreneurship.1,2,4

Context of the Quote

The quote—”The first explicitly anti-AI social network will emerge. No AI-generated posts, no bots, no synthetic engagement, and proof-of-person required. People are already revolting against AI ‘slop’”—originates from Yeung’s newsletter post “11 Predictions for 2026 & Beyond,” published on andrew.today.3 It is prediction #9, forecasting a 2026 platform that bans AI content, bots, and fake interactions, enforcing human verification to restore authentic connections.3 Yeung cites rising backlash against AI “slop”—low-quality synthetic media—with studies showing 20%+ of YouTube recommendations for new users as such content.3 He warns of the “dead internet theory” (the idea that much online activity is bot-driven) becoming reality without human-only spaces, driven by demand for genuine interaction amid AI dominance.3

This prediction aligns with Yeung’s focus on human-centric tech: his investments blend AI tools (e.g., Otis AI, Ruli) with platforms enhancing real-world connections (e.g., events, networking advice emphasizing specific intros, follow-ups, and clarity in asks).1,2 In podcasts, he stresses high-value networking via precise value exchanges, like linking founders to niche investors, mirroring his vision for “proof-of-person” authenticity over synthetic engagement.1,4

Backstory on Leading Theorists and Concepts

The quote draws from established ideas on AI’s societal impact, particularly the Dead Internet Theory. Originating in online forums around 2021, it posits that post-2016 internet content is increasingly AI-generated, bot-amplified, and human-free, eroding authenticity—evidenced by studies like a 2024 analysis finding 20%+ of YouTube videos as low-effort AI slop, as Yeung notes.3 Key proponents include:

  • “IlluminatiPirate” (pseudonymous forum poster): Popularized the theory in a 2021 post on the Agora Road’s Macintosh Café forum, building on earlier imageboard threads, arguing that algorithms prioritize engagement-farming bots over humans and citing examples like identical comment patterns and ghost-town social platforms.

  • Zach Vorhies (ex-Google whistleblower): Popularized it via Twitter (now X) and interviews, analyzing YouTube’s algorithm favoring synthetic content; his 2022 claims align with Yeung’s YouTube stats.

  • Media Amplifiers: The Atlantic (2021 article “Maybe You Missed It, but the Internet ‘Died’ Five Years Ago”) and New York Magazine substantiated it with data on bot proliferation (e.g., 40-50% of web traffic as bots per Imperva reports).

Related theorists on AI slop and authenticity revolts include:

  • Ethan Mollick (Wharton professor, author of Co-Intelligence): Critiques AI’s “hallucinated” mediocrity flooding culture and warns of “enshittification” (Cory Doctorow’s term for platform decay via AI spam), predicting user flight to verified-human spaces—a narrative that echoes Yeung’s forecast of a revolt against AI slop.

  • Cory Doctorow: Coined “enshittification” (2023), describing how platforms degrade via ad-driven AI content; advocates decentralized, human-verified alternatives.

  • Jaron Lanier (VR pioneer, author of You Are Not a Gadget): An early critic of social media’s dehumanization; in later works such as Ten Arguments for Deleting Your Social Media Accounts Right Now (2018), he argues for “humane tech” that rejects synthetic engagement.

These ideas fuel real-world responses: platforms like Bluesky and Mastodon emphasize human moderation, while proof-of-person tech (e.g., Worldcoin’s iris scans, though controversial) tests Yeung’s vision. His prediction positions him as a connector spotting unmet needs in a bot-saturated web.3

References

1. https://www.youtube.com/watch?v=uO0dI_tCvUU

2. https://www.andrewyeung.co

3. https://www.andrew.today/p/11-predictions-for-2026-and-beyond

4. https://www.youtube.com/watch?v=MdI0RhGhySI

5. https://www.andrew.today/p/my-ai-productivity-stack


read more
Term: Economic depression

Term: Economic depression

An economic depression is a severe and prolonged downturn in economic activity, markedly worse than a recession, featuring sharp contractions in production, employment, and gross domestic product (GDP), alongside soaring unemployment, plummeting incomes, widespread bankruptcies, and eroded consumer confidence, often persisting for years.1,2,3

Key Characteristics

  • Duration and Scale: Typically involves at least three consecutive years of significant economic contraction or a GDP decline exceeding 10% in a single year; unlike recessions, which span two or more quarters of negative GDP growth, depressions entail sustained, economy-wide weakness until activity nears normal levels.1,2,3
  • Economic Indicators: Real GDP falls sharply (e.g., over 10%), unemployment surges (reaching 25% in historical cases), prices and investment collapse, international trade diminishes, and poverty alongside homelessness rises; consumer spending and business investment halt due to diminished confidence.1,2,4
  • Social and Long-Term Impacts: Leads to mass layoffs, salary reductions, business failures, heavy debt burdens, rising poverty, and potential social unrest; recovery demands substantial government interventions like fiscal or monetary stimulus.1,2

Distinction from Recession

| Aspect | Recession | Depression |
|---|---|---|
| Severity | Milder; negative GDP growth for 2+ quarters | Extreme; GDP drop >10% or 3+ years of contraction1,2,3 |
| Duration | Months to a year or two | Several years (e.g., 1929–1939)1 |
| Frequency | Common (34 in the US since 1850) | Rare (one major episode in US history)1 |
| Impact | Reduced output, moderate unemployment | Catastrophic: bankruptcies, poverty, market crashes2,4 |

Causes

Economic depressions arise from intertwined factors, including:

  • Banking crises, over-leveraged investments, and credit contractions.3,4
  • Declines in consumer demand and confidence, prompting production cuts.1,4
  • External shocks like stock market crashes (e.g., 1929), wars, protectionist policies, or disasters.1,2
  • Structural imbalances, such as unsustainable business practices or policy failures.1,3

The paradigmatic example is the Great Depression (1929–1939), triggered by the US stock market crash, speculative excesses, and trade barriers, resulting in a 30%+ GDP plunge, 25% unemployment, and global repercussions.1,7

Best Related Strategy Theorist: John Maynard Keynes

John Maynard Keynes (1883–1946), the preeminent theorist linked to economic depression strategy, revolutionised macroeconomics through his analysis of depressions and advocacy for active government intervention—ideas forged directly amid the Great Depression, the defining economic depression of modern history.1

Biography

Born in Cambridge, England, to economist John Neville Keynes and social reformer Florence Ada Keynes (née Brown), Keynes excelled at Eton and King’s College, Cambridge, reading mathematics before turning to economics under Alfred Marshall. Initially a civil servant at the India Office in London (1906–1908), he joined the Cambridge faculty in 1909 as a protégé of Marshall. Keynes’s early works, like Indian Currency and Finance (1913), showcased his expertise in monetary policy. During World War I he advised the Treasury and represented it in the reparations negotiations at Versailles (1919), but resigned in protest, authoring the prophetic The Economic Consequences of the Peace (1919), which warned of economic collapse in Germany and wider global instability—presciently linking punitive policies to economic downturns.

Relationship to Economic Depression

Keynes’s seminal The General Theory of Employment, Interest and Money (1936) emerged as the intellectual antidote to the Great Depression’s paralysis, challenging classical economics’ self-correcting market assumption. Observing 1929’s cascade—falling demand, idle factories, and mass unemployment—he argued depressions stem from insufficient aggregate demand, not wage rigidity alone. His strategy: governments must deploy fiscal policy—deficit spending on public works, infrastructure, and welfare—to boost demand, employment, and GDP until private confidence revives. Expressed mathematically, equilibrium output occurs where aggregate demand equals supply:

Y = C + I + G + (X - M)

Here, Y (GDP) rises via increased G (government spending) or I (investment) when private C (consumption) falters. Keynes influenced Roosevelt’s New Deal, wartime mobilisation, and postwar institutions like the IMF and World Bank, establishing Keynesianism as the orthodoxy for combating depressions until the 1970s stagflation challenged it. His framework remains central to modern counter-cyclical strategies, underscoring depressions’ preventability through policy.1,2
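
A standard textbook extension of the identity above (not Keynes’s own notation) makes the fiscal-stimulus logic explicit. Assuming consumption takes the form \( C = a + c\,(Y - T) \), with autonomous spending \( a \), a lump-sum tax \( T \), and a marginal propensity to consume \( 0 < c < 1 \):
\[
Y = a + c\,(Y - T) + I + G + (X - M)
\;\;\Rightarrow\;\;
Y = \frac{a - cT + I + G + (X - M)}{1 - c},
\qquad
\frac{\Delta Y}{\Delta G} = \frac{1}{1 - c} > 1
\]
With \( c = 0.8 \), for example, each additional unit of government spending raises equilibrium output by \( 1/(1 - 0.8) = 5 \) units in this stylised model—the multiplier logic behind Keynes’s case for deficit-financed public works.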

References

1. https://study.com/academy/lesson/economic-depression-overview-examples.html

2. https://www.britannica.com/money/depression-economics

3. https://en.wikipedia.org/wiki/Economic_depression

4. https://corporatefinanceinstitute.com/resources/economics/economic-depression/

5. https://www.imf.org/external/pubs/ft/fandd/basics/recess.htm

6. https://www.frbsf.org/research-and-insights/publications/doctor-econ/2007/02/recession-depression-difference/

7. https://www.fdrlibrary.org/great-depression-facts

An economic depression is a severe, long-term downturn in economic activity, far worse than a typical recession, characterised by deep contractions in production, high unemployment, falling incomes, and collapsed consumer confidence, often lasting several years or more. - Term: Economic depression

read more
Quote: Kazuo Ishiguro

Quote: Kazuo Ishiguro

“Perhaps, then, there is something to his advice that I should cease looking back so much, that I should adopt a more positive outlook and try to make the best of what remains of my day.” – Kazuo Ishiguro – The Remains of the Day

Context of the Quote in The Remains of the Day

The quote—“Perhaps, then, there is something to his advice that I should cease looking back so much, that I should adopt a more positive outlook and try to make the best of what remains of my day”—appears toward the novel’s conclusion, spoken by the protagonist, Stevens, a stoic English butler reflecting on his life during a road trip across 1950s England.2,3 It captures Stevens grappling with regret over suppressed emotions, unrequited love for housekeeper Miss Kenton, and blind loyalty to his former employer, Lord Darlington, whose pro-appeasement stance toward Nazi Germany tainted his legacy. The “advice” comes from a genial stranger at a pier, who urges Stevens to enjoy life’s “evening” after a day’s work, echoing the novel’s titular metaphor of time slipping away like a fading day.2,3,4 This moment marks Stevens’s tentative shift from rigid self-denial toward acceptance, though his ingrained dignity—defined as unflinching duty—prevents full emotional release.1,2

Backstory on Kazuo Ishiguro and the Novel

Kazuo Ishiguro, born in 1954 in Nagasaki, Japan, moved to England at age five, an experience that shaped his themes of memory, displacement, and unspoken regret. Awarded the Nobel Prize in Literature in 2017, he crafts subtle narratives blending historical realism with psychological depth, as in The Remains of the Day (1989), his third novel and a Booker Prize winner.2 Inspired by unreliable narrators like those in Ford Madox Ford’s works, Ishiguro drew on real English butlers’ memoirs and interwar politics, critiquing class-bound repression without overt judgment. The story follows Stevens’s six-day drive to reunite with Miss Kenton, framed as his self-justifying memoir, exposing how duty stifles personal fulfillment amid the rise of fascism in the 1930s.1,2,4 Adapted into a 1993 Oscar-nominated film starring Anthony Hopkins and Emma Thompson, it remains Ishiguro’s most acclaimed work, probing the question “what dignity is there in that?”—a line underscoring Stevens’s crisis.2

Leading Theorists on Regret, Positive Outlook, and the “Remains of the Day”

The quote’s pivot from backward-glancing remorse to forward optimism ties into psychological and philosophical theories on regret minimization and temporal orientation. Key figures include:

  • Daniel Kahneman and Amos Tversky (Prospect Theory pioneers, Nobel in Economics 2002): Their work shows regret stems from inaction (e.g., Stevens’s unlived life with Miss Kenton), amplified by hindsight bias—recognizing “turning points” only retrospectively, as Stevens laments: What can we ever gain in forever looking back?2 They advocate shifting focus to future gains for emotional resilience.

  • Daniel Gilbert (Stumbling on Happiness, 2006): Gilbert’s research reveals humans overestimate past regrets while underestimating future adaptation; he posits adopting a “positive outlook” via affective forecasting—imagining better “remains” ahead—mirrors the stranger’s counsel to “put your feet up and enjoy it.”2,3 Stevens embodies Gilbert’s “impact bias,” where unaddressed regrets loom larger in memory.

  • Martin Seligman (Positive Psychology founder): Seligman’s learned optimism counters Stevens’s pessimism, urging reframing via gratitude: You must realize one has as good as most… and be grateful.1 His PERMA model (Positive Emotion, Engagement, Relationships, Meaning, Accomplishment) critiques duty-bound lives, aligning with Stevens’s late epiphany to “make the best of what remains.”

  • Viktor Frankl (Man’s Search for Meaning, 1946): A Holocaust survivor, Frankl’s logotherapy emphasizes finding meaning in suffering; Stevens’s arc echoes Frankl’s call to transcend regret through present purpose, rejecting endless rumination: There is little choice other than to leave our fate… in the hands of those great gentlemen.2

  • Epictetus and Stoic Philosophers: Ancient roots in Stevens’s dignity ideal; Epictetus advised focusing on controllables (one’s outlook) over uncontrollables (past choices), prefiguring the quote’s resolve amid life’s “evening.”1,2

These theorists illuminate the novel’s insight: regret poisons the “remains,” but a deliberate positive turn fosters redemption, blending empirical psychology with timeless wisdom.1,2,3

References

1. https://www.bookey.app/book/the-remains-of-the-day/quote

2. https://www.goodreads.com/work/quotes/3333111-the-remains-of-the-day

3. https://www.goodreads.com/work/quotes/3333111-the-remains-of-the-day?page=6

4. https://www.siquanong.com/book-summaries/the-remains-of-the-day/

5. https://bookroo.com/quotes/the-remains-of-the-day

6. https://www.sparknotes.com/lit/remains/quotes/page/2/

7. https://www.coursehero.com/lit/The-Remains-of-the-Day/quotes/

8. https://www.litcharts.com/lit/the-remains-of-the-day/quotes

9. https://www.cliffsnotes.com/literature/the-remains-of-the-day/quotes

10. https://www.sparknotes.com/lit/remains/quotes/


read more
Quote: Blackrock

Quote: Blackrock

“The AI builders are leveraging up: investment is front-loaded while revenues are back-loaded. Along with highly indebted governments, this creates a more levered financial system vulnerable to shocks like bond yield spikes.” – Blackrock – 2026 Outlook

The AI Financing Paradox: How Front-Loaded Investment and Back-Loaded Returns are Reshaping Global Financial Risk

The Quote in Context

BlackRock’s 2026 Investment Outlook identifies a critical structural vulnerability in global markets: the massive capital requirements of AI infrastructure are arriving years before the revenue benefits materialize1. This temporal mismatch creates what the firm describes as a financing “hump”—a period of intense leverage accumulation across both the private sector and government balance sheets, leaving financial systems exposed to potential shocks from rising bond yields or credit market disruptions1,2.

The quote reflects BlackRock’s core thesis that AI’s economic impact will be transformational, but the path to that transformation is fraught with near-term financial risks. As the world’s largest asset manager, overseeing nearly $14 trillion in assets, BlackRock’s assessment carries significant weight in shaping investment strategy and market expectations3.

The Investment Spend-Revenue Gap

The scale of the AI buildout is staggering. BlackRock projects $5-8 trillion in AI-related capital expenditure globally through 20305, a range echoed elsewhere in its 2026 outlook commentary3. This represents the fastest technological buildout in recent centuries, yet the economics are unconventional: companies are committing enormous capital today with the expectation that productivity gains and revenue growth will materialize later2.

BlackRock notes that while the overall revenues AI eventually generates could theoretically justify the spending at a macroeconomic level, it remains unclear how much of that value will accrue to the tech companies actually building the infrastructure1,2. This uncertainty creates a critical vulnerability—if AI deployment proves less profitable than anticipated, or if adoption rates slow, highly leveraged companies may struggle to service their debt obligations.

The Leverage Imperative

The financing structure is not optional; it is inevitable. AI spending necessarily precedes benefits and revenues, creating an unavoidable need for long-term financing and greater leverage2. Tech companies and infrastructure providers cannot wait years to recoup their investments—they must borrow in capital markets today to fund construction, equipment, and operations.

This creates a second layer of risk. As companies issue bonds to finance AI capex, they increase corporate debt levels. Simultaneously, governments worldwide remain highly indebted from pandemic stimulus and ongoing fiscal pressures. The combination produces what BlackRock identifies as a “more levered financial system”—one where both public and private sector balance sheets are stretched1.

The Vulnerability to Shocks

BlackRock’s warning about vulnerability to “shocks like bond yield spikes” is particularly prescient. In a highly leveraged environment, rising interest rates have cascading effects:

  • Refinancing costs increase: Companies and governments face higher borrowing costs when existing bonds mature and must be renewed.
  • Debt service burden rises: Higher yields directly increase the cost of servicing existing debt, reducing profitability and fiscal flexibility.
  • Credit spreads widen: Investors demand higher risk premiums, making debt more expensive across the board.
  • Forced deleveraging: Companies unable to service debt at higher rates may need to cut spending, sell assets, or restructure obligations.

The AI buildout amplifies this risk because so much spending is front-loaded. If yield spikes occur before significant productivity gains materialize, companies may lack the cash flow to manage higher borrowing costs, creating potential defaults or forced asset sales that could trigger broader financial instability.
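
A simple hypothetical illustrates the refinancing and debt-service channels described above; the figures below are illustrative assumptions, not BlackRock estimates.

```python
def added_interest_cost(debt_outstanding, old_yield, new_yield, share_refinanced):
    """Extra annual interest expense when a share of existing debt rolls over
    at a higher yield following a bond-yield spike."""
    return debt_outstanding * share_refinanced * (new_yield - old_yield)

# Hypothetical issuer: $100bn of debt, 25% maturing this year,
# refinanced at 6% instead of 4% -> $0.5bn of extra annual interest
print(added_interest_cost(100e9, 0.04, 0.06, 0.25) / 1e9)  # 0.5
```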

BlackRock’s Strategic Response

Rather than abandoning risk, BlackRock has taken a nuanced approach: the firm remains pro-risk and overweight U.S. stocks on the AI theme1, betting that the long-term benefits will justify near-term leverage accumulation. However, the firm has also shifted toward tactical underweighting of long-term Treasuries and identified opportunities in both public and private credit markets to manage risk while maintaining exposure1.

This reflects a sophisticated view: the financial system’s increased leverage is a real concern, but the AI opportunity is too significant to avoid. Instead, active management and diversification across asset classes become essential.

Broader Economic Context

The leverage dynamic intersects with broader macroeconomic shifts. BlackRock emphasizes that inflation is no longer the central issue driving markets; instead, labor dynamics and the distributional effects of AI now matter more4. The firm projects that AI could generate roughly $1.2 trillion in annual labor cost savings, translating into about $878 billion in incremental after-tax corporate profits each year, with a present value on the order of $82 trillion for corporations and another $27 trillion for AI providers4.

These enormous potential gains justify the current spending—on a macro level. Yet for individual investors and companies, dispersion and default risk are rising4. The benefits of AI will be highly concentrated among successful implementers, while laggards face obsolescence. This uneven distribution of gains and losses adds another layer of risk to a more levered financial system.

Historical and Theoretical Parallels

The AI financing paradox echoes historical technology cycles. During the dot-com boom of the late 1990s, massive capital investment in internet infrastructure preceded revenue generation by years, creating similar leverage vulnerabilities. The subsequent crash revealed how vulnerable highly leveraged systems are to disappointment about future growth rates.

However, this cycle differs in scale and maturity. Unlike the dot-com era, AI is already demonstrating productivity benefits across multiple sectors. The question is not whether AI creates value, but whether the timeline and magnitude of value creation justify the financial risks being taken today.


BlackRock’s insight captures a fundamental tension in modern finance: transformative technological change requires enormous upfront capital, yet highly leveraged financial systems are fragile. The path forward depends on whether productivity gains materialize quickly enough to validate the investment and reduce leverage before external shocks test the system’s resilience.

References

1. https://www.blackrock.com/americas-offshore/en/insights/blackrock-investment-institute/outlook

2. https://www.youtube.com/watch?v=eFBwyu30oTU

3. https://www.youtube.com/watch?v=Ww7Zy3MAWAs

4. https://www.blackrock.com/us/financial-professionals/insights/investing-in-2026

5. https://www.blackrock.com/us/financial-professionals/insights/ai-stocks-alternatives-and-the-new-market-playbook-for-2026

6. https://www.blackrock.com/corporate/insights/blackrock-investment-institute/publications/outlook

7. https://www.blackrock.com/institutions/en-us/insights/2026-macro-outlook

read more
Term: Economic recession

Term: Economic recession

An economic recession is a significant, widespread downturn in economic activity, characterized by declining real GDP (often two consecutive quarters), rising unemployment, falling retail sales, and reduced business/consumer spending, signaling a contraction in the business cycle. – Economic recession

Economic Recession


Definition and Measurement

Different jurisdictions employ distinct formal definitions. In the United Kingdom and European Union, a recession is defined as negative economic growth for two consecutive quarters, representing a six-month period of falling national output and income.1,2 The United States employs a more comprehensive approach through the National Bureau of Economic Research (NBER), which examines a broad range of economic indicators—including real GDP, real income, employment, industrial production, and wholesale-retail sales—to determine whether a significant decline in economic activity has occurred, considering its duration, depth, and diffusion across the economy.1,2

The Organisation for Economic Co-operation and Development (OECD) defines a recession as a period of at least two years during which the cumulative output gap reaches at least 2% of GDP, with the output gap remaining at least 1% for a minimum of one year.2
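As an illustration of the UK/EU two-consecutive-quarters rule, the minimal sketch below flags a technical recession in a series of quarter-on-quarter real GDP growth rates; the sample figures are invented for the example.

```python
# Flag technical recessions under the "two consecutive quarters of negative growth" rule.
# Quarter-on-quarter real GDP growth rates (%); sample data invented for illustration.
growth = [0.4, 0.2, -0.1, -0.3, 0.1, 0.5, -0.2, 0.3]

def technical_recession_quarters(qoq_growth):
    """Return indices of quarters that complete two consecutive quarters of contraction."""
    return [i for i in range(1, len(qoq_growth))
            if qoq_growth[i] < 0 and qoq_growth[i - 1] < 0]

flags = technical_recession_quarters(growth)
print(f"Technical recession confirmed in quarter(s): {flags}")
# -> [3]: the fourth quarter completes two consecutive quarters of contraction.
```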

Key Characteristics

Recessions typically exhibit several defining features:

  • Duration: Most recessions last approximately one year, though this varies significantly.4
  • Output contraction: A typical recession involves a GDP decline of around 2%, whilst severe recessions may see output losses approaching 5%.4
  • Employment impact: The unemployment rate almost invariably rises during recessions, with layoffs becoming increasingly common and wage growth slowing or stagnating.2
  • Consumer behaviour: Consumption declines occur, often accompanied by shifts toward lower-cost generic brands as discretionary income diminishes.2
  • Investment reduction: Industrial production and business investment register much larger declines than GDP itself.4
  • Financial disruption: Recessions typically involve turmoil in financial markets, erosion of house and equity values, and potential credit tightening that restricts borrowing for both consumers and businesses.4
  • International trade: Exports and imports fall sharply during recessions.4
  • Inflation moderation: Overall demand for goods and services contracts, causing inflation to fall slightly or, in deflationary recessions, to become negative with prices declining.1,4

Causes and Triggers

Recessions generally stem from market imbalances, triggered by external shocks or structural economic weaknesses.8 Common precipitating factors include:

  • Excessive household debt accumulation followed by difficulties in meeting obligations, prompting consumers to reduce spending.2
  • Rapid credit expansion followed by credit tightening (credit crunches), which restricts the availability of borrowing for consumers and businesses.2
  • Rising material and labour costs prompting businesses to increase prices; when central banks respond by raising interest rates, higher borrowing costs discourage business investment and consumer spending.5
  • Declining consumer confidence manifesting in falling retail sales and reduced business investment.2

Distinction from Depression

A depression represents a severe or prolonged recession. Whilst no universally agreed definition exists, a depression typically involves a GDP fall of 10% or more, a GDP decline persisting for over three years, or unemployment exceeding 20%.1 The informal economist’s observation captures this distinction: “It’s a recession when your neighbour loses his job; it’s a depression when you lose yours.”1

Policy Response

Governments typically respond to recessions through expansionary macroeconomic policies, including increasing money supply, decreasing interest rates, raising government spending, and reducing taxation, to stimulate economic activity and restore growth.2


Related Strategy Theorist: John Maynard Keynes

John Maynard Keynes (1883–1946) stands as the preeminent theorist whose work fundamentally shaped modern understanding of recessions and the policy responses to them.

Biography and Context

Born in Cambridge, England, Keynes was an exceptionally gifted economist, mathematician, and public intellectual. After studying mathematics at King’s College, Cambridge, he pivoted to economics and became a fellow of the college in 1909. His early career included a post at the India Office in London and the editorship of the Economic Journal, Britain’s leading economics publication.

Keynes’ formative professional experience came as the chief representative of the British Treasury at the Paris Peace Conference in 1919 following the First World War. Disturbed by the punitive reparations imposed upon Germany, he resigned and published The Economic Consequences of the Peace (1919), which warned prophetically of economic instability resulting from the treaty’s harsh terms. This work established his reputation as both economist and public commentator.

Relationship to Recession Theory

Keynes’ revolutionary contribution emerged with the publication of The General Theory of Employment, Interest and Money (1936), written during the Great Depression. His work fundamentally challenged the prevailing classical economic orthodoxy, which held that markets naturally self-correct and unemployment represents a temporary frictional phenomenon.

Keynes demonstrated that recessions and prolonged unemployment result from insufficient aggregate demand rather than labour market rigidities or individual irresponsibility. His framework rests on the national income identity C + I + G + (X - M) = Y, where aggregate demand (the sum of consumption, investment, government spending, and net exports) determines total output and employment. During recessions, demand contracts as consumers and businesses reduce spending amid uncertainty and falling incomes, creating a self-reinforcing downward spiral that markets alone cannot reverse.

This insight proved revolutionary because it legitimised active government intervention in recessions. Rather than viewing recessions as inevitable and self-correcting phenomena to be endured passively, Keynes argued that governments could and should employ fiscal policy (taxation and spending) and monetary authorities could adjust interest rates to stimulate aggregate demand, thereby shortening recessions and reducing unemployment.

His framework directly underpinned the post-war consensus on recession management: expansionary monetary and fiscal policies during downturns to restore demand and employment. The modern definition of recession as a statistical phenomenon (two consecutive quarters of negative GDP growth) emerged from Keynesian economics’ focus on output and demand as the central drivers of economic cycles.

Keynes’ influence extended beyond economic theory into practical policy. His ideas shaped the institutional architecture of the post-1945 international economic order, including the International Monetary Fund and World Bank, both conceived to prevent the catastrophic demand collapse that characterised the 1930s.

References

1. https://www.economicshelp.org/blog/459/economics/define-recession/

2. https://en.wikipedia.org/wiki/Recession

3. https://den.mercer.edu/what-is-a-recession-and-is-the-u-s-in-one-mercer-economists-explain/

4. https://www.imf.org/external/pubs/ft/fandd/basics/recess.htm

5. https://www.fidelity.com/learning-center/smart-money/what-is-a-recession

6. https://www.congress.gov/crs-product/IF12774

7. https://www.munich-business-school.de/en/l/business-studies-dictionary/financial-knowledge/recession

8. https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-a-recession

An economic recession is a significant, widespread downturn in economic activity, characterized by declining real GDP (often two consecutive quarters), rising unemployment, falling retail sales, and reduced business/consumer spending, signaling a contraction in the business cycle. - Term: Economic recession

read more
Quote: William Makepeace Thackeray – English novelist

Quote: William Makepeace Thackeray – English novelist

The world is a looking-glass, and gives back to every man the reflection of his own face. Frown at it, and it will in turn look sourly upon you; laugh at it and with it, and it is a jolly kind companion; and so let all young persons take their choice. – William Makepeace Thackeray – English novelist

The Quote

Context of the Quote

This passage appears in William Makepeace Thackeray’s seminal novel Vanity Fair: A Novel Without a Hero (serialized 1847–1848), during a narrative reflection on human behavior and perception.1,3 It occurs amid commentary on a young character’s misanthropic outlook, where the narrator observes that people who view the world harshly often receive harshness in return, attributing this to self-projection rather than external reality.3 The metaphor of the world as a “looking-glass” (an old term for mirror) underscores the novel’s core theme of vanity: how personal attitudes shape social interactions in a superficial, reciprocal society.1,3 Thackeray uses it to advise youth to choose optimism, contrasting it with the book’s satirical portrayal of ambition, deceit, and social climbing in early 19th-century England.3

Backstory on William Makepeace Thackeray

William Makepeace Thackeray (1811–1863) was a prominent English novelist, satirist, and illustrator, often ranked alongside Charles Dickens as a Victorian literary giant1. Born in Calcutta, India, to British parents—his father a colonial administrator—he returned to England at age six after his father’s early death1. Educated at Charterhouse School and Cambridge University, Thackeray initially pursued law and art but turned to journalism and writing amid financial ruin from failed investments and his wife’s mental illness following childbirth1.

His breakthrough came with Vanity Fair, a panoramic satire of British society during the Napoleonic Wars, drawing from John Bunyan’s The Pilgrim’s Progress (where “Vanity Fair” symbolizes worldly temptation).1,3 Published in monthly installments, it was the first major work to appear under Thackeray’s own name and sold widely for its witty narration, moral ambiguity, and critique of hypocrisy among the upper and aspiring middle classes.1 Thackeray followed with successes like Pendennis (1848–1850), Henry Esmond (1852), and The Newcomes (1853–1855), blending humor, pathos, and realism.1 A rival to Dickens, he lectured on English humorists and edited Cornhill Magazine, but personal struggles with debt, health (addiction to opium and alcohol), and family tragedy marked his life. He died at 52 from a ruptured aneurysm.1

Thackeray’s style—omniscient, ironic narration—mirrors the quote’s philosophy: life reflects one’s inner disposition, a recurring motif in his works exposing human folly without heavy moralizing.1,3

Leading Theorists Related to the Subject Matter

The quote’s idea—that reality mirrors one’s attitude—echoes longstanding philosophical and psychological concepts on perception, projection, and optimism. Below is a backstory on key theorists whose ideas parallel or influenced this theme of reciprocal self-fulfilling prophecy.

  • Baruch Spinoza (1632–1677): Dutch philosopher whose Ethics (1677) posits that emotions like hope or fear shape how we interpret the world, creating self-reinforcing cycles. He argued humans project passions onto external events, much like Thackeray’s “looking-glass,” advocating rational optimism to alter perception.

  • Immanuel Kant (1724–1804): German idealist in Critique of Pure Reason (1781) who theorized that the mind imposes structure on sensory experience—our “face” colors reality. This subjective lens prefigures Thackeray’s mirror metaphor, influencing 19th-century Romantic views on personal agency in shaping fate.

  • William James (1842–1910): American pragmatist philosopher and psychologist writing a generation after Thackeray. In The Principles of Psychology (1890) he described how expectations elicit confirming behaviors from others, anticipating the later concept of the self-fulfilling prophecy. His essays on optimism echo the quote’s call to “laugh at it,” linking mindset to social outcomes.

  • Norman Vincent Peale (1898–1993): 20th-century popularizer of positive thinking in The Power of Positive Thinking (1952), directly inverting frowns/smiles to transform life experiences—a modern extension of Thackeray’s advice, rooted in psychological projection.

  • Cognitive Behavioral Theorists (e.g., Aaron Beck, 1921–2021): Beck’s cognitive therapy (1960s onward) formalized cognitive distortions, where negative schemas (like frowning at the world) perpetuate sour outcomes, supported by empirical studies on attribution bias and reciprocity in social psychology.

These ideas trace from Enlightenment rationalism through Victorian literature to modern psychology, all converging on the insight that personal disposition acts as a filter and catalyst for worldly responses, as Thackeray insightfully captured13.

References

1. https://www.goodreads.com/author/quotes/3953.William_Makepeace_Thackeray

2. https://www.azquotes.com/author/14547-William_Makepeace_Thackeray

3. https://www.goodreads.com/work/quotes/1057468-vanity-fair-a-novel-without-a-hero

4. https://www.sparknotes.com/lit/vanity-fair/quotes/

5. https://www.coursehero.com/lit/Vanity-Fair/quotes/

6. http://www.freebooknotes.com/quotes/vanity-fair/

7. https://libquotes.com/william-makepeace-thackeray/works/vanity-fair

8. https://www.litcharts.com/lit/vanity-fair/quotes

The world is a looking-glass, and gives back to every man the reflection of his own face. Frown at it, and it will in turn look sourly upon you; laugh at it and with it, and it is a jolly kind companion; and so let all young persons take their choice. - Quote: William Makepeace Thackeray - English novelist

read more
Quote: Milton Friedman – Nobel laureate

Quote: Milton Friedman – Nobel laureate

“One of the great mistakes is to judge policies and programs by their intentions rather than their results.” – Milton Friedman – Nobel laureate


Context and Origin

Milton Friedman first expressed this idea during a 1975 television interview on The Open Mind, hosted by Richard Heffner. Discussing government programs aimed at helping the poor and needy, Friedman argued that such initiatives, despite their benevolent intentions, often produce opposite effects. He tied the remark to the proverb “the road to hell is paved with good intentions,” emphasizing that good-hearted advocates sometimes fail to apply the same rigor to their heads, leading to unintended harm.1 The quote has since appeared in books like After the Software Wars (2009) and I Am John Galt (2011), a 2024 New York Times letter critiquing the Department of Education, and various quote collections.1,3

This perspective underscores Friedman’s broader critique of public policy: evaluate effectiveness through empirical outcomes, not rhetoric. He often highlighted how welfare programs, school vouchers, and monetary policies could backfire if results are ignored in favor of motives.1,4

Backstory on Milton Friedman

Milton Friedman (1912–2006) was a pioneering American economist, statistician, and public intellectual whose work reshaped modern economic thought. Born in Brooklyn, New York, to Jewish immigrant parents from Hungary, he earned his bachelor’s degree from Rutgers University in 1932 amid the Great Depression, followed by master’s and doctoral degrees from the University of Chicago. There, he joined the “Chicago School” of economics, advocating free markets, limited government, and individual liberty1.

Friedman’s seminal contributions include A Monetary History of the United States (1963, co-authored with Anna Schwartz), which blamed the Federal Reserve’s policies for exacerbating the Great Depression and influenced central banking worldwide. His advocacy for floating exchange rates contributed to the end of the Bretton Woods system in 1971. In Capitalism and Freedom (1962), he proposed ideas like school vouchers, a negative income tax, and abolishing the draft—many of which remain debated today.

A fierce critic of Keynesian economics, Friedman championed monetarism: the idea that controlling money supply stabilizes economies better than fiscal intervention. His PBS series Free to Choose (1980) and bestselling book of the same name popularized these views for lay audiences. Awarded the Nobel Prize in Economic Sciences in 1976 “for his achievements in the fields of consumption analysis, monetary history and theory, and for his demonstration of the complexity of stabilization policy,” Friedman influenced leaders like Ronald Reagan and Margaret Thatcher1.

Later, he opposed the war on drugs, supported drug legalization, and critiqued Social Security. Friedman died in 2006, leaving a legacy as a defender of economic freedom against well-intentioned but flawed interventions.

Leading Theorists Related to the Subject Matter

Friedman’s quote critiques the “intention fallacy” in policy evaluation, aligning with traditions emphasizing empirical results over moral or ideological justifications. Key related theorists include:

  • Friedrich Hayek (1899–1992): Austrian-British economist and Nobel laureate (1974). In The Road to Serfdom (1944), Hayek warned that central planning, even with good intentions, leads to unintended tyranny due to knowledge limits in society. He influenced Friedman via the Mont Pelerin Society (founded 1947), stressing spontaneous order and market signals over planners’ designs1.

  • James M. Buchanan (1919–2013): Nobel laureate (1986) in public choice theory. With Gordon Tullock in The Calculus of Consent (1962), he modeled politicians and bureaucrats as self-interested actors, explaining why “public interest” policies produce perverse results like pork-barrel spending. This countered naive views of benevolent government1.

  • Gary Becker (1930–2014): Chicago School Nobel laureate (1992). Extended economic analysis to non-market behavior (e.g., crime, family) in Human Capital (1964), showing policies must be judged by incentives and outcomes, not intent. Becker quantified how regulations distort behaviors, echoing Friedman’s results focus1.

  • John Maynard Keynes (1883–1946): Counterpoint theorist. In The General Theory (1936), Keynes advocated government intervention for demand management, prioritizing intentions to combat unemployment. Friedman challenged this empirically, arguing it caused 1970s stagflation1.

These thinkers form the backbone of outcome-based policy critique, contrasting with interventionist schools like Keynesianism, where intentions often justify expansions despite mixed results.

Friedman’s Permanent Income Hypothesis

Often linked to Friedman’s consumption analysis, the Permanent Income Hypothesis (1957) posits that people base spending on “permanent” (long-term expected) income, not short-term fluctuations. In A Theory of the Consumption Function, Friedman argued transitory income changes (e.g., bonuses) are largely saved rather than spent, challenging the Keynesian absolute income hypothesis. Empirical tests using microdata supported it, influencing modern macroeconomics and fiscal policy debates on multipliers.1 This hypothesis exemplifies Friedman’s results-driven approach: policies assuming instant spending boosts (e.g., stimulus checks) overlook consumption smoothing.
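As a minimal numerical sketch of the hypothesis (all parameters are illustrative assumptions), the example below contrasts the spending response to a one-off bonus, which raises permanent income only by its annuity value, with the response to a permanent pay rise.

```python
# Illustrative contrast between transitory and permanent income changes under the
# Permanent Income Hypothesis. All parameters are assumptions for the sketch.
r = 0.04              # real interest rate used to annuitise a windfall (assumed)
mpc_permanent = 0.95  # propensity to consume out of permanent income (assumed)

permanent_income = 60_000

def consumption(perm_income: float) -> float:
    """Consumption as a stable fraction of permanent income."""
    return mpc_permanent * perm_income

# Case 1: one-off $10,000 bonus -> only its annuity value raises permanent income.
bonus = 10_000
perm_income_with_bonus = permanent_income + r * bonus

# Case 2: permanent $10,000 raise -> permanent income rises by the full amount.
perm_income_with_raise = permanent_income + 10_000

base = consumption(permanent_income)
print(f"Extra spending from one-off bonus:   {consumption(perm_income_with_bonus) - base:,.0f}")   # ~380
print(f"Extra spending from permanent raise: {consumption(perm_income_with_raise) - base:,.0f}")   # ~9,500
```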

References

1. https://quoteinvestigator.com/2024/03/22/intentions-results/

2. https://www.azquotes.com/quote/351907

3. https://www.goodreads.com/quotes/29902-one-of-the-great-mistakes-is-to-judge-policies-and

4. https://www.americanexperiment.org/milton-friedman-judge-public-policies-by-their-results-not-their-intentions/

One of the great mistakes is to judge policies and programs by their intentions rather than their results. - Quote: Milton Friedman - Nobel laureate

read more
Term: Alpha

Term: Alpha

Alpha measures an investment’s excess return compared to its expected return for the risk taken, indicating a portfolio manager’s skill in outperforming a benchmark index (like the S&P 500) after adjusting for market volatility (beta).1,2,3,5

Comprehensive Definition

Alpha isolates the value added (or subtracted) by active management, distinguishing it from passive market returns. It quantifies performance on a risk-adjusted basis, accounting for systematic risk via beta, which reflects an asset’s volatility relative to the market. A positive alpha signals outperformance—meaning the manager has skilfully selected securities or timed markets to exceed expectations—while a negative alpha indicates underperformance, often failing to justify management fees.1,3,4,5 An alpha of zero implies returns precisely match the risk-adjusted benchmark.3,5

In practice, alpha applies across asset classes:

  • Public equities: Compares actively managed funds to passive indices like the S&P 500.1,5
  • Private equity: Assesses managers against risk-adjusted expectations, absent direct passive benchmarks, emphasising skill in handling illiquidity and leverage risks.1

Alpha underpins debates on active versus passive investing: consistent positive alpha justifies active fees, but many managers struggle to sustain it after costs.1,4

Calculation Methods

The simplest form subtracts benchmark return from portfolio return:

  • Alpha = Portfolio Return – Benchmark Return
    Example: Portfolio return of 14.8% minus benchmark of 11.2% yields alpha = 3.6%.1

For precision, Jensen’s Alpha uses the Capital Asset Pricing Model (CAPM) to compute expected return:
\alpha = R_p - [R_f + \beta (R_m - R_f)]
Where:

  • ( R_p ): Portfolio return
  • ( R_f ): Risk-free rate (e.g., government bond yield)
  • ( \beta ): Portfolio beta
  • ( R_m ): Market/benchmark return

Example: ( R_p = 30\% ), ( R_f = 8\% ), ( \beta = 1.1 ), ( R_m = 20\% ) gives:
\alpha = 0.30 - [0.08 + 1.1(0.20 - 0.08)] = 0.30 - 0.212 = 0.088 \ (8.8\%).3,4

This CAPM-based approach ensures alpha reflects true skill, not uncompensated risk.1,2,5
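The worked example above translates directly into a short calculation. The sketch below implements both the simple excess-return version and Jensen’s CAPM-based alpha; the function names are ours, chosen for illustration.

```python
# Two common ways of computing alpha; function names are illustrative.

def simple_alpha(portfolio_return: float, benchmark_return: float) -> float:
    """Alpha as raw excess return over the benchmark."""
    return portfolio_return - benchmark_return

def jensens_alpha(portfolio_return: float, risk_free: float,
                  beta: float, market_return: float) -> float:
    """Jensen's alpha: excess return over the CAPM-expected return."""
    expected = risk_free + beta * (market_return - risk_free)
    return portfolio_return - expected

print(f"Simple alpha:   {simple_alpha(0.148, 0.112):.1%}")             # 3.6%
print(f"Jensen's alpha: {jensens_alpha(0.30, 0.08, 1.1, 0.20):.1%}")   # 8.8%
```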

Key Theorist: Michael Jensen

The foremost theorist linked to alpha is Michael Jensen (1939–2024), who formalised Jensen’s Alpha in his seminal 1968 paper, “The Performance of Mutual Funds in the Period 1945–1964,” published in the Journal of Finance. This work introduced alpha as a rigorous metric within CAPM, enabling empirical tests of manager skill.1,4

Biography and Backstory: Born in 1939 in Rochester, Minnesota, Jensen earned a PhD in economics, finance and accounting from the University of Chicago, studying under future Nobel laureate Merton Miller and immersing himself in modern portfolio theory and the emerging efficient-markets tradition. His 1968 study analysed 115 mutual funds, finding most generated negative alpha after fees, challenging claims of widespread managerial prowess and bolstering evidence for the efficient market hypothesis.1 He spent two decades at the University of Rochester’s business school before joining Harvard Business School in 1985, and later worked with the Monitor Group. Jensen pioneered agency theory, co-authoring “Theory of the Firm” (1976) with William Meckling on managerial incentives and agency costs, and influenced private equity thinking on leveraged buyouts. His alpha measure remains foundational, used daily by investors to evaluate funds against CAPM benchmarks, underscoring that true alpha stems from security selection or timing, not market beta.1,4,5 Jensen’s legacy endures in performance attribution, with his metric applied to trillions of dollars’ worth of fund evaluations.

References

1. https://www.moonfare.com/glossary/investment-alpha

2. https://robinhood.com/us/en/learn/articles/2lwYjCxcvUP4lcqQ3yXrgz/what-is-alpha/

3. https://corporatefinanceinstitute.com/resources/career-map/sell-side/capital-markets/alpha/

4. https://www.wallstreetprep.com/knowledge/alpha/

5. https://www.findex.se/finance-terms/alpha

6. https://www.ig.com/uk/glossary-trading-terms/alpha-definition

7. https://www.pimco.com/us/en/insights/the-alpha-equation-myths-and-realities

8. https://eqtgroup.com/thinq/Education/what-is-alpha-in-investing

Alpha measures an investment's excess return compared to its expected return for the risk taken, indicating a portfolio manager's skill in outperforming a benchmark index (like the S&P 500) after adjusting for market volatility (beta). - Term: Alpha

read more
