A daily bite-size selection of top business content.
PM edition. Issue number 1197
Latest 10 stories.
"Google, OpenAI and Amazon all are racing to create tools that would allow for seamless AI-powered shopping." - Associated Press
When the Associated Press observes that "Google, OpenAI and Amazon all are racing to create tools that would allow for seamless AI-powered shopping", it is capturing a pivotal moment in the evolution of retail and of the internet itself. The quote sits at the intersection of several long-running trends: the shift from search to conversation, from static websites to intelligent agents, and from one-size-fits-all retail to deeply personalised, data-driven commerce.
Behind this single sentence lies a complex story of technological breakthroughs, strategic rivalry between the world's largest technology platforms, and a reimagining of how people discover, evaluate and buy what they need. It also reflects the culmination of decades of research in artificial intelligence, recommendation systems, human-computer interaction and digital economics.
The immediate context: AI agents meet the shopping basket
The Associated Press line comes against the backdrop of a wave of partnerships between AI platforms and major retailers. Google has been integrating its Gemini AI assistant with large retail partners such as Walmart and Sam's Club, allowing users to move from a conversational query directly to tailored product recommendations and frictionless checkout.
Instead of typing a product name into a search bar, a shopper can describe a situation or a goal, such as planning a camping trip or furnishing a first flat. Gemini then uses natural language understanding and retailer catalogues to surface relevant items, combine them into coherent baskets and arrange rapid delivery, in some cases within hours.1,3 The experience is meant to feel less like using a website and more like speaking to a highly knowledgeable personal shopper.
Walmart leaders have described this shift as a move from traditional search-based ecommerce to what they call "agent-led commerce" - shopping journeys mediated not by menus and filters but by AI agents that understand intent, context and personal history.1,2,3 For Google, this integration is both a way to showcase the capabilities of its Gemini models and a strategic response to OpenAI's work with retailers like Walmart, Etsy and a wide range of Shopify merchants through tools such as Instant Checkout.2,3
OpenAI, in parallel, has enabled users to browse and buy directly within ChatGPT, turning the chatbot into a commercial surface as well as an information tool.2,3 Amazon, for its part, has been weaving generative AI into its core marketplace, logistics and voice assistant, using AI models to improve product discovery, summarise reviews, optimise pricing and automate seller operations. Each company is betting that the next era of retail will be shaped by AI agents that can orchestrate entire end-to-end journeys from inspiration to doorstep.
From web search to agentic commerce
The core idea behind "seamless AI-powered shopping" is the replacement of fragmented, multi-step customer journeys with coherent, adaptive experiences guided by AI agents. Historically, online shopping has been built around search boxes, category trees and static product pages. The burden has been on the consumer to know what they want, translate that into search terms, sift through results and manually assemble baskets.
Agentic commerce reverses this burden. The AI system becomes an active participant: interpreting vague goals, proposing options, remembering preferences, coordinating logistics and handling payments, often across multiple merchants. Google and OpenAI have both underpinned their efforts with new open protocols designed to let AI agents communicate with a wide ecosystem of retailers, payment providers and loyalty systems.3,5
Google refers to its initiative as a Universal Commerce Protocol and describes it as a new standard that allows agents and systems to talk to each other across each step of the shopping journey.3,5 OpenAI, in turn, introduced the Agentic Commerce Protocol in partnership with Stripe, enabling ChatGPT and other agents to complete purchases from Etsy and millions of Shopify merchants.3 The technical details differ, but the strategic goal is shared: create an infrastructure layer that allows any capable AI agent to act as a universal shopping front end.
In practice, this means that a single conversation might involve discovering a new product, joining a retailer's loyalty scheme, receiving personalised offers, adding related items and completing payment - without ever visiting a conventional website or app. The Associated Press quote calls out the intensity of the competition between the major platforms to control this new terrain.
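To make that journey concrete, here is a deliberately simplified Python sketch of an agent-to-merchant exchange. It does not implement the actual Universal Commerce Protocol or Agentic Commerce Protocol; the endpoint, field names and payload shape are assumptions for illustration only.

```python
import requests  # assumed HTTP client; real protocols would add auth, signing and idempotency keys

MERCHANT_API = "https://merchant.example.com/agentic/v1"  # hypothetical endpoint

def plan_basket(goal: str) -> list[dict]:
    """Stand-in for the model's reasoning step: map a stated goal to candidate items."""
    if "camping" in goal:
        return [{"sku": "TENT-2P", "qty": 1}, {"sku": "SLEEPING-BAG", "qty": 2}]
    return []

def checkout(goal: str, loyalty_id: str | None = None) -> dict:
    order = {
        "items": plan_basket(goal),
        "loyalty_id": loyalty_id,            # the agent may have just enrolled the user in a loyalty scheme
        "delivery": {"speed": "same_day"},   # preference surfaced during the conversation
        "payment_token": "tok_demo",         # placeholder; real flows delegate to a payment provider
    }
    resp = requests.post(f"{MERCHANT_API}/orders", json=order, timeout=10)
    resp.raise_for_status()
    return resp.json()  # e.g. order id, ETA, total - fed back into the conversation
```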
The Associated Press as observer and interpreter
The Associated Press (AP), the attributed source of the quote, has a distinctive role in this story. Founded in 1846, AP is one of the world's oldest and most widely used news agencies. It operates as a non-profit cooperative, producing reporting that is syndicated globally and used as a baseline for coverage by broadcasters, newspapers and digital platforms.
AP has long been known for its emphasis on factual, neutral reporting, and over the past decade it has also become notable for its early adoption of AI in news production. It has experimented with automated generation of corporate earnings summaries, sports briefs and other data-heavy stories, while also engaging in partnerships with technology companies around synthetic media and content labelling.
By framing the competition between Google, OpenAI and Amazon as a "race" to build seamless AI shopping, AP is doing more than simply documenting product launches. It is drawing attention to the structural stakes: the question of who will mediate the everyday economic decisions of billions of people. AP's wording underscores both the speed of innovation and the concentration of power in a handful of technology giants.
AP's technology and business correspondents, in covering this domain, typically triangulate between company announcements, analyst commentary and academic work on AI and markets. The quote reflects that blend: it is rooted in concrete developments such as the integration of Gemini with major retailers and the emergence of new commerce protocols, but it also hints at broader theoretical debates about platforms, data and consumer autonomy.
Intellectual roots: from recommendation engines to intelligent agents
The idea of seamless, AI-mediated shopping is the visible tip of an intellectual iceberg that stretches back decades. Several overlapping fields contribute to the current moment: information retrieval, recommender systems, multi-sided platforms, behavioural economics and conversational AI. The leading theorists in these areas laid the groundwork for the systems now shaping retail.
Search and information retrieval
Long before conversational agents, the central challenge of online commerce was helping people find relevant items within vast catalogues. Researchers in information retrieval, such as Gerard Salton in the 1960s and 1970s, developed foundational models for document ranking and term weighting that later underpinned web search.
In the context of commerce, the key innovation was the integration of relevance ranking with commercial signals such as click-through rates, purchase behaviour and sponsored listings. Google's original PageRank algorithm, associated with Larry Page and Sergey Brin, revolutionised how information was organised on the web and provided the basis for search advertising - itself a driver of modern retail. As search became the dominant gateway to online shopping, the line between information retrieval and marketing blurred.
The move to AI-powered shopping agents extends this lineage. Instead of ranking static pages, large language models interpret natural language queries, generate synthetic descriptions and orchestrate actions such as adding items to a basket. The theoretical challenge shifts from simply retrieving documents to modelling context, intent and dialogue.
Recommender systems and personalisation
Much of seamless AI-powered shopping depends on the ability to personalise offers and predict what a particular consumer is likely to want. This traces back to work on recommender systems in the 1990s and 2000s. Pioneers such as John Riedl and Joseph Konstan developed early collaborative filtering systems that analysed user ratings to make personalised suggestions.
The famous Netflix Prize in the mid-2000s catalysed work on matrix factorisation and latent factor models, with researchers like Yehuda Koren demonstrating how to predict preferences from sparse interaction data. Amazon itself became synonymous with recommender systems, popularising the idea that "customers who bought this also bought" could drive significant incremental revenue.
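The latent-factor mechanics behind that work fit in a few lines of NumPy. The sketch below is illustrative only (made-up ratings, a tiny factor dimension), not any production recommender:

```python
import numpy as np

# Toy user x item ratings matrix (0 = unobserved)
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

k, lr, reg, epochs = 2, 0.01, 0.02, 2000
rng = np.random.default_rng(0)
U = rng.normal(scale=0.1, size=(R.shape[0], k))  # user latent factors
V = rng.normal(scale=0.1, size=(R.shape[1], k))  # item latent factors

observed = [(i, j) for i in range(R.shape[0]) for j in range(R.shape[1]) if R[i, j] > 0]
for _ in range(epochs):
    for i, j in observed:
        err = R[i, j] - U[i] @ V[j]
        U[i] += lr * (err * V[j] - reg * U[i])   # SGD step with L2 regularisation
        V[j] += lr * (err * U[i] - reg * V[j])

print(np.round(U @ V.T, 1))  # predicted ratings, including the previously unobserved cells
```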
Over time, recommendation theory has expanded to consider not just accuracy but diversity, serendipity and fairness. Work by researchers such as Gediminas Adomavicius and Alexander Tuzhilin analysed trade-offs between competing objectives in recommender systems, while others explored issues of filter bubbles and echo chambers.
In AI-powered shopping, these theoretical concerns are amplified. When a single conversational agent mediates choices across many domains, its recommendation logic effectively becomes a form of personalised market design. It can nudge users towards particular brands, balance commercial incentives with user welfare, and shape long-term consumption habits. The underlying theories of collaborative filtering, contextual bandits and reinforcement learning now operate in a more visible, consequential arena.
Multi-sided platforms and the economics of marketplaces
The race between Google, OpenAI and Amazon is also a contest between different platform models. Economists such as Jean-Charles Rochet and Jean Tirole provided the canonical analysis of multi-sided platforms - markets where intermediaries connect distinct groups of users, such as buyers and sellers, advertisers and viewers.
The theory of platform competition explains why network effects and data accumulation can produce powerful incumbents, and why controlling the interface through which users access multiple services confers strategic advantages. Amazon Marketplace, Google Shopping and ad networks, and now AI agents embedded in operating systems or browsers, can all be seen through this lens.
Further work by David Evans, Andrei Hagiu and others explored platform governance, pricing structures and the strategic choice between being a neutral intermediary and a competitor to one's own participants. These ideas are highly relevant when AI agents choose which merchants or products to recommend and on what terms.
Seamless AI shopping turns the agent itself into a platform. It connects consumers, retailers, payment services, logistics providers and loyalty schemes through a conversational interface. The Universal Commerce Protocol and the Agentic Commerce Protocol can be understood as attempts to standardise interactions within this multi-sided ecosystem.3,5 The underlying tensions - between openness and control, neutrality and self-preferencing - are illuminated by platform economics.
Behavioural economics, choice architecture and digital nudging
While traditional economics often assumes rational agents and transparent markets, the reality of digital commerce has always been shaped by design: the ordering of search results, the framing of options, the use of defaults, and the timing of prompts. Behavioural economists like Daniel Kahneman, Amos Tversky and Richard Thaler have demonstrated how real-world decision-making deviates from rational models and how "choice architecture" can influence outcomes.
In online retail, this has manifested as a rich literature on digital nudging: subtle interface choices that steer behaviour. Researchers in human-computer interaction and behavioural science have documented how factors such as social proof, scarcity cues and personalised messaging affect conversion.
AI-powered shopping agents add another layer. Instead of static designs, the conversation itself becomes the choice architecture. The way an AI agent frames options, in what order it presents them, how it responds to hesitation and how it explains trade-offs, all shape consumer welfare. Theorists working at the intersection of AI and behavioural economics are now grappling with questions of transparency, autonomy and manipulation in agentic environments.
Conversational AI and human-computer interaction
The ability to shop by talking to an AI depends on advances in natural language processing, dialogue modelling and user-centred design. The early work of Joseph Weizenbaum (ELIZA) and the subsequent development of chatbots provided the conceptual foundations, but the major leap came with deep learning and large language models.
Researchers such as Yoshua Bengio, Geoffrey Hinton and Yann LeCun advanced the neural network architectures that underpin today's generative models. Within natural language processing, work by many teams on sequence-to-sequence learning, attention mechanisms and transformer architectures led to systems capable of understanding and generating human-like text.
OpenAI popularised the transformer-based large language model with the GPT series, while Google researchers contributed foundational work on transformers and later developed models like BERT and its successors. These advances turned language interfaces from novelties into robust tools capable of handling complex, multi-turn interactions.
Human-computer interaction specialists, meanwhile, studied how people form mental models of conversational agents, how trust is built or undermined, and how to design dialogues that feel helpful rather than intrusive. The combination of technical capability and design insight has made it plausible for people to rely on an AI agent to curate shopping choices.
Autonomous agents and "agentic" AI
The term "agentic commerce" used by Walmart and Google points to a broader intellectual shift: viewing AI systems not just as passive tools but as agents capable of planning and executing sequences of actions.1,5 In classical AI, agent theory has its roots in work on autonomous systems, reinforcement learning and decision-making under uncertainty.
Reinforcement learning theorists such as Richard Sutton and Andrew Barto formalised the idea of an agent learning to act in an environment to maximise reward. In ecommerce, this can translate into systems that learn how best to present options, when to offer discounts or how to balance immediate sales with long-term customer satisfaction.
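As a toy illustration of that framing (purely illustrative reward numbers, not any production system), an epsilon-greedy agent can learn from feedback whether offering a discount pays off on average:

```python
import random

actions = ["no_discount", "discount"]
value = {a: 0.0 for a in actions}   # running estimate of average reward per action
count = {a: 0 for a in actions}

def simulated_reward(action: str) -> float:
    # assumed environment: discounts convert more often but earn less per sale
    if action == "discount":
        return 8.0 if random.random() < 0.5 else 0.0
    return 10.0 if random.random() < 0.3 else 0.0

for step in range(10_000):
    a = random.choice(actions) if random.random() < 0.1 else max(value, key=value.get)  # explore vs exploit
    r = simulated_reward(a)
    count[a] += 1
    value[a] += (r - value[a]) / count[a]   # incremental mean update

print(value)  # the agent converges on the action with the higher expected reward
```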
Recent research on tool-using agents goes further, allowing language models to call external APIs, interact with databases and coordinate services. In commerce settings, that means an AI can check inventory, query shipping options, apply loyalty benefits and complete payments - all within a unified reasoning loop. Google's and OpenAI's protocols effectively define the "environment" in which such agents operate and the "tools" they can use.3,5
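A minimal sketch of that reasoning loop, with entirely hypothetical stub tools (none of these correspond to Google's or OpenAI's actual protocol APIs), might look like:

```python
def check_inventory(sku: str) -> int: return 3                      # hypothetical stubs standing in
def shipping_options(sku: str) -> list[str]: return ["same_day", "two_day"]  # for merchant APIs
def apply_loyalty(user: str, total: float) -> float: return round(total * 0.95, 2)
def pay(user: str, amount: float) -> str: return "order-123"

def agent_purchase(user: str, sku: str, price: float) -> str:
    """One pass of a tool-using loop: observe, call tools, decide, act."""
    if check_inventory(sku) == 0:
        return "out of stock - propose an alternative"
    options = shipping_options(sku)
    total = apply_loyalty(user, price)          # loyalty benefit applied before payment
    order_id = pay(user, total)
    return f"ordered {sku} ({options[0]} delivery), total {total}, order {order_id}"

print(agent_purchase("alice", "TENT-2P", 129.0))
```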
The theoretical questions now concern safety, alignment and control: how to ensure that commercially motivated agents act in ways that are consistent with user interests and regulatory frameworks, and how to audit their behaviour when their decision-making is both data-driven and opaque.
Corporate protagonists: Google, OpenAI and Amazon
The Associated Press quote names three central actors, each with a distinct history and strategic posture.
Google: from search to Gemini-powered commerce
Google built its business on organising the world's information and selling targeted advertising against search queries. Its dominance in web search made it the default starting point for many online shopping journeys. As user behaviour has shifted towards conversational interfaces and specialised shopping experiences, Google has sought to extend its role from search engine to AI companion.
Gemini, Google's family of large language models and AI assistants, sits at the heart of this effort. By integrating Gemini into retail scenarios, Google is attempting to ensure that when people ask an AI for help - planning a project, solving a problem or buying a product - it is their agent, not a competitor's, that orchestrates the journey.1,3,5
Partnerships with retailers such as Walmart, Target, Shopify, Wayfair and others, combined with the Universal Commerce Protocol, are strategic levers in this competition.1,3,4,5 They allow Google to showcase Gemini as a shopping concierge while making it easier for merchants to plug into the ecosystem without bespoke integrations for each AI platform.
OpenAI: from research lab to commerce gateway
OpenAI began as a research-focused organisation with a mission to ensure that artificial general intelligence benefits humanity. Over time, it has commercialised its work through APIs and flagship products such as ChatGPT, which rapidly became one of the fastest-growing consumer applications in history.
As users started to rely on ChatGPT not just for information but for planning and decision-making, the platform became an attractive entry point for commerce. OpenAI's Instant Checkout feature and the Agentic Commerce Protocol reflect an attempt to formalise this role. By enabling users to buy directly within ChatGPT from merchants on platforms like Shopify and Etsy, OpenAI is turning its assistant into a transactional hub.2,3
In this model, the AI agent can browse catalogues, compare options and present distilled choices, collapsing the distance between advice and action. The underlying theory draws on both conversational AI and platform economics: OpenAI positions itself as a neutral interface layer connecting consumers and merchants, while also shaping how information and offers are presented.
Amazon: marketplace, infrastructure and the invisible AI layer
While the provided context focuses more explicitly on Google and OpenAI, Amazon is an equally significant player in AI-powered shopping. Its marketplace already acts as a giant, data-rich environment where search, recommendation and advertising interact.
Amazon has deployed AI across its operations: in demand forecasting, warehouse robotics, delivery routing, pricing optimisation and its Alexa voice assistant. It has also invested heavily in generative AI to enhance product search, summarise reviews and assist sellers with content creation.
From a theoretical standpoint, Amazon exemplifies the vertically integrated platform: it operates the marketplace, offers its own branded products, controls logistics and, increasingly, provides the AI services that mediate discovery. Its approach to AI shopping is therefore as much about improving internal efficiency and customer experience as about creating open protocols.
In the race described by AP, Amazon's strengths lie in its end-to-end control of the commerce stack and its granular data on real-world purchasing behaviour. As conversational and agentic interfaces become more common, Amazon is well placed to embed them deeply into its existing shopping flows.
Retailers as co-architects of AI shopping
Although the quote highlights technology companies, retailers such as Walmart, Target and others are not passive recipients of AI tools. They are actively shaping how agentic commerce unfolds. Walmart, for example, has worked with both OpenAI and Google, enabling Instant Checkout in ChatGPT and integrating its catalogue and fulfilment options into Gemini.1,2,3
Walmart executives have spoken about "rewriting the retail playbook" and closing the gap between "I want it" and "I have it" using AI.2 The company has also launched its own AI assistant, Sparky, within its app, and has been candid about how AI will transform roles across its workforce.2
These moves reflect a broader theoretical insight from platform economics: large retailers must navigate their relationships with powerful technology platforms carefully, balancing the benefits of reach and innovation against the risk of ceding too much control over customer relationships. By participating in open protocols and engaging multiple AI partners, retailers seek to maintain some leverage and avoid lock-in.
Other retailers and adjacent companies are exploring similar paths. Home Depot, for instance, has adopted Gemini-based agents to provide project planning and aisle-level guidance in stores, while industrial partners like Honeywell are using AI to turn physical spaces into intelligent, sensor-rich environments.5 These developments blur the line between online and offline shopping, extending the idea of seamless AI-powered commerce into bricks-and-mortar settings.
The emerging theory of AI-mediated markets
As AI agents become more entwined with commerce, several theoretical threads are converging into what might be called the theory of AI-mediated markets:
- Information symmetry and asymmetry: AI agents can, in principle, reduce information overload and help consumers navigate complex choices. But they also create new asymmetries, as platform owners may know far more about aggregate behaviour than individual users.
- Algorithmic transparency and accountability: When an AI agent chooses which products to recommend, the criteria may include relevance, profit margins, sponsorship and long-term engagement. Understanding and governing these priorities is an active area of research and regulation.
- Competition and interoperability: The existence of multiple commerce protocols and agent ecosystems raises questions about interoperability, switching costs and the potential for AI-mediated markets to become more or less competitive than their predecessors.
- Personalisation versus autonomy: Enhanced personalisation can make shopping more efficient and enjoyable but may also narrow exposure to alternatives or gently steer behaviour in ways that users do not fully perceive.
- Labour and organisational change: As AI takes on more of the cognitive labour of retail - from customer service to merchandising - the roles of human workers evolve. The theoretical work on technology and labour markets gains a new frontier in AI-augmented retail operations.
Researchers from economics, computer science, law and sociology are increasingly studying these dynamics, building on the earlier theories of platforms, recommendations and behavioural biases but extending them into a world where the primary interface to the market is itself an intelligent agent.
Why this moment matters
The Associated Press quote distils a complex, multi-layered transformation into a single observation: the most powerful technology firms are in a race to define how we shop in an age of AI. The endpoint of that race is not just faster checkout or more targeted ads. It is a restructuring of the basic relationship between consumers, merchants and the digital intermediaries that connect them.
Search boxes and product grids are giving way to conversations. Static ecommerce sites are being replaced or overlaid by agents that can understand context, remember preferences and act on our behalf. The theories of information retrieval, recommendation, platforms and behavioural economics that once described separate facets of digital commerce are converging in these agents.
Understanding the backstory of this quote - the intellectual currents, corporate strategies and emerging protocols behind it - is essential for grasping the stakes of AI-powered shopping. It is not merely a technological upgrade; it is a shift in who designs, controls and benefits from the everyday journeys that connect intention to action in the digital economy.
References
1. https://pulse2.com/walmart-and-google-turn-ai-discovery-into-effortless-shopping-experiences/
2. https://www.thefinance360.com/walmart-partners-with-googles-gemini-to-offer-ai-shopping-assistant-to-shoppers/
3. https://www.businessinsider.com/gemini-chatgpt-openai-google-competition-walmart-deal-2026-1
4. https://retail-insider.com/retail-insider/2026/01/google-expands-ai-shopping-with-walmart-shopify-wayfair/
5. https://cloud.google.com/transform/a-new-era-agentic-commerce-retail-ai
6. https://winningwithwalmart.com/walmart-teams-up-with-google-gemini-what-it-means-for-shoppers-and-suppliers/

"The Exponential Smoothing technique is a powerful forecasting method that applies exponentially decreasing weights to past observations. This method prioritizes recent information, making it significantly more responsive than SMAs to sudden shifts." - Simple exponential smoothing (SES) -
Simple Exponential Smoothing (SES) is the simplest form of exponential smoothing, a time series forecasting method that applies exponentially decreasing weights to past observations, prioritising recent data to produce responsive forecasts for series without trend or seasonality.1,2,3,5
Core Definition and Mechanism
SES generates point forecasts by recursively updating a single smoothed level value, \( \ell_t \), using the formula:

\[
\ell_t = \alpha y_t + (1 - \alpha) \ell_{t-1}
\]

where \( y_t \) is the observation at time \( t \), \( \ell_{t-1} \) is the previous level, and \( \alpha \) (with \( 0 < \alpha < 1 \)) is the smoothing parameter controlling the weight on the latest observation.1,2,3,5 The forecast for all future periods is then the current level: \( \hat{y}_{t+h|t} = \ell_t \).5

Unrolling the recursion reveals exponentially decaying weights:

\[
\hat{y}_{t+1|t} = \alpha \sum_{j=0}^{t-1} (1 - \alpha)^j y_{t-j} + (1 - \alpha)^t \ell_0
\]

Recent observations receive higher weights (\( \alpha \) for the newest), forming a geometric series that decays rapidly, making SES more reactive to changes than simple moving averages (SMAs).1,3 In practice, \( \alpha \) and the initial level \( \ell_0 \) are typically estimated by minimising a loss function such as the sum of squared errors (SSE).1,3
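Because the recursion is so short, it can be implemented directly. The sketch below (illustrative variable names, not taken from any of the cited libraries) applies the level update and returns the flat forecast:

```python
import numpy as np

def ses_forecast(y, alpha, level0=None, horizon=5):
    """Simple exponential smoothing: recursive level update plus flat forecast."""
    y = np.asarray(y, dtype=float)
    level = y[0] if level0 is None else level0   # seed the level (often y[0] or an optimised value)
    fitted = []
    for obs in y:
        fitted.append(level)                     # one-step-ahead forecast made before seeing obs
        level = alpha * obs + (1 - alpha) * level
    return np.array(fitted), np.full(horizon, level)  # every future period equals the current level

# Toy usage: a higher alpha reacts faster to the level shift in the data
fitted, fc = ses_forecast([10, 12, 11, 13, 20, 21, 22], alpha=0.5)
print(fc)
```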
Key Properties and Applications
- Parameter Interpretation: High \( \alpha \) (near 1) emphasises recent data, ideal for volatile series; low \( \alpha \) (near 0) acts like a global average, filtering noise in stable series.1,2
- Assumptions: Best for stationary data without trend or seasonality; extensions like ETS(A,N,N) address limitations via state-space models.1,4,5
- Implementation: Widely available in libraries (e.g., smooth::es() in R, statsmodels.tsa.SimpleExpSmoothing in Python).1,2
- Advantages: Simple, computationally efficient, intuitive for practitioners.1,5 Limitations include producing point forecasts only; prediction intervals require the later state-space (ETS) formulation.1
Examples show SES tracking level shifts effectively with moderate \( \alpha \), outperforming naïve methods on non-trending data.1,5
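For comparison, the same model is available off the shelf. A usage sketch with statsmodels (assuming a recent version that supports the initialization_method argument) lets the optimiser choose \( \alpha \) and the initial level by minimising SSE:

```python
import numpy as np
from statsmodels.tsa.holtwinters import SimpleExpSmoothing

y = np.array([10, 12, 11, 13, 20, 21, 22], dtype=float)

# statsmodels estimates alpha and the initial level by minimising the sum of squared errors
fit = SimpleExpSmoothing(y, initialization_method="estimated").fit()

print(fit.params["smoothing_level"])  # estimated alpha
print(fit.forecast(3))                # flat forecast for the next 3 periods
```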
Robert G. Brown (1925–2023) is the pioneering theorist most closely linked to SES, having first set out the method in his 1956 paper on exponential smoothing for predicting demand, which introduced the recursive formula and its inventory applications.1,3
Biography: Born in the US, Brown earned degrees in physics and engineering, serving in the US Navy during WWII on radar and signal processing - experience that shaped his interest in smoothing noisy data.3 Post-war, at the Naval Research Laboratory and later in industry roles (e.g., Autonetics), he tackled operational forecasting amid Cold War demands for efficient supply chains. His 1959 book Statistical Forecasting for Inventory Control popularised SES for business, showing how exponentially weighted averages could cut inventory costs and stockouts. Brown's innovations extended to double and triple smoothing for trends and seasonality, influencing ARIMA and modern ETS frameworks.1,3,5 His work developed in parallel with Charles Holt's independently devised trend-smoothing method (the basis of Holt-Winters); he consulted for firms such as GE and authored over 50 papers. Honoured by INFORMS, Brown kept a practical focus that bridged theory and strategy, making SES a cornerstone of demand forecasting in supply chain management.3
References
1. https://openforecast.org/adam/SES.html
2. https://www.influxdata.com/blog/exponential-smoothing-beginners-guide/
3. https://en.wikipedia.org/wiki/Exponential_smoothing
4. https://nixtlaverse.nixtla.io/statsforecast/docs/models/simpleexponentialsmoothing.html
5. https://otexts.com/fpp2/ses.html
6. https://qiushiyan.github.io/fpp/exponential-smoothing.html
7. https://learn.netdata.cloud/docs/developer-and-contributor-corner/rest-api/queries/single-or-simple-exponential-smoothing-ses

"Much of the market continues to find it difficult to raise venture capital funding. Non-AI companies have accounted for just 35% of deal value through Q3 2025, while representing more than 60% of completed deals." - Pitchbook
PitchBook's data through Q3 2025 reveals a stark disparity in venture capital (VC) funding, where non-AI companies captured just 35% of total deal value despite comprising over 60% of deals, underscoring investor preference for AI-driven opportunities amid market caution.1,4,5
Context of the Quote
This statistic, sourced from PitchBook's Q3 2025 Venture Monitor (in collaboration with the National Venture Capital Association), highlights the "flight to quality" trend dominating VC dealmaking. Through the first nine months of 2025, overall deal counts reached 3,990 in Q1 alone (up 11% quarter-over-quarter), with total value hitting $91.5 billion - a post-2022 high driven largely by AI sectors.4,5 However, smaller and earlier-stage non-AI startups received only 36% of total value, the decade's lowest share, as investors prioritized larger, AI-focused rounds amid uncertainties like tariffs, market volatility, and subdued consumer sentiment.3,4

Fundraising for VC funds also plummeted, with Q1 2025 seeing just 87 vehicles close at $10 billion - the lowest activity in over a decade - and dry powder nearing $300 billion but deploying slowly.4 Exit activity hinted at recovery ($56 billion in Q1 from 385 deals) but faltered due to paused IPOs (e.g., Klarna, StubHub) and reliance on outliers like CoreWeave's IPO, which accounted for nearly 40% of value.4

PitchBook's H1 2025 VC Tech Survey of 32 investors confirmed this shift: 52% see AI disrupting fintech (up from 32% in H2 2024), with healthcare, enterprise tech, and cybersecurity following suit, while VC outlooks soured (only 38% expect rising funding, down from 58%).1 The quote encapsulates a market where volume persists but value concentrates in AI, leaving non-AI firms struggling for capital in a selective environment.
Backstory on PitchBook
PitchBook, founded in 2007 by John Gabbert in Seattle, emerged as a leading data provider for private capital markets from humble origins as a simple Excel-based tool for tracking VC and private equity deals. Acquired by Morningstar in 2016 for $225 million, it has grown into an authoritative platform aggregating data on over 3 million companies, 1.5 million funds, and millions of deals worldwide, powering reports like the PitchBook-NVCA Venture Monitor.3,4,5 Its Q3 2025 analysis draws from proprietary datasets as of late 2025, offering granular insights into deal counts, values, sector breakdowns, and fundraising—essential for investors navigating post-2022 VC normalization. PitchBook's influence stems from its real-time tracking and predictive modeling, cited across industry reports for benchmarking trends like AI dominance and liquidity pressures.1,2,4
Leading Theorists on VC Market Dynamics and AI Concentration
The quote aligns with foundational theories on VC cycles, power laws, and technological disruption. Key thinkers include:
- Bill Janeway (author of Doing Capitalism in the Innovation Economy, 2012): A veteran VC at Warburg Pincus, Janeway theorized VC as a "three-legged stool" of government R&D, entrepreneurial risk-taking, and financial engineering. He predicted funding concentration in breakthrough tech like AI during downturns, as investors seek "moonshots" amid capital scarcity - mirroring 2025's non-AI value drought.1,4
- Peter Thiel (co-founder of PayPal and Founders Fund; Zero to One, 2014): Thiel's "definite optimism" framework argues VCs favor monopolistic, tech-dominant firms (e.g., AI) over competitive, commoditized ones, enforcing power-law distributions where 80-90% of returns come from 1-2% of deals. This explains non-AI firms' deal volume without value, as Thiel warns against "indefinite optimism" in crowded sectors.4
- Andy Kessler (former hedge fund manager; Wall Street Journal columnist): Kessler described the VC "spray and pray" model evolving into selective bets during liquidity crunches, predicting AI-like waves would eclipse legacy sectors - evident in 2025's fintech AI disruption forecasts.1
- Scott Kupor (a16z managing partner; Secrets of Sand Hill Road, 2019): Kupor analyzes LP-VC dynamics, noting how dry powder buildup (nearing $300B in 2025) leads to extended fund timelines and AI favoritism, as LPs demand outsized returns amid low distributions.1,2,4
- Diane Mulcahy (formerly of the Kauffman Foundation, where she co-authored critiques of VC returns): Mulcahy critiqued VC overfunding bubbles, advocating "patient capital" for non-hyped sectors; her warnings resonate in 2025's fundraising cliff and non-AI funding gaps.4
These theorists collectively frame 2025's trends as a power-law amplification of AI amid cyclical caution, building on historical VC patterns from the dot-com bust to post-2008 recovery.
References
1. https://www.foley.com/insights/publications/2025/06/investor-insights-overview-pitchbook-h1-2025-vc-tech-survey/
2. https://www.sganalytics.com/blog/us-venture-capital-outlook-2025/
3. https://www.deloitte.com/us/en/services/audit-assurance/articles/trends-in-venture-capital.html
4. https://www.junipersquare.com/blog/vc-q1-2025
5. https://nvca.org/wp-content/uploads/2025/10/Q3-2025-PitchBook-NVCA-Venture-Monitor.pdf

"If you want to get a preview of what everyone else is going to be dealing with six months from now, there's basically not much better you can do than watching what developers are talking about right now." - Nathaniel Whittemore - AI Daily Brief - On: Tailwind CSS and AI disruption
This observation captures a pattern that has repeated itself through every major technology wave of the past half-century. The people who live closest to the tools - the engineers, open source maintainers and framework authors - are usually the first to encounter both the power and the problems that the rest of the world will later experience at scale. In the current artificial intelligence cycle, that dynamic is especially clear: developers are experimenting with new models, agents and workflows months before they become mainstream in business, design and everyday work.
Nathaniel Whittemore and the AI Daily Brief
The quote comes from Nathaniel Whittemore, better known in technology circles as NLW, the host of The AI Daily Brief: Artificial Intelligence News and Analysis (formerly The AI Breakdown).4,7,9 His show has emerged as a daily digest and analytical lens on the rapid cascade of AI announcements, research papers, open source projects and enterprise case studies. Rather than purely cataloguing news, Whittemore focuses on how AI is reshaping business models, labour, creative work and the broader economy.4
Whittemore has built a reputation as an interpreter between worlds: the fast-moving communities of AI engineers and builders on the one hand, and executives, policymakers and non-technical leaders on the other. Episodes range from detailed walkthroughs of specific tools and models to long-read analyses of how organisations are actually deploying AI in the field.1,5 His recurring argument is that the most important AI stories are not just technical; they are about context, incentives and the way capabilities diffuse into real workflows.1,4
On his show and in talks, Whittemore frequently returns to the idea that AI is best understood through its users: the people who push tools to their limits, improvise around their weaknesses and discover entirely new categories of use. In recent years, that has meant tracking developers who integrate AI into code editors, build autonomous agents, or restructure internal systems around AI-native processes.3,8 The quote about watching developers is, in effect, a mental model for anyone trying to see around the next corner.
Tailwind CSS as the context for the quote
The quote lands inside a very specific story: Tailwind CSS as a case study in AI-enabled demand with AI-damaged monetisation.
Tailwind is an open-source, utility-first CSS framework that became foundational to modern front-end development. It is widely adopted by developers and heavily used by AI coding tools. Tailwind’s commercial model, however, depends on a familiar open-source pattern: the core framework is free, and revenue comes from paid add-ons (the “Plus” tier). Critically, the primary channel to market for those paid offerings was the documentation.
AI broke that channel.
As AI coding tools improved, many developers stopped visiting documentation pages. Instead, they asked the model and got the answer immediately—often derived from scraped docs and community content. Usage of Tailwind continued to grow, but the discovery path for paid offerings weakened because humans no longer needed to read the docs. In plain terms: the product stayed popular, but the funnel collapsed.
That is why this story resonated beyond CSS. AI tools boosted Tailwind's adoption while removing the need for humans to visit its documentation - the primary channel through which users discovered the paid "Plus" offerings that fund maintenance. Once AI started answering questions directly from scraped content, fewer doc visits meant fewer conversions, and a widely used framework suddenly struggled to monetise the very popularity AI helped accelerate. For any business that relies on "users visit our site, then convert," Tailwind is not a niche developer drama. It is a preview.
AI Disruption Seen from the Builder Front Line
In the AI era, this pattern is amplified. AI capabilities roll out as research models, APIs and open source libraries long before they are wrapped in polished consumer interfaces. Developers are often the first group to:
- Benchmark new models, probing their strengths and failure modes.
- Integrate them into code editors, data pipelines, content tools and internal dashboards.
- Build specialised agents tuned to niche workflows or industry-specific tasks.6,8
- Stress-test the economics of running models at scale and find where they can genuinely replace or augment existing systems.3,5
Whittemore's work sits precisely at this frontier. Episodes dissect the emergence of coding agents, the economics of inference, the rise of AI-enabled "tiny teams", and the way reasoning models are changing expectations around what software can autonomously do.3,8 He tracks how new agentic capabilities go from developer experiments to production deployments in enterprises, often in less than a year.3,5
His quote reframes this not as a curiosity but as a practical strategy: if you want to understand what your organisation or industry will be wrestling with in six to twelve months - from new productivity plateaus to unfamiliar risks - you should look closely at what AI engineers and open source maintainers are building and debating now.
Developers as Lead Users: Theoretical Roots
Behind Whittemore's intuition sits a substantial body of innovation research. Long before AI, scholars studied why certain groups seemed to anticipate the needs and behaviours of the wider market. Several theoretical strands help explain why watching developers is so powerful.
Eric von Hippel and Lead User Theory
MIT innovation scholar Eric von Hippel developed lead user theory to describe how some users experience needs earlier and more intensely than the general market. These lead users frequently innovate on their own, building or modifying products to solve their specific problems. Over time, their solutions diffuse and shape commercial offerings.
Developers often fit this lead user profile in technology markets. They are:
- Confronted with cutting-edge problems first - scaling systems, integrating new protocols, or handling novel data types.
- Motivated to create tools and workflows that relieve their own bottlenecks.
- Embedded in communities where ideas, snippets and early projects can spread quickly and be iterated upon.
Tailwind CSS itself reflects this: it emerged as a developer-centric solution to recurring front-end pain points, then radiated outward to reshape how teams approach design systems. In AI, developer-built tooling often precedes large commercial platforms, as seen with early AI coding assistants, monitoring tools and evaluation frameworks.3,8
Everett Rogers and the Diffusion of Innovations
Everett Rogers' classic work on the diffusion of innovations describes how new ideas spread through populations in phases: innovators, early adopters, early majority, late majority and laggards. Developers often occupy the innovator or early adopter categories for digital technologies.
Rogers stressed that watching these early groups offers a glimpse of future mainstream adoption. Their experiments reveal not only whether a technology is technically possible, but how it will be framed, understood and integrated into social systems. In AI, the debates developers have about safety, guardrails, interpretability and tooling are precursors to the regulatory, ethical and organisational questions that follow at scale.4,5
Clayton Christensen and Disruptive Innovation
Clayton Christensen's theory of disruptive innovation emphasises how new technologies often begin in niches that incumbents overlook. Early adopters tolerate rough edges because they value new attributes - lower cost, flexibility, or a different performance dimension - that established customers do not yet prioritise.
AI tools and frameworks frequently begin life like this: half-finished interfaces wrapped around powerful primitives, attractive primarily to technical users who can work around their limitations. Developers discover where these tools are genuinely good enough, and in doing so, they map the path by which a once-nascent capability becomes a serious competitive threat.
Open Source Communities and Collective Foresight
Another important line of thinking comes from research on open source software and user-driven innovation. Scholars such as Steven Weber and Yochai Benkler have explored how distributed communities coordinate to build complex systems without traditional firm structures.
These communities act as collective sensing networks. Bug reports, pull requests, issue threads and design discussions form a live laboratory where emerging practices are tested and refined. In AI, this is visible in the rapid evolution of open weights models, fine-tuning techniques, evaluation harnesses and orchestration frameworks. The tempo of progress in these spaces often sets the expectations which commercial vendors then have to match or exceed.6,8
AI-Specific Perspectives: From Labs to Production
Beyond general innovation theory, several contemporary AI thinkers and practitioners shed light on why developer conversations are such powerful predictors.
Andrej Karpathy and the Software 2.0 Vision
Former Tesla AI director Andrej Karpathy popularised the term "Software 2.0" to describe a shift from hand-written rules to learned neural networks. In this paradigm, developers focus less on explicit logic and more on data curation, model selection and feedback loops.
Under a Software 2.0 lens, developers are again early indicators. They experiment with prompt engineering, fine-tuning, retrieval-augmented generation and multi-agent systems. Their day-to-day struggles - with context windows, hallucinations, latency and cost-performance trade-offs - foreshadow the operational questions businesses later face when they automate processes or embed AI in products.
Ian Goodfellow, Yoshua Bengio and Deep Learning Pioneers
Deep learning pioneers such as Ian Goodfellow, Yoshua Bengio and Geoffrey Hinton illustrated how research breakthroughs travel from lab settings into practical systems. What began as improvements on benchmark datasets and academic competitions became, within a few years, the foundation for translation services, recommendation engines, speech recognition and image analysis.
Developers building on these techniques were the bridge between research and industry. They discovered how to deploy models at scale, handle real-world data, and integrate AI into existing stacks. In today's generative AI landscape, the same dynamic holds: frontier models and architectures are translated into frameworks, SDKs and reference implementations by developer communities, and only then absorbed into mainstream tools.
AI Engineers and the Rise of Agents
Recent work at the intersection of AI and software engineering has focused on agents: AI systems that can plan, call tools, write and execute code, and iteratively refine their own outputs. Industry reports summarised on The AI Daily Brief highlight how executives are beginning to grasp the impact of these agents on workflows and organisational design.5
Yet developers have been living with these systems for longer. They are the ones:
- Embedding agents into CI/CD pipelines and testing regimes.
- Using them to generate and refactor large codebases.3,6
- Designing guardrails and permissions to keep them within acceptable bounds.
- Developing evaluation harnesses to measure quality, robustness and reliability.8
Their experiments and post-mortems provide an unvarnished account of both the promise and the fragility of agentic systems. When Whittemore advises watching what developers are talking about, this is part of what he means: the real-world friction points that will later surface as board-level concerns.
Context, Memory and Business Adoption
Whittemore has also emphasised how advances in context and memory - the ability of AI systems to integrate and recall large bodies of information - are changing what is possible in the enterprise.1 He highlights features such as:
- Tools that allow models to access internal documents, code repositories and communication platforms securely, enabling organisation-specific reasoning.1
- Modular context systems that let AI draw on different knowledge packs depending on the task.1
- Emerging expectations that AI should "remember" ongoing projects, preferences and constraints rather than treating each interaction as isolated.1
Once again, developers are at the forefront. They are wiring these systems into data warehouses, knowledge graphs and production applications. They see early where context systems break, where privacy models need strengthening, and where the productivity gains are real rather than speculative.
From there, insights filter into broader business discourse: about data governance, AI strategy, vendor selection and the design of AI-native workflows. The lag between developer experience and executive recognition is, in Whittemore's estimate, often measured in months - hence his six-month framing.
From Developer Talk to Strategic Foresight
The core message behind the quote is a practical discipline for anyone thinking about AI and software-driven change:
- Follow where developers invest their time. Tools that inspire side projects, plugin ecosystems and community events often signal deeper shifts in how work will be done.
- Listen to what frustrates them. Complaints about context limits, flaky APIs or insufficient observability reveal where new infrastructure, standards or governance will be needed.
- Pay attention to what they take for granted. When a capability stops being exciting and becomes expected - instant code search, semantic retrieval, AI-assisted refactoring - it is often a sign that broader expectations in the market will soon adjust.
- Watch the crossovers. When developer patterns show up in no-code tools, productivity suites or design platforms, the wave is moving from early adopters to the early majority.
Nathaniel Whittemore's work with The AI Daily Brief is, in many ways, a structured practice of this approach. By curating, analysing and contextualising what builders are doing and saying in real time, he offers a way for non-technical leaders to see the outlines of the future before it is evenly distributed.4,7,9 The Tailwind CSS example is one case; the ongoing wave of AI disruption is another. The constant, across both, is that if you want to know what is coming next, you start by watching the people building it.
References
1. https://pod.wave.co/podcast/the-ai-daily-brief-formerly-the-ai-breakdown-artificial-intelligence-news-and-analysis/ai-context-gets-a-major-upgrade
2. https://www.youtube.com/watch?v=MdfYA3xv8jw
3. https://www.youtube.com/watch?v=0EDdQchuWsA
4. https://podcasts.apple.com/us/podcast/the-ai-daily-brief-artificial-intelligence-news/id1680633614
5. https://www.youtube.com/watch?v=nDDWWCqnR60
6. https://www.youtube.com/watch?v=f34QFs7tVjg
7. https://open.spotify.com/show/7gKwwMLFLc6RmjmRpbMtEO
8. https://podcasts.apple.com/us/podcast/the-biggest-trends-from-the-ai-engineer-worlds-fair/id1680633614?i=1000711906377
9. https://www.audible.com/podcast/The-AI-Breakdown-Daily-Artificial-Intelligence-News-and-Discussions/B0C3Q4BG17

"Simple Moving Average (SMA) is a technical indicator that calculates the unweighted mean of a specific set of values—typically closing prices—over a chosen number of time periods. It is 'moving' because the average is continuously updated: as a new data point is added, the oldest one in the set is dropped." - Simple Moving Average (SMA)
Simple Moving Average (SMA) is a fundamental technical indicator in financial analysis and trading, calculated as the unweighted arithmetic mean of a security's closing prices over a specified number of time periods, continuously updated by incorporating the newest price and excluding the oldest.1,2,3
The SMA for a period of \( n \) days is given by:

\[
\text{SMA}_n = \frac{P_t + P_{t-1} + \cdots + P_{t-n+1}}{n}
\]

where \( P_t \) represents the closing price at time \( t \).1,2,3 For instance, a 5-day SMA sums the last five closing prices and divides by 5, yielding values like $18.60 from sample prices of $13, $18, $18, $20, and $24.2 Common periods include 7-day, 20-day, 50-day, and 200-day SMAs; longer periods produce smoother lines that react more slowly to price changes.1,5
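The arithmetic is easy to verify; a short pandas sketch (using the sample prices above, plus one extra value to show the window moving) reproduces the $18.60 figure:

```python
import pandas as pd

prices = pd.Series([13, 18, 18, 20, 24, 17], dtype=float)  # last value added to show the window moving

sma5 = prices.rolling(window=5).mean()
print(sma5.tolist())
# [nan, nan, nan, nan, 18.6, 19.4] - the first full window averages 13, 18, 18, 20, 24;
# the next drops 13 and adds 17: (18 + 18 + 20 + 24 + 17) / 5 = 19.4
```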
Applications in Trading
SMAs smooth price fluctuations to reveal underlying trends: prices above the SMA indicate an uptrend, while prices below signal a downtrend.1,4 Key uses include:
- Trend identification: The SMA's slope shows trend direction and strength.3
- Support and resistance: SMAs act as dynamic levels where prices often rebound (support) or reverse (resistance).1,5
- Crossover signals:
- Golden Cross: Shorter-term SMA (e.g., 5-day) crosses above longer-term SMA (e.g., 20-day), suggesting a buy.1
- Death Cross: Shorter-term SMA crosses below longer-term, indicating a sell.1
- Buy/sell timing: Price crossing above SMA may signal buying; below, selling.2,4
As a lagging indicator relying on historical data, SMA equal-weights all points, unlike the Exponential Moving Average (EMA), which prioritises recent prices for greater responsiveness.2
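The distinction, and the crossover logic described above, can be sketched in pandas with synthetic prices (the 5/20-period windows and variable names are illustrative choices, not a recommendation):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
close = pd.Series(100 + rng.normal(0, 1, 120).cumsum())  # synthetic closing prices

sma_fast = close.rolling(5).mean()
sma_slow = close.rolling(20).mean()
ema_fast = close.ewm(span=5, adjust=False).mean()   # EMA for comparison: recent prices get more weight

above = sma_fast > sma_slow
golden_cross = above & ~above.shift(1, fill_value=False)   # fast SMA crosses above slow SMA (buy signal)
death_cross = ~above & above.shift(1, fill_value=False)    # fast SMA crosses below slow SMA (sell signal)

print(close.index[golden_cross].tolist(), close.index[death_cross].tolist())
```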
Richard Donchian (1905–1997), often called the "father of trend following," pioneered systematic trading strategies built on moving averages, including early SMA applications, through the trend-following systems he developed in the mid-20th century.
Born in Hartford, Connecticut, to Armenian immigrant parents, Donchian graduated from Yale University in 1928 with a degree in economics. He began his career at A.A. Housman & Co. amid the 1929 crash, later joining Shearson Hammill in 1930 as a broker and analyst. Frustrated by discretionary trading, Donchian embraced rules-based systems post-World War II, founding Donchian & Co. in 1949 as the first commodity trading fund manager.
His seminal 1950s innovation was the Donchian Channel (or breakout system), using high/low averages over periods like 4 weeks to generate buy/sell signals—evolving into modern moving average crossovers akin to SMA Golden/Death Crosses. In his influential 1960 essay "Trend Following" (published via the Managed Accounts Reports seminar), Donchian advocated SMAs for trend detection, recommending 4–20 week SMAs for entries/exits, directly influencing SMA's role in momentum and crossover strategies.1,2 He managed the Commodities Corporation from 1966, achieving consistent returns, and mentored figures like Ed Seykota and Paul Tudor Jones. Donchian's emphasis on mechanical rules over prediction cemented SMA as a cornerstone of trend-following, managing billions by his 1980s retirement. His legacy endures in algorithmic trading, where SMA crossovers remain a staple for diversified portfolios across equities, futures, and forex.1,5,6
References
1. https://www.alphavantage.co/simple_moving_average_sma/
2. https://corporatefinanceinstitute.com/resources/career-map/sell-side/capital-markets/simple-moving-average-sma/
3. https://toslc.thinkorswim.com/center/reference/Tech-Indicators/studies-library/R-S/SimpleMovingAvg
4. https://www.youtube.com/watch?v=TRy9InVeFc8
5. https://www.schwab.com/learn/story/how-to-trade-simple-moving-averages
6. https://www.cmegroup.com/education/courses/technical-analysis/understanding-moving-averages.html

"AI’s buildout is also happening at a potentially unprecedented speed and scale. This shift to capital-intensive growth from capital-light, is profoundly changing the investment environment – and pushing limits on multiple fronts, physical, financial and socio-political." - Blackrock
The quote highlights BlackRock's observation that artificial intelligence (AI) infrastructure development is advancing at an extraordinary pace and magnitude, shifting economic growth models from low-capital-intensity (e.g., software-driven scalability) to high-capital demands, while straining physical infrastructure like power grids, financial systems through massive leverage needs, and socio-political frameworks amid geopolitical tensions.1,2
Context of the Quote
This statement emerges from BlackRock's 2026 Investment Outlook, published by the BlackRock Investment Institute (BII), the firm's research arm focused on macro trends and portfolio strategy. It encapsulates discussions from BlackRock's internal 2026 Outlook Forum in late 2025, where AI's "buildout"—encompassing data centers, chips, and energy infrastructure—dominated debates among portfolio managers.2 Key concerns included front-loaded capital expenditures (capex) estimated at $5-8 trillion globally through 2030, creating a "financing hump" as revenues lag behind spending, potentially requiring increased leverage in an already vulnerable financial system.1,3,5 Physical limits like compute capacity, materials, and especially U.S. power grid strain were highlighted, with AI data centers projected to drive massive electricity demand amid U.S.-China strategic competition.2 Socio-politically, it ties into "mega forces" like geopolitical fragmentation, blurring public-private boundaries (e.g., via stablecoins), and policy shifts from inflation control to neutral stances, fostering market dispersion where only select AI beneficiaries thrive.2,4 BlackRock remains pro-risk, overweighting U.S. AI-exposed stocks, active strategies, private credit, and infrastructure while underweighting long-term Treasuries.1,5
BlackRock and the Quoted Perspective
BlackRock, the world's largest asset manager with nearly $14 trillion in assets under management as of late 2025, issues annual outlooks to guide institutional and retail investors.3 The quote aligns with BII's framework of "mega forces"—structural shifts like AI, geopolitics, and financial evolution—launched years prior to frame investments in a fragmented macro environment.2 Key voices include Rick Rieder, BlackRock's Chief Investment Officer of Fixed Income, who in related 2026 insights emphasized AI as a "cost and margin story," potentially slashing labor costs (55% of business expenses) by 5%, unlocking $1.2 trillion in annual U.S. savings and $82 trillion in present-value corporate profits.4 BII analysts note AI's speed surpasses prior tech waves, with capex ambitions making "micro macro," though uncertainties persist on revenue capture by tech giants versus broader dispersion.1,3
Backstory on Leading Theorists of AI's Economic Transformation
The quote draws on decades of economic theory about technological revolutions, capital intensity, and growth limits, pioneered by thinkers who analyzed how innovations like electrification and computing reshaped productivity, investment, and society.
-
Robert Gordon (The Rise and Fall of American Growth, 2016): Gordon, an NBER economist, argues that U.S. productivity growth has slowed since 1970 as the returns from past innovations like electricity and sanitation diminished; he acknowledges AI's potential but warns of "hump"-like front-loaded costs without guaranteed back-loaded gains—mirroring BlackRock's financing concerns.3,4
-
Erik Brynjolfsson and Andrew McAfee (The Second Machine Age, 2014; Machine, Platform, Crowd, 2017): MIT scholars at the Initiative on the Digital Economy posit AI enables exponential productivity via automation of cognitive tasks, shifting from capital-light digital scaling to infrastructure-heavy buildouts (e.g., data centers), but predict "recombination" winners amid labor displacement and inequality—echoing BlackRock's dispersion and socio-political strains.4
-
Daron Acemoglu and Simon Johnson (Power and Progress, 2023): MIT economists critique tech optimism, asserting AI's direction depends on institutional choices; undirected buildouts risk elite capture and gridlock (physical/financial limits), not broad prosperity, aligning with BlackRock's U.S.-China rivalry and policy debates.2
-
Nicholas Crafts (historical productivity scholar): Building on 20th-century analyses, Crafts documented electrification's 1920s-1930s "productivity paradox"—decades of heavy capex before payoffs—paralleling AI's current phase, where investments outpace adoption.1
-
Jensen Huang (NVIDIA CEO, practitioner-theorist): While not academic, Huang's 2024-2025 forecasts of $1 trillion+ annual AI capex by 2030 popularized the "buildout" narrative, influencing BlackRock's scale estimates and energy focus.3,5
These theorists underscore AI as a capital-intensive pivot akin to the Second Industrial Revolution, but accelerated, with BlackRock synthesizing their ideas into actionable investment guidance amid 2025-2026 market highs (e.g., Nasdaq peaks) and bouts of volatility (e.g., tech routs).2,3
References
1. https://www.blackrock.com/americas-offshore/en/insights/blackrock-investment-institute/outlook
2. https://www.medirect.com.mt/updates/news/all-news/blackrock-commentary-ai-front-and-center-at-our-2026-forum/
3. https://www.youtube.com/watch?v=Ww7Zy3MAWAs
4. https://www.blackrock.com/us/financial-professionals/insights/investing-in-2026
5. https://www.blackrock.com/us/financial-professionals/insights/ai-stocks-alternatives-and-the-new-market-playbook-for-2026
6. https://www.blackrock.com/corporate/insights/blackrock-investment-institute/publications/outlook

|
| |
| |
VIX is the ticker symbol and popular name for the CBOE Volatility Index, a popular measure of the stock market's expectation of volatility based on S&P 500 index options. It is calculated and disseminated on a real-time basis by the CBOE, and is often referred to as the fear index. - The VIX
**The VIX, or CBOE Volatility Index (ticker symbol ^VIX), measures the market's expectation of *30-day forward-looking volatility* for the S&P 500 Index, calculated in real-time from the weighted prices of S&P 500 (SPX) call and put options across a wide range of strike prices.** Often dubbed the "fear index", it quantifies implied volatility as a percentage, reflecting investor uncertainty and anticipated price swings—higher values signal greater expected turbulence, while lower values indicate calm markets.1,2,3,4,5
Key Characteristics and Interpretation
- Calculation method: The VIX is derived from the midpoints of real-time bid/ask prices for near-term SPX options (typically the first and second expirations). It aggregates their variances, interpolates to a constant 30-day horizon, takes the square root to obtain a standard deviation, and multiplies by 100 to express annualised implied volatility. A VIX of 13.77, for instance, implies the S&P 500 is expected to stay within roughly ±13.77% over the next year—one standard deviation, or about 68% probability—with smaller scaled equivalents for shorter horizons such as 30 days (see the sketch after this list for the conversion).1,3
- Market signal: It correlates inversely with the S&P 500—rising during stress (e.g., readings above 30 signal extreme expected swings; the index spiked above 80 during the 2008 crisis) and falling in stable markets. The long-term average is roughly 18.47; readings below 20 suggest moderate risk, while readings below 15 may hint at complacency.1,2,4
- Uses: Traders gauge sentiment, hedge positions, or trade VIX futures/options/products. It reflects option premiums as "insurance" costs, not historical volatility.1,2,5
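As a rough illustration of the scaling mentioned above, the sketch below (a simplification using the standard square-root-of-time rule, not the CBOE's full variance-swap methodology) converts an annualised VIX reading into an approximate one-standard-deviation move over a shorter horizon:

```python
import math

def expected_move(vix_level: float, days: int = 30, spot: float = 1.0) -> float:
    """One-standard-deviation (~68%) expected move over `days`, given an
    annualised VIX level quoted in percent (e.g., 20.0 for 20%)."""
    annual_vol = vix_level / 100.0
    horizon_vol = annual_vol * math.sqrt(days / 365.0)
    return spot * horizon_vol

# Example: VIX at 20 implies roughly a +/-5.7% one-sigma move over 30 days.
print(round(expected_move(20.0, days=30) * 100, 1))  # ~5.7
```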
Historical Context and Levels
| VIX Range | Interpretation | Example Context |
| --- | --- | --- |
| 0-15 | Optimism, low volatility | Normal bull markets2 |
| 15-25 | Moderate volatility | Typical conditions2 |
| 25-30 | Turbulence, waning confidence | Pre-crisis jitters2 |
| 30+ | High fear, extreme swings | 2008 crisis (>50%)1 |
Extreme spikes are short-lived as traders adjust exposures.1,4
Sheldon Natenberg stands out as the premier theorist linking volatility strategies to indices like the VIX, through his seminal work Option Volatility and Pricing (first published 1988, McGraw-Hill; updated editions ongoing), a cornerstone for professionals trading volatility via options—the core input for VIX calculation.1,3
Biography: Born in the US, Natenberg began as a pit trader on the Chicago Board Options Exchange (CBOE) floor in the 1970s-1980s, during the explosive growth of listed options post-1973 CBOE founding. He traded equity and index options, honing expertise in volatility dynamics amid early market innovations. By the late 1980s, he distilled decades of floor experience into his book, which demystifies implied volatility surfaces, vega (volatility sensitivity), volatility skew, and strategies like straddles/strangles—directly underpinning VIX methodology introduced in 1993.3 Post-trading, Natenberg became a senior lecturer at the Options Institute (CBOE's education arm), training thousands of traders until retiring around 2010. He consults and speaks globally, influencing modern vol trading.
Relationship to VIX: Natenberg's framework predates and informs VIX computation, emphasising how option prices embed forward volatility expectations—precisely what the VIX aggregates from SPX options. His models for pricing under volatility regimes (e.g., mean-reverting processes) guide VIX interpretation and trading (e.g., volatility arbitrage). Traders rely on his "vol cone" and skew analysis to contextualise VIX spikes, making his work indispensable for "fear index" strategies. No other theorist matches his practical CBOE-rooted fusion of volatility theory and VIX-applied tactics.1,2,3,4
References
1. https://corporatefinanceinstitute.com/resources/career-map/sell-side/capital-markets/vix-volatility-index/
2. https://www.nerdwallet.com/investing/learn/vix
3. https://www.td.com/ca/en/investing/direct-investing/articles/understanding-vix
4. https://www.ig.com/en/indices/what-is-vix-how-do-you-trade-it
5. https://www.cboe.com/tradable-products/vix/
6. https://www.fidelity.com.sg/beginners/what-is-volatility/volatility-index
7. https://www.youtube.com/watch?v=InDSxrD4ZSM
8. https://www.spglobal.com/spdji/en/education-a-practitioners-guide-to-reading-vix.pdf

|
| |
| |
"AI is not only an innovation itself but has the potential to accelerate other innovation." - Blackrock
This quote originates from BlackRock's 2026 Investment Outlook published by its Investment Institute, emphasizing AI's dual role as a transformative technology and a catalyst for broader innovation across sectors like connectivity, security, and physical automation.6 BlackRock positions AI as a "mega force" driving digital disruption, with potential to automate tasks, enhance productivity, and unlock economic growth by enabling faster advancements in other fields.5,6
Context of the Quote
The statement reflects BlackRock's strategic focus on AI as a cornerstone of long-term investment opportunities amid rapid technological evolution. In the 2026 Investment Outlook, BlackRock highlights AI's capacity to go beyond task automation, fostering an "intelligence revolution" that amplifies innovation in interconnected technologies.1,6 This aligns with BlackRock's actions, including launching active ETFs like the iShares A.I. Innovation and Tech Active ETF (BAI), which targets 20-40 global AI companies across infrastructure, models, and applications to capture growth in the AI stack.1,8 Tony Kim, head of BlackRock's fundamental equities technology group, described this as seizing "outsized and overlooked investment opportunities across the full stack of AI and advanced technologies."1 Similarly, the firm views active ETFs as the "next frontier in investment innovation," expanding access to AI-driven returns.1
BlackRock's commitment extends to massive infrastructure investments. In 2024, it co-founded the Global AI Infrastructure Investment Partnership (GAIIP, later AIP) with Global Infrastructure Partners (GIP), Microsoft, and MGX, aiming to mobilize up to $100 billion for U.S.-focused data centers and power infrastructure to support AI scaling.2,3,9 Larry Fink, BlackRock's Chairman and CEO, stated these investments "will help power economic growth, create jobs, and drive AI technology innovation," underscoring AI's role in revitalizing economies.2 By 2025, NVIDIA and xAI joined AIP, reinforcing its open-architecture approach to accelerate AI factories and supply chains.3 BlackRock executives like Alex Brazier argue AI investments face no bubble risk; instead, capacity constraints in computing power and data centers demand more capital.4
BlackRock's Backstory and Leadership
BlackRock, the world's largest asset manager with $11.5 trillion in assets, evolved from a fixed-income specialist founded in 1988 by Larry Fink and partners under Blackstone's umbrella into a global powerhouse after its 1994 spin-off and its 2009 acquisition of Barclays Global Investors (BGI).2 Under Fink's leadership since inception, BlackRock pioneered ETFs via iShares (acquired with BGI in 2009) and the Aladdin risk-management platform, and now manages $32 billion in U.S. active ETFs.1 Its AI strategy integrates proprietary insights from the BlackRock Investment Institute, which identifies AI as interplaying with other "mega forces" like geopolitics and sustainability.5,6 Fink, a mortgage-backed securities innovator during the 1980s savings-and-loan crisis, has championed infrastructure and tech since taking BlackRock public in 1999; his AIP comments frame AI as a multi-trillion-dollar opportunity.2,3
Leading Theorists on AI as an Innovation Accelerator
The idea of AI accelerating other innovations traces to foundational thinkers in technology diffusion, general-purpose technologies (GPTs), and computational economics:
-
Erik Brynjolfsson and Andrew McAfee (MIT): In The Second Machine Age (2014) and subsequent works, they argue AI as a GPT—like electricity—initially boosts productivity slowly but then accelerates innovation across industries by enabling data-driven decisions and automation.5,6 Their research quantifies AI's "exponential" complementarity, where it amplifies human ingenuity in fields like biotech and materials science.
-
Bengt Holmström and Paul Milgrom (Nobel laureates in 2016 and 2020, respectively): Their principal-agent and incentive theories underpin AI's role in aligning incentives for innovation; AI reduces information asymmetries, speeding R&D in multi-agent systems like supply chains.2
-
Jensen Huang (NVIDIA CEO): A practitioner-theorist, Huang describes accelerated computing and generative AI as powering the "next industrial revolution," converting data into intelligence to propel every industry—echoed in his AIP role.2,3
-
Satya Nadella (Microsoft CEO): Frames AI as driving "growth across every sector," with infrastructure as the enabler for breakthroughs, aligning with BlackRock's partnerships.2
-
Historical roots: Building on Solow's productivity paradox (1987)—why computers took decades to boost growth—theorists like Robert Gordon contrast narrow tech impacts with AI's potential for broad acceleration, as BlackRock's outlook affirms.6
These perspectives inform BlackRock's view: AI isn't isolated but a multiplier, demanding infrastructure to realize its full accelerative power.1,2,6
References
1. https://www.investmentnews.com/etfs/blackrock-broadens-active-etf-shelf-with-ai-and-tech-funds/257815
2. https://news.microsoft.com/source/2024/09/17/blackrock-global-infrastructure-partners-microsoft-and-mgx-launch-new-ai-partnership-to-invest-in-data-centers-and-supporting-power-infrastructure/
3. https://ir.blackrock.com/news-and-events/press-releases/press-releases-details/2025/BlackRock-Global-Infrastructure-Partners-Microsoft-and-MGX-Welcome-NVIDIA-and-xAI-to-the-AI-Infrastructure-Partnership-to-Drive-Investment-in-Data-Centers-and-Enabling-Infrastructure/default.aspx
4. https://getcoai.com/news/blackrock-exec-says-ai-investments-arent-in-a-bubble-capacity-is-the-real-problem/
5. https://www.blackrock.com/corporate/insights/blackrock-investment-institute/publications/mega-forces/artificial-intelligence
6. https://www.blackrock.com/corporate/insights/blackrock-investment-institute/publications/outlook
7. https://www.blackrock.com/uk/individual/products/339936/blackrock-ai-innovation-fund
8. https://www.blackrock.com/us/financial-professionals/products/339081/ishares-a-i-innovation-and-tech-active-etf
9. https://www.global-infra.com/news/mgx-blackrock-global-infrastructure-partners-and-microsoft-welcome-kuwait-investment-authority-kia-to-the-ai-infrastructure-partnership/

|
| |
| |
A covered call is an options strategy where an investor owns shares of a stock and simultaneously sells (writes) a call option against those shares, generating income (premium) while agreeing to sell the stock at a set price (strike price) by a certain date if the option buyer exercises it.1,2,3 - Covered call
Key Components and Mechanics
- Long stock position: The investor must own the underlying shares, which "covers" the short call and eliminates the unlimited upside risk of a naked call.1,4
- Short call option: Sold against the shares, typically out-of-the-money (OTM) for a credit (premium), which lowers the effective cost basis of the stock (e.g., stock bought at $45 minus $1 premium = $44 breakeven).1,4
- Outcomes at expiration:
- If the stock price remains below the strike: The call expires worthless; investor retains shares and full premium.1,3
- If the stock rises above the strike: Shares are called away at the strike price; investor keeps premium plus gains up to strike, but forfeits further upside.1,5
- Profit/loss profile: Maximum profit is capped at (strike price − cost basis + premium); downside risk mirrors outright stock ownership, partially offset by the premium, which provides only a limited cushion rather than full protection.1,5
Example
Suppose an investor owns 100 shares of XYZ at a $45 cost basis, now trading at $50. They sell one $55-strike call for $1 premium ($100 credit):
- Effective cost basis: $44.
- Breakeven: $44.
- Max profit: $1,100 if called away at $55.
- Max loss: $4,400 if the stock falls to $0 (the full $44 effective cost basis per share—substantial, but not unlimited).1 A payoff sketch follows the table below.
| Scenario | Stock Price at Expiry | Outcome | Profit/Loss per Share |
| --- | --- | --- | --- |
| Below strike | $50 | Call expires; keep shares + premium | +$6 ($50 − $45 + $1, stock gain unrealised) |
| At strike | $55 | Called away; keep premium + gains to strike | +$11 ($55 − $45 + $1) |
| Above strike | $60 | Called away; capped upside | +$11 (same as above) |
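The per-share figures in the example and table can be reproduced with a short Python sketch (illustrative only, using the hypothetical XYZ numbers above):

```python
def covered_call_pl_per_share(price_at_expiry: float,
                              cost_basis: float = 45.0,
                              strike: float = 55.0,
                              premium: float = 1.0) -> float:
    """Profit/loss per share at expiration for stock bought at `cost_basis`
    with one call sold at `strike` for `premium`."""
    stock_pl = min(price_at_expiry, strike) - cost_basis  # upside capped at the strike
    return stock_pl + premium                              # premium is kept in all cases

for px in (0.0, 50.0, 55.0, 60.0):
    print(px, covered_call_pl_per_share(px))
# 0.0 -> -44.0, 50.0 -> 6.0, 55.0 -> 11.0, 60.0 -> 11.0
```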
Advantages and Risks
- Advantages: Generates income from premiums (time decay benefits seller), enhances yield on stagnant holdings, no additional buying power needed beyond shares.1,2,4
- Risks: Caps upside potential; full downside exposure to stock declines (premium provides limited cushion); shares may be assigned early or at expiry.1,5
Variations
- Synthetic covered call: Buy deep in-the-money long call + sell short OTM call, reducing capital outlay (e.g., $4,800 vs. $10,800 traditional).2
William J. O'Neil (1933-2023) is the most relevant theorist linked to the covered call strategy through his pioneering work on CAN SLIM, a growth-oriented investing system that emphasises high-momentum stocks well suited to income-overlay strategies like covered calls. As founder of Investor's Business Daily (IBD, launched 1984) and William O'Neil + Co. Inc. (1963), he popularised data-driven stock selection using historical price/volume analysis of market winners since 1880, making his methodology a natural foundation for selecting underlyings in covered calls that balance income with growth potential.
Biography and Relationship to Covered Calls
O'Neil began as a stockbroker at Hayden, Stone & Co. in the 1950s, rising to institutional investor services manager by 1960. Frustrated by inconsistent advice, he founded William O'Neil + Co. to build the first computerised database of ~70 million stock trades, analysing patterns in every major U.S. winner. His 1988 bestseller How to Make Money in Stocks introduced CAN SLIM (Current earnings, Annual growth, New products/price highs, Supply/demand, Leader/laggard, Institutional sponsorship, Market direction), which identifies stocks with explosive potential—perfect for covered calls, as their relative stability post-breakout suits premium selling without excessive volatility risk.
O'Neil's direct tie to options: through IBD's Leaderboards and MarketSmith tools, he advocated "buy-and-hold with income enhancement" via covered calls on CAN SLIM leaders, explicitly recommending OTM calls on existing holdings to boost yields (e.g., 2-5% monthly premiums). Screening research by AAII (American Association of Individual Investors) shows CAN SLIM stocks outperforming the market roughly threefold, providing a robust base for the strategy's income-plus-moderate-growth profile. A self-made millionaire by 30 (via an early Xerox investment), O'Neil took an empirical approach—avoiding speculation, focusing on facts—that contrasts with pure options theorists and positions covered calls as a conservative overlay on his core equity model. He retired from daily IBD operations in 2015, and his books, such as 24 Essential Lessons for Investment Success (2000), which nods to options income tactics, remain influential.
References
1. https://tastytrade.com/learn/trading-products/options/covered-call/
2. https://leverageshares.com/en-eu/insights/covered-call-strategy-explained-comprehensive-investor-guide/
3. https://www.schwab.com/learn/story/options-trading-basics-covered-call-strategy
4. https://www.stocktrak.com/what-is-a-covered-call/
5. https://www.swanglobalinvestments.com/what-is-a-covered-call/
6. https://www.youtube.com/watch?v=wwceg3LYKuA
7. https://www.youtube.com/watch?v=NO8VB1bhVe0

|
| |
| |
“We can’t keep scaling compute, so the industry must scale efficiency instead.” - Kaoutar El Maghraoui, IBM Principal Research Scientist
This quote underscores a pivotal shift in AI development: as raw computational power reaches physical and economic limits, the focus must pivot to efficiency through optimized hardware, software co-design, and novel architectures like analog in-memory computing.1,2
Backstory and Context of Kaoutar El Maghraoui
Dr. Kaoutar El Maghraoui is a Principal Research Scientist at IBM's T.J. Watson Research Center in Yorktown Heights, NY, where she leads the AI testbed at the IBM Research AI Hardware Center—a global hub advancing next-generation accelerators and systems for AI workloads.1,2 Her work centers on the intersection of systems research and artificial intelligence, including distributed systems, high-performance computing (HPC), and AI hardware-software co-design. She drives open-source development and cloud experiences for IBM's digital and analog AI accelerators, emphasizing operationalization of AI in hybrid cloud environments.1,2
El Maghraoui's career trajectory reflects deep expertise in scalable systems. She earned her PhD in Computer Science from Rensselaer Polytechnic Institute (RPI) in 2007, following a Master's in Computer Networks (2001) and Bachelor's in General Engineering from Al Akhawayn University, Morocco. Early roles included lecturing at Al Akhawayn and research on IBM's AIX operating system—covering performance tuning, multi-core scheduling, Flash SSD storage, and OS diagnostics using IBM Watson cognitive tech.2,6 In 2017, she co-led IBM's Global Technology Outlook, shaping the company's AI leadership vision across labs and units.1,2
The quote emerges from her lectures and research on efficient AI deployment, such as "Powering the Future of Efficient AI through Approximate and Analog In-Memory Computing," which addresses performance bottlenecks in deep neural networks (DNNs), and "Platform for Next-Generation Analog AI Hardware Acceleration," which highlights Analog In-Memory Computing (AIMC) as a way to reduce energy losses in DNN inference and training.1 It also aligns with her co-authored 2026 paper "STARC: Selective Token Access with Remapping and Clustering for Efficient LLM Decoding on PIM Systems" (ASPLOS 2026), which targets efficiency in large language models via processing-in-memory (PIM).2 With more than 2,000 citations on Google Scholar, her contributions span AI hardware optimization and systems performance.8
Beyond research, El Maghraoui is an ACM Distinguished Member and Speaker, Senior IEEE Member, and adjunct professor at Columbia University. She holds awards like the 2021 Best of IBM, IBM Eminence and Excellence for advancing women in tech, 2021 IEEE TCSVC Women in Service Computing, and 2022 IBM Technical Corporate Award. Leadership roles include global vice-chair of Arab Women in Computing (ArabWIC), co-chair of IBM Research Watson Women Network (2019-2021), and program/general co-chair for Grace Hopper Celebration (2015-2016).1,2
Leading Theorists in AI Efficiency and Compute Scaling Limits
The quote resonates with foundational theories on compute scaling limits and efficiency paradigms, pioneered by key figures challenging Moore's Law extensions in AI hardware.
| Theorist | Key Contributions | Relevance to Quote |
| --- | --- | --- |
| Cliff Young & contributors (Google) | Co-founder of the MLPerf benchmarks and co-author of Google's TPU work; advanced hardware-aware neural architecture search (NAS) for DNN optimization on edge devices.1 | Demonstrates efficiency gains via NAS, directly echoing El Maghraoui's lectures on hardware-specific DNN design to bypass compute scaling.1 |
| Bill Dally (NVIDIA) | Pioneer of energy-efficient architectures, including work on processing-in-memory (PIM) and NVIDIA's tensor-core designs; has written extensively on the "end of Dennard scaling" (power-density limits since the mid-2000s).2 | Warns against endless compute scaling; promotes PIM and sparsity, aligning with El Maghraoui's STARC paper and analog accelerators.2 |
| Jeff Dean (Google) | Co-developed TensorFlow and Google's TPUs; has highlighted compute-optimal training results (e.g., DeepMind's 2022 Chinchilla scaling laws), which show that optimal compute allocation balances parameters and data.2 | Highlights the diminishing returns of pure compute scaling, urging efficiency in training/inference—core to IBM's AI Hardware Center focus.1,2 |
| Hadi Esmaeilzadeh (Georgia Tech) | Quantified post-Dennard limits ("dark silicon"), the "memory wall" and von Neumann bottlenecks; early advocate of neural acceleration and approximate computing.1 | Foundational for El Maghraoui's AIMC advocacy, supporting the case that analog and approximate methods can boost DNN efficiency by 10-100x over digital compute scaling.1 |
| Song Han (MIT) | Developed pruning, quantization, and NAS methods (e.g., Deep Compression, TinyML/MCUNet); showed 90%+ parameter reduction without accuracy loss.1 | Enables "scale efficiency" for real-world deployment, as in El Maghraoui's "Optimizing Deep Learning for Real-World Deployment" lecture.1 |
These theorists collectively established that post-Moore's Law (transistor density doubling every ~2 years, slowing since 2010s), AI progress demands efficiency multipliers: sparsity, analog compute, co-design, and beyond-von Neumann architectures. El Maghraoui's work operationalizes these at IBM scale, from cloud-native DL platforms to PIM for LLMs.1,2,6
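As a concrete illustration of one such efficiency multiplier, the sketch below (a generic, framework-free Python example of post-training weight quantization, not IBM's or any cited group's specific method) compresses float32 weights to int8, cutting memory 4x at the cost of a small reconstruction error:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric post-training quantization of float32 weights to int8.
    Returns the int8 tensor and the scale needed to dequantize."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(1024, 1024).astype(np.float32)   # stand-in weight matrix
q, s = quantize_int8(w)
print(w.nbytes // q.nbytes)                            # 4x memory reduction
print(float(np.max(np.abs(w - dequantize(q, s)))))     # small reconstruction error
```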
References
1. https://speakers.acm.org/speakers/el_maghraoui_19271
2. https://research.ibm.com/people/kaoutar-el-maghraoui
3. https://github.com/kaoutar55
4. https://orcid.org/0000-0002-1967-8749
5. https://www.sharjah.ac.ae/-/media/project/uos/sites/uos/research/conferences/wirf2025/webinars/dr-kaoutar-el-maghraoui-_webinar.pdf
6. https://s3.us.cloud-object-storage.appdomain.cloud/res-files/1843-Kaoutar_ElMaghraoui_CV_Dec2022.pdf
7. https://www.womentech.net/speaker/all/all/69100
8. https://scholar.google.com/citations?user=yDp6rbcAAAAJ&hl=en

|
| |
|