Global Advisors

Our selection of the top business news sources on the web.

AM edition. Issue number 1208


Quote: J.P. Morgan - On resources

"We believe the clean technology transition is igniting a new supercycle in critical commodities, with natural resource companies emerging as winners." - J.P. Morgan - On resources

When J.P. Morgan Asset Management framed the clean technology transition in these terms, it captured a profound shift underway at the intersection of climate policy, industrial strategy and global capital allocation.1,5 The quote stands at the heart of their analysis of how decarbonisation is reshaping demand for metals, minerals and energy, and why this is likely to support elevated commodity prices for years rather than months.1

The immediate context is the rapid acceleration of the energy transition. Governments have committed to net zero pathways, corporates face growing regulatory and investor pressure to decarbonise, and consumers are adopting electric vehicles and clean technologies at scale. J.P. Morgan argues that this is not merely an environmental story, but an economic retooling comparable in scale to previous industrial revolutions.1,4

Their research highlights two linked dynamics. First, the decarbonised economy is less fuel-intensive but far more materials-intensive. Replacing fossil fuel power with renewables requires vast quantities of copper, aluminium, nickel, lithium, cobalt, manganese and graphite to build solar and wind farms, grids and storage systems.1 Second, the speed of this transition matters as much as its direction. Even under conservative scenarios, J.P. Morgan estimates substantial increases in demand for critical minerals by 2030; under more ambitious net zero pathways, demand could rise by around 110% over that period, on top of the 50% increase already seen in the previous decade.1
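Taken at face value, those two figures compound. A quick back-of-the-envelope calculation (my arithmetic, not J.P. Morgan's) indexes critical mineral demand a decade ago at 1.0 and applies the two increases in sequence:

```python
prev_decade_growth = 0.50  # ~50% rise in critical mineral demand over the previous decade
net_zero_growth = 1.10     # ~110% further rise by 2030 under ambitious net zero pathways

base = 1.0                               # index demand a decade ago at 1.0
today = base * (1 + prev_decade_growth)  # demand today: 1.5x the earlier level
by_2030 = today * (1 + net_zero_growth)  # demand by 2030: roughly 3.15x

print(round(today, 2), round(by_2030, 2))
```

In other words, the ambitious scenario implies demand roughly tripling relative to its level a decade ago, which is the kind of compounding that underlies the supercycle framing.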

In this framing, natural resource companies - particularly miners and producers of critical minerals - shift from being perceived purely as part of the old carbon-heavy economy to being central enablers of clean technologies. J.P. Morgan points out that while fossil fuel demand will decline over time, the scale of required investment in metals and minerals, as well as transmission infrastructure, effectively re-ranks many resource businesses as strategic assets for the low-carbon future.1 Valuations that once reflected cyclical, late-stage industries may therefore underestimate the structural demand embedded in net zero commitments.

The quote also reflects J.P. Morgan's broader thinking on commodity and energy supercycles. Their research on energy markets describes a supercycle as a sustained period of elevated prices driven by structural forces that can last for a decade or more.3,4 In previous eras, those forces included post-war reconstruction and the rise of China as the world's industrial powerhouse. Today, they see the combination of chronic underinvestment in supply, intensifying climate policy, and rising demand for both traditional and clean energy as setting the stage for a new, complex supercycle.2,3,4

Within the firm, analysts have argued that higher-for-longer interest rates raise the cost of debt and equity for energy producers, reinforcing supply discipline and pushing up the marginal cost of production.3 At the same time, the rapid build-out of renewables is constrained by supply chain, infrastructure and key materials bottlenecks, meaning that legacy fuels still play a significant role even as capital increasingly flows towards clean technologies.3 This dual dynamic - structural demand for critical minerals on the one hand and a constrained, more disciplined fossil fuel sector on the other - underpins the conviction that a supercycle is forming across parts of the commodity complex.

The idea of commodity supercycles predates the current climate transition and has been shaped by several generations of theorists and empirical researchers. In the mid-20th century, economists such as Raúl Prebisch and Hans Singer first highlighted the long-term terms-of-trade challenges faced by commodity exporters, noting that prices for primary products tended to fall relative to manufactured goods over time. Their work prompted an early focus on structural forces in commodity markets, although it emphasised long-run decline rather than extended booms.

Later, analysts began to examine multi-decade patterns of rising and falling prices. Structural models of commodity prices observed that at major stages of economic development - such as the agricultural and industrial revolutions - commodity intensity tends to increase markedly, creating conditions for supercycles.4 These models distinguish between business cycles of a few years, investment cycles spanning roughly a decade, and longer supercycle components that can extend beyond 20 years.4 The supercycle lens gained prominence as researchers studied the commodity surge associated with China's breakneck urbanisation and industrialisation from the late 1990s to the late 2000s.

That China-driven episode became the archetype of a modern commodity supercycle: a powerful, sustained demand shock focused on energy, metals and bulk materials, amplified by long supply lead times and capital expenditure cycles. J.P. Morgan and other institutions have documented how this supercycle drove a 12-year uptrend in prices, culminating before the global financial crisis, followed by a comparably long down-cycle as supply eventually caught up and Chinese growth shifted to a less resource-intensive model.2,4

Academic and market theorists have since refined the concept. They argue that supercycles emerge when three elements coincide. First, there must be a structural, synchronised increase in demand, often tied to a global development episode or technological shift. Second, supply in key commodities must be constrained by geology, capital discipline, regulation or long project lead times. Third, macro-financial conditions - including real interest rates, inflation expectations and currency trends - must align to support investment flows into real assets. The question for today's transition is whether decarbonisation meets these criteria.

On the demand side, the clean tech revolution clearly resembles previous development stages in its resource intensity. J.P. Morgan notes that electric vehicles require significantly more minerals than internal combustion engine cars - roughly six times as much in aggregate when accounting for lithium, nickel, cobalt, manganese and graphite.1 Similarly, building solar and wind capacity, and the vast grid infrastructure to connect them, calls for much more copper and aluminium per unit of capacity than conventional power systems.1 The International Energy Agency's projections, which J.P. Morgan draws on, indicate that even under modest policy assumptions, renewable electricity capacity is set to increase by around 50% by 2030, with more ambitious net zero scenarios implying far steeper growth.1

Supply, however, has been shaped by a decade of caution. After the last supercycle ended, many mining and energy companies cut back capital expenditure, streamlined balance sheets and prioritised shareholder returns. Regulatory processes for new mines lengthened, environmental permitting became more stringent, and social expectations around land use and community impacts increased. The result is that bringing new supplies of copper, nickel or lithium online can take many years and substantial capital, creating a lag between price signals and physical supply.

Theorists of the investment cycle - often identified with work on 8 to 20-year intermediate commodity cycles - argue that such periods of underinvestment sow the seeds for the next up-cycle.4 When demand resurges due to a structural driver, constrained supply leads to persistent price pressures until investment, technology and substitution can rebalance the market. In the case of the energy transition, the requirement for large amounts of specific minerals, combined with concentrated supply in a small number of countries, intensifies this effect and introduces geopolitical considerations.

Another important strand of thought concerns the evolution of energy systems themselves. Analysts focusing on energy supercycles emphasise that transitions historically unfold over multiple decades and rarely proceed smoothly.3,4 Even as clean energy capacity expands rapidly, global energy demand continues to grow, and existing systems must meet rising consumption while new infrastructure is built. J.P. Morgan's energy research describes this as a multi-decade process of "generating and distributing the joules" required to both satisfy demand and progressively decarbonise.3 During this period, traditional energy sources often remain critical, creating complex price dynamics across oil, gas, coal and renewables-linked commodities.

Within this broader theoretical frame, the clean technology transition can be seen as a distinctive supercycle candidate. Unlike the China wave, which centred on industrialisation and urbanisation within one country, the net zero agenda is globally coordinated and policy-driven. It spans power generation, transport, buildings, industry and agriculture, and requires both new physical assets and digital infrastructure. Structural models referenced by J.P. Morgan note that such system-wide investment programmes have historically been associated with sustained periods of elevated commodity intensity.4

At the same time, there is active debate among economists and market strategists about the durability and breadth of any new supercycle. Some caution that efficiency gains, recycling and substitution could cap demand growth in certain minerals over time. Others point to innovation in battery chemistries, alternative materials and manufacturing methods that may reduce reliance on some critical inputs. Still others argue that policy uncertainty and potential fragmentation in global trade could disrupt smooth investment and demand trajectories. Theorists of supercycles emphasise that these are not immutable laws but emergent patterns that can be shaped by technology, politics and finance.

J.P. Morgan's perspective in the quoted insight acknowledges these uncertainties while underscoring the asymmetry in the coming decade. Even in conservative scenarios, their work suggests that demand for critical minerals rises substantially relative to recent history.1 Under more ambitious climate policies, the increase is far greater, and tightness in markets such as copper, nickel, cobalt and lithium appears likely, especially towards the end of the 2020s.1 Against this backdrop, natural resource companies with high-quality assets, disciplined capital allocation and credible sustainability strategies are positioned not as relics of the past, but as essential partners in delivering the energy transition.

This reframing has important implications for investors and corporates alike. For investors, it suggests that the traditional division between "old" resource-heavy industries and "new" clean tech sectors is too simplistic. The hardware of decarbonisation - from EV batteries and charging networks to grid-scale storage, wind turbines and solar farms - depends on a complex upstream ecosystem of miners, processors and materials specialists. For corporates, it highlights the strategic premium on securing access to critical inputs, managing long-term supply contracts, and integrating sustainability into resource development.

The quote from J.P. Morgan thus sits at the confluence of three intellectual streams: long-run theories of commodity supercycles, modern analysis of energy transition dynamics, and evolving views of how natural resource businesses fit into a low-carbon world. It encapsulates the idea that the path to net zero is not dematerialised; instead, it is anchored in physical assets, industrial capabilities and supply chains that must be financed, built and operated over many years. For those able to navigate this terrain - and for the theorists tracing its contours - the clean technology transition is not only an environmental imperative but also a defining economic narrative of the coming decades.

References

1. https://am.jpmorgan.com/hk/en/asset-management/adv/insights/market-insights/market-bulletins/clean-energy-investment/

2. https://www.foxbusiness.com/markets/biden-climate-change-fight-commodities-supercycle

3. https://www.jpmorgan.com/insights/global-research/commodities/energy-supercycle

4. https://www.jpmcc-gcard.com/digest-uploads/2021-summer/Page%2074_79%20GCARD%20Summer%202021%20Jerrett%20042021.pdf

5. https://am.jpmorgan.com/us/en/asset-management/institutional/card-list-libraries/sustainable-insights-climate-tab-us/

6. https://www.jpmorgan.com/insights/global-research/outlook/market-outlook

7. https://www.bscapitalmarkets.com/hungry-for-commodities-ndash-is-a-new-commodity-super-cycle-here.html

"We believe the clean technology transition is igniting a new supercycle in critical commodities, with natural resource companies emerging as winners." - Quote: J.P. Morgan

‌

‌

Term: Moltbot (formerly Clawdbot)

"Moltbot (formerly Clawdbot), a personal AI assistant, has gone viral within weeks of its launch, drawing thousands of users willing to tackle the technical setup required, even though it started as a scrappy personal project built by one developer for his own use." - Moltbot (formerly Clawdbot)

Moltbot (formerly Clawdbot) is an open-source, self-hosted personal AI assistant that runs continuously on your own hardware (for example a Mac mini, Raspberry Pi, old laptop, or low-cost cloud server) and connects to everyday messaging channels such as WhatsApp, Telegram, iMessage, or similar chat apps so that you can talk to it as if it were a human teammate rather than a traditional app.

Instead of living purely in the cloud like many mainstream assistants, it is designed as “an AI that actually does things”: it can execute real commands on your machine, including managing your calendar and email, browsing the web, organizing local files, and running terminal commands or scripts under your control.

At its core, Moltbot is an agentic system: you choose and configure the underlying large language model (Anthropic Claude, OpenAI models, or local models), and Moltbot wraps that model with tools and permissions so that the AI can observe state on your computer, decide on a sequence of actions, and iteratively move from a current state toward a desired state, much closer to a junior digital employee than a simple chatbot.

This agentic design makes it valuable for complex, multi-step workflows such as triaging inbound email, preparing briefings from documents and web sources, or orchestrating routine maintenance tasks, with the human defining objectives and guardrails while the assistant executes within those constraints. The project emphasizes a privacy-first, owner-controlled architecture: your prompts, files, and system access stay on the machine you control, with only model calls leaving the device when you opt to use a remote API, a proposition that has resonated strongly with developers and power users wary of funneling sensitive workstreams through opaque cloud ecosystems.
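The observe-decide-act cycle described above can be sketched as a small loop. This is an illustrative pattern only, not Moltbot's actual code; the `Agent` class, the tool table, and the inbox-triage example are all hypothetical:

```python
from dataclasses import dataclass, field


@dataclass
class Agent:
    """Minimal sketch of an agentic loop: observe state, pick an action
    from the tools the owner has permitted, act, and repeat until the
    goal is met or the iteration budget runs out."""
    goal: str
    allowed_tools: set
    log: list = field(default_factory=list)

    def run(self, state, tools, is_done, max_steps=10):
        for _ in range(max_steps):
            if is_done(state):          # desired state reached
                return state
            # A real system would let the model choose; here we take the
            # first permitted tool that reports it can make progress.
            for name, tool in tools.items():
                if name in self.allowed_tools and tool["applies"](state):
                    state = tool["run"](state)
                    self.log.append(name)
                    break
            else:
                break  # no permitted tool can act; stop rather than spin
        return state


# Toy example: triage an inbox by archiving unread mail one item at a time.
tools = {
    "archive": {
        "applies": lambda s: s["unread"] > 0,
        "run": lambda s: {"unread": s["unread"] - 1,
                          "archived": s["archived"] + 1},
    },
}
agent = Agent(goal="inbox zero", allowed_tools={"archive"})
final = agent.run({"unread": 3, "archived": 0}, tools,
                  is_done=lambda s: s["unread"] == 0)
```

The `for`/`else` arrangement stops the loop when no permitted tool can make progress, mirroring the guardrails idea: the agent acts only through capabilities its owner has explicitly granted.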

Moltbot’s origin story reinforces this positioning: it began in late 2025 as a scrappy personal project by Austrian engineer Peter Steinberger, best known for founding PSPDFKit (later rebranded Nutrient), a PDF and document-processing SDK that grew into infrastructure used by hundreds of millions of end users before being acquired by Insight Partners.

After exiting PSPDFKit and stepping away from day-to-day coding, Steinberger described a period of creative exhaustion, only to be pulled back into building when the momentum around modern AI—and especially Anthropic’s Claude models—convinced him he could turn “Claude Code into his computer,” effectively treating an AI coding environment and agent as the primary interface to his machine.

The first iteration of his assistant, Clawdbot (with its mascot character “Clawd,” a playful space lobster inspired by the name Claude), was built astonishingly quickly—early prototypes reportedly took around an hour—and shared as a personal tool that showed how an AI, wired into real system capabilities, could meaningfully reduce friction in managing a digital life.

Once Steinberger released the project publicly, traction was explosive: the repository rapidly attracted tens of thousands of GitHub stars (with some reports noting 50,000–60,000 stars within weeks), a fast-growing contributor base, and an active community Discord, as developers experimented with running Moltbot as a 24/7 “full-time AI employee” on cheap hardware.

Media coverage highlighted its distinctive blend of autonomy and practicality—“Claude with hands” rather than just a conversational agent—and its appeal to technically sophisticated users willing to accept a more involved setup process in exchange for real, system-level leverage over their workflows.

A trademark dispute over the similarity between “Clawd” and Anthropic’s “Claude” forced a rebrand to Moltbot in early 2026, but the underlying architecture, community, and “lobster soul” of the project remained intact, underscoring that the real innovation lies in the pattern of a self-hosted, action-oriented personal AI rather than in the specific name.

From a strategic perspective, Moltbot represents an emergent archetype: the personal AI infrastructure or “personal operating system” where an individual deploys a modular, agentic system on their own stack, integrates it tightly with their tools, and iteratively composes new capabilities over time.

This pattern shifts AI from being a generic productivity overlay to becoming part of the user’s core execution engine: instead of repeatedly solving the same problem, owners encapsulate solutions into reusable modules or “skills” that their assistant can call, turning one-off hacks into compounding leverage across research, coding, administration, and communication workflows.

In practice, this means that Moltbot is less a single product than a reference architecture for what it looks like when an individual or small team runs a persistent, deeply customized AI agent alongside them as a standing capability, blurring the line between software tool, co-worker, and infrastructure.

Strategy theorist: Daniel Miessler and the personal AI infrastructure thesis

Among contemporary strategic thinkers, Daniel Miessler offers one of the most closely aligned conceptual frameworks for understanding what Moltbot represents, through his work on “Personal AI Infrastructure (PAI)” and modular, agentic systems such as his own AI stack named “Kai.”

Miessler approaches AI not as a single application but as an evolving strategic platform: he describes PAI as an architecture built around a simple yet powerful iterative algorithm (current state → desired state, via verifiable iteration), implemented through a constellation of agents, tools, and skills that together execute work on the owner's behalf.

In his model, effective personal AI systems follow a clear hierarchy (goal → code → command-line tools → prompts → agents), so that automation is applied where it creates lasting leverage rather than superficial convenience, a philosophy that mirrors the way Moltbot encourages users first to define what they want done, then wire the assistant into concrete system actions.

Miessler’s backstory helps explain why his thinking is so relevant to Moltbot’s emergence. He is a long-time security and technology practitioner and the author of a widely read blog and podcast focused on the intersection of infosec, technology, and human behavior, where he has chronicled the gradual shift from isolated tools toward integrated, self-improving AI ecosystems.

Over the past several years he has documented building Kai as a unified agentic system to augment his own research and content creation, distilling a set of design principles: treat skills as modular units of domain expertise, maintain a custom history system that captures everything the system learns, and design both permanent specialist agents and dynamic agents that can be composed on demand for specific tasks.

These principles closely parallel what power users now attempt with Moltbot: they create persistent agents for recurring roles (research, coding, operations), attach them to specific tools and datasets, and then spin up temporary, task-specific flows as new problems arise, all running on personal or small-team infrastructure rather than within a vendor’s closed-box SaaS product.

The relationship between Miessler’s strategic ideas and Moltbot is best understood as conceptual rather than personal: Moltbot independently operationalizes many of the architectural patterns Miessler describes, turning the “personal AI infrastructure” thesis into a widely accessible, open-source implementation.

Both center on the same strategic shift: from AI as an occasional assistant that helps draft text, to AI as a continuously running, modular execution layer that acts across a user’s entire digital environment under explicit human objectives and constraints. In this sense, Miessler functions as a strategy theorist of the personal AI era, articulating the logic of agentic, owner-controlled systems, while Moltbot provides a vivid, viral case study of those ideas in practice—demonstrating how a single, well-designed personal AI stack can evolve from a private experiment into a community-driven platform that meaningfully changes how individuals and small firms execute work.

References

1. https://techcrunch.com/2026/01/27/everything-you-need-to-know-about-viral-personal-ai-assistant-clawdbot-now-moltbot/

2. https://metana.io/blog/what-is-moltbot-everything-you-need-to-know-in-2026/

3. https://dev.to/sivarampg/clawdbot-the-ai-assistant-thats-breaking-the-internet-1a47

4. https://www.macstories.net/stories/clawdbot-showed-me-what-the-future-of-personal-ai-assistants-looks-like/

5. https://www.youtube.com/watch?v=U8kXfk8en

"Moltbot (formerly Clawdbot), a personal AI assistant, has gone viral within weeks of its launch, drawing thousands of users willing to tackle the technical setup required, even though it started as a scrappy personal project built by one developer for his own use." - Term: Moltbot (formerly Clawdbot)

‌

‌

Quote: Kristalina Georgieva - Managing Director, IMF

"My main message here is the following: this is a tsunami hitting the labour market, and even in the best-prepared countries, I don't think we are prepared enough." - Kristalina Georgieva - Managing Director, IMF

Kristalina Georgieva's invocation of a "tsunami" represents far more than rhetorical flourish. Speaking at the World Economic Forum in Davos, the Managing Director of the International Monetary Fund articulated a diagnosis grounded in rigorous empirical analysis: artificial intelligence is not a speculative future threat but an immediate force already reshaping employment across every economy on earth. The metaphor itself carries profound significance - a tsunami denotes not merely disruption but overwhelming force, simultaneity, and inevitability. Critically, Georgieva's acknowledgement that even "best-prepared countries" remain inadequately equipped reveals the unprecedented scale and speed of this transformation.

The Scope of AI's Labour Market Impact

The International Monetary Fund's assessment provides quantifiable dimensions to this disruption. Georgieva's research indicates that 40 per cent of jobs globally will be impacted by artificial intelligence, with each affected role falling into one of three categories: enhancement (where AI augments human capability), elimination (where automation replaces human labour), or transformation (where roles are fundamentally altered). In advanced economies, this figure rises to 60 per cent - a staggering proportion that underscores the concentration of AI disruption in wealthy nations with greater technological infrastructure.

The distinction between jobs "touched" by AI and jobs eliminated proves crucial to understanding Georgieva's analysis. Enhancement and transformation may appear preferable to outright elimination, yet they still demand worker adjustment, skill development, and potentially geographic mobility. A job that is transformed but offers no wage improvement - as Georgieva has noted - may be economically worse for the worker even if technically retained. This nuance separates her analysis from both techno-optimist narratives and catastrophic predictions.

Perhaps most concerning is the asymmetric impact across age cohorts and development levels. Georgieva has specifically warned that AI is "like a tsunami hitting the labour market" for younger people entering the workforce. Entry-level positions - historically the gateway through which workers develop skills, build experience, and establish career trajectories - are precisely those most vulnerable to automation. This threatens to disrupt the intergenerational transmission of economic opportunity that has underpinned social mobility for decades.

Theoretical Foundations: The Labour Economics Lineage

Georgieva's analysis draws on decades of rigorous labour economics scholarship examining technological displacement and labour market adjustment. The intellectual lineage traces to David Autor, a leading MIT economist whose research has fundamentally shaped contemporary understanding of how technological change reshapes employment. Autor's seminal work demonstrates that whilst technology eliminates routine tasks - particularly routine cognitive work - it simultaneously creates demand for new skills and complementary labour. However, this adjustment is neither automatic nor painless; workers displaced from routine cognitive tasks often face years of unemployment or underemployment before transitioning to new roles, if they transition at all.

Autor's research, conducted over more than two decades, reveals a critical pattern: technological disruption creates a "hollowing out" of middle-skill employment. Routine cognitive tasks - data entry, basic accounting, straightforward analysis - have been progressively automated, whilst demand has polarised toward high-skill, high-wage positions and low-skill, low-wage service roles. This pattern, documented extensively in his work on computerisation and wage inequality, provides the empirical foundation for understanding why Georgieva emphasises that AI's impact cannot be left to market forces alone.

Building on Autor's framework, contemporary labour economists have extended analysis to examine the speed and scale of technological transition. The consensus among leading researchers - including Daron Acemoglu of MIT, who has written extensively on the relationship between technology and inequality - is that rapid technological change without deliberate policy intervention tends to exacerbate inequality rather than distribute gains broadly. Acemoglu's work emphasises that technology is not destiny; rather, the distributional outcomes of technological change depend fundamentally on institutional choices, regulatory frameworks, and investment in human capital.

Claudia Goldin, the 2023 Nobel Prize winner in Economics, has contributed essential research on the relationship between education, skills, and labour market outcomes across generations. Her historical analysis demonstrates that periods of rapid technological change have previously required corresponding investments in education and skills development. The gap between technological capability and educational preparedness has historically determined whether technological transitions benefit broad populations or concentrate gains among a narrow elite. Georgieva's warning about inadequate preparedness echoes Goldin's historical findings: without deliberate educational investment, technological transitions produce inequality.

The Productivity Paradox and Global Growth

Georgieva's analysis situates AI within a broader economic context of disappointing productivity growth. Global growth has remained underwhelming in recent years, with productivity growth stagnant except in the United States. This stagnation represents a fundamental economic problem: without productivity growth, living standards stagnate, and governments face fiscal pressures as tax revenues fail to grow with economic output.

AI represents, in Georgieva's assessment, the most potent force for reversing this trend. The IMF calculates that AI could boost global growth between 0.1 and 0.8 per cent annually - a seemingly modest range that carries enormous consequences. A 0.8 per cent productivity gain would restore growth to pre-pandemic levels, fundamentally altering global economic trajectories. Yet this upside scenario depends entirely on successful labour market adjustment and equitable distribution of AI's benefits. If AI generates productivity gains that concentrate wealth whilst displacing workers without adequate transition support, the aggregate growth figures mask profound distributional consequences.
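To see why that 0.1 to 0.8 percentage point range carries such weight, a quick compounding check (my arithmetic, not the IMF's) shows the cumulative output difference over a decade:

```python
def cumulative_boost(annual_gain: float, years: int = 10) -> float:
    """Cumulative output effect of a small annual growth boost, compounded."""
    return (1 + annual_gain) ** years - 1

low = cumulative_boost(0.001)   # 0.1 per cent extra growth per year
high = cumulative_boost(0.008)  # 0.8 per cent extra growth per year

# After ten years: roughly a 1% lift in the low case, about 8.3% in the high case.
print(f"{low:.1%} vs {high:.1%}")
```

An eightfold difference in the annual figure compounds into roughly an eightfold difference in the decade-end level of output, which is why where the world lands within that range matters so much.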

This productivity question connects directly to Georgieva's warning about preparedness. The IMF's research indicates that one in ten jobs in advanced economies already require substantially new skills - a figure that will accelerate as AI deployment expands. Yet educational and training systems globally remain poorly aligned with AI-era skill demands. Northern European countries - particularly Finland, Sweden, and Denmark - have historically invested in continuous skills development and educational flexibility, positioning their workforces better for technological transition. Most other nations, by contrast, maintain educational systems designed for industrial-era employment patterns, where workers acquired specific skills early in their careers and applied them throughout working lives.

The Global Inequality Dimension

Perhaps the most consequential aspect of Georgieva's analysis concerns the "accordion of opportunities" - her term for the diverging economic trajectories between advanced and developing economies. The 60 per cent figure for advanced economies versus 20-26 per cent for low-income countries reflects not merely different levels of AI adoption but fundamentally different economic capacities and institutional frameworks.

Advanced economies possess the infrastructure, capital, and institutional capacity to invest in AI whilst simultaneously managing labour market transition. They have educational systems capable of rapid adaptation, financial resources to fund reskilling programmes, and social safety nets to cushion displacement. Low-income countries risk being left behind - neither benefiting from AI's productivity gains nor receiving the investment in skills and social protection that might cushion displacement. This dynamic threatens to widen the global inequality gap that has been a persistent feature of economic development since the industrial revolution.

Georgieva's concern reflects research by economists including Branko Milanovic, who has documented how technological change interacts with global inequality. Milanovic's work demonstrates that technological transitions have historically benefited capital owners and high-skill workers whilst displacing lower-skill workers. Without deliberate policy intervention - progressive taxation, investment in education, social protection - technological change tends to increase inequality both within and between nations.

The Skills Gap and Educational Mismatch

Georgieva's analysis reveals a critical finding: some countries have more demand for new skills than supply, whilst others have more supply than demand. This mismatch is not random; it reflects decades of educational investment decisions. Northern European countries, which have invested continuously in education and skills development, face less severe skills gaps. Emerging market and developing economies, which have often prioritised other investments, face more significant misalignment between labour supply and employer demand.

The nature of required skills further complicates adjustment. Approximately half of the new skills demanded are information technology related: programming, data analysis and AI system management. The remaining skills span management, specific professional qualifications and, crucially, what Georgieva terms "learning how to learn." This last category proves essential because, as she emphasises, policymakers cannot assume they know what the jobs of tomorrow will be. Rather than teaching particular knowledge, educational systems must cultivate adaptability and continuous learning capacity.

This pedagogical insight reflects research by Erik Brynjolfsson and Andrew McAfee, economists at MIT who have extensively studied the relationship between technological change and employment. Their research emphasises that in periods of rapid technological change, the ability to learn new skills matters more than possession of specific technical knowledge. Workers who can adapt, learn new tools, and transfer skills across domains fare better than those with deep expertise in narrow domains vulnerable to automation.

The Entry-Level Jobs Crisis

Georgieva's specific warning about entry-level positions deserves particular attention. AI tends to eliminate entry-level functions-the positions through which younger workers historically entered labour markets, developed experience, and progressed to more senior roles. This threatens to disrupt a fundamental mechanism of economic mobility and skills development.

The concern extends beyond immediate employment. Entry-level positions serve crucial functions beyond income generation: they provide work experience, develop professional networks, teach workplace norms and expectations, and signal to employers that workers possess basic competence. When AI eliminates these positions, younger workers face not merely reduced job availability but disrupted pathways to career development. A 25-year-old unable to secure entry-level experience faces substantially different career prospects than one who progresses through conventional career ladders.

Yet Georgieva's data also offers grounds for cautious optimism. Her research indicates that a 1 per cent increase in new skills leads to a 1.3 per cent increase in overall employment. This suggests that skill development creates positive spillovers-workers with new skills generate demand for complementary services and lower-skilled labour, expanding employment opportunities across the economy. The fear that AI will shrink total employment, whilst understandable, is not yet supported by empirical evidence. Rather, the challenge is reshaping employment-ensuring that displaced workers can transition to new roles and that new opportunities emerge in sufficient quantity and geographic proximity to displaced workers.

Geopolitical and Strategic Dimensions

Georgieva's warning arrives amid broader economic fragmentation. Trade tensions, geopolitical competition, and the shift from a rules-based global economic order toward competing blocs create additional uncertainty. AI development is increasingly intertwined with strategic competition between major powers, particularly between the United States and China. This geopolitical dimension means that AI's labour market impact cannot be separated from questions of technological sovereignty, supply chain resilience, and economic security.

The strategic competition over AI development creates perverse incentives. Nations may prioritise rapid AI deployment to maintain competitive advantage, even when labour market adjustment remains incomplete. This dynamic could accelerate job displacement without corresponding investment in worker transition support, exacerbating the preparedness gap Georgieva identifies.

Policy Imperatives and the Preparedness Challenge

Georgieva's analysis suggests several imperatives for policymakers. First, labour market adjustment cannot be left to market forces alone; deliberate investment in education, training, and social protection is essential. Second, the distribution of AI's benefits matters as much as aggregate productivity gains; without attention to equity, AI could deepen inequality within and between nations. Third, regulation and ethical frameworks must be established proactively rather than reactively, shaping AI development toward socially beneficial outcomes.

The preparedness challenge Georgieva emphasises reflects a fundamental asymmetry: AI development proceeds at technological pace, whilst educational systems, labour market institutions, and policy frameworks change at institutional pace. Educational systems require years to redesign curricula, train teachers, and produce graduates with new skills. Labour market institutions-unemployment insurance systems, pension arrangements, occupational licensing frameworks-were designed for industrial-era employment patterns and adapt slowly to new realities. Policy frameworks require legislative action, which moves even more slowly.

This temporal mismatch between technological change and institutional adaptation explains why even well-prepared countries remain inadequately equipped. Finland, Sweden, and Denmark-the countries Georgieva identifies as best positioned-have invested continuously in education and skills development, yet even these nations acknowledge that current preparedness remains insufficient for the scale and speed of AI-driven change.

The Broader Economic Context

Georgieva's warning must be understood within the context of her broader economic outlook. The IMF has upgraded global growth projections to 3.3 per cent for 2026 and 3.2 per cent for 2027, yet these figures fall short of pre-pandemic historical averages of 3.8 per cent. The primary constraint on growth is productivity-the output generated per unit of labour and capital. Without productivity growth, economies cannot generate sufficient income growth to fund public services, support ageing populations, or improve living standards.

AI represents the most significant potential source of productivity growth available to policymakers. Yet realising this potential requires not merely deploying AI technology but managing the labour market transition it necessitates. Georgieva's warning that even best-prepared countries remain inadequately equipped reflects recognition that the challenge is not technological but institutional and political-whether societies can muster the will to invest in worker transition, education, and social protection whilst simultaneously deploying transformative technology.

The stakes could hardly be higher. Successful management of AI's labour market impact could restore productivity growth, accelerate global development, and improve living standards broadly. Failure to manage this transition adequately could concentrate AI's benefits among capital owners and high-skill workers whilst displacing millions of workers without adequate transition support, deepening inequality and potentially destabilising societies. Georgieva's metaphor of a tsunami captures this duality: the same force that could lift all boats could also devastate those unprepared for its arrival.

References

1. https://globaladvisors.biz/2026/01/23/quote-kristalina-georgieva-managing-director-imf/

2. https://www.weforum.org/podcasts/meet-the-leader/episodes/ai-skills-global-economy-imf-kristalina-georgieva/

3. https://fortune.com/2026/01/23/imf-chief-warns-ai-tsunami-entry-level-jobs-gen-z-middle-class/

4. https://timesofindia.indiatimes.com/education/careers/news/ai-is-hitting-entry-level-jobs-like-a-tsunami-imf-chief-kristalina-georgieva-urges-students-to-prepare-for-change/articleshow/127381917.cms

"My main message here is the following: this is a tsunami hitting the labour market, and even in the best-prepared countries, I don't think we are prepared enough." - Quote: Kristalina Georgieva - Managing Director, IMF

Term: Black Scholes

"The Black-Scholes model (or Black-Scholes-Merton model) is a fundamental mathematical formula that calculates the theoretical fair price of European-style options, using inputs like the underlying stock price, strike price, time to expiration, risk-free interest rate and volatility." - Black Scholes

Black-Scholes Model (Black-Scholes-Merton Model)

The Black-Scholes model, also known as the Black-Scholes-Merton model, is a pioneering mathematical framework for pricing European-style options, which can only be exercised at expiration. It derives a theoretical fair value for call and put options by solving a parabolic partial differential equation—the Black-Scholes equation—under risk-neutral valuation, replacing the asset's expected return with the risk-free rate to eliminate arbitrage opportunities.1,2,5

Core Formula and Inputs

The model prices a European call option ( C ) as:

C = S_0 N(d_1) - K e^{-rT} N(d_2)

where:

  • ( S_0 ): current price of the underlying asset (e.g., stock).3,7
  • ( K ): strike price.5,7
  • ( T ): time to expiration (in years).5,7
  • ( r ): risk-free interest rate (constant).3,7
  • ( \sigma ): volatility of the underlying asset's returns (annualised).2,7
  • ( N(\cdot) ): cumulative distribution function of the standard normal distribution.
  • ( d_1 = \frac{\ln(S_0 / K) + (r + \sigma^2 / 2)T}{\sigma \sqrt{T}} )
  • ( d_2 = d_1 - \sigma \sqrt{T} ).1,2,5
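As a concrete check of the formula above, here is a minimal, self-contained Python sketch (function names are illustrative, not from the sources; the normal CDF is built from the standard library's error function):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal CDF, N(x), via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S0: float, K: float, T: float, r: float, sigma: float) -> float:
    """European call: C = S0*N(d1) - K*exp(-r*T)*N(d2)."""
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S0 * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def black_scholes_put(S0: float, K: float, T: float, r: float, sigma: float) -> float:
    """European put via put-call parity: P = C - S0 + K*exp(-r*T)."""
    return black_scholes_call(S0, K, T, r, sigma) - S0 + K * exp(-r * T)

# A standard textbook case: S0 = K = 100, T = 1 year, r = 5%, sigma = 20%.
print(round(black_scholes_call(100, 100, 1.0, 0.05, 0.2), 4))  # ~10.4506
```

For these at-the-money inputs the call is worth roughly 10.45, matching published tables; the put follows directly from parity rather than a second formula.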

A symmetric formula exists for put options. The model assumes log-normal distribution of stock prices, meaning continuously compounded returns are normally distributed:

\ln S_T \sim N\left( \ln S_0 + \left( \mu - \frac{\sigma^2}{2} \right)T, \sigma^2 T \right)

where ( \mu ) is the expected return (replaced by ( r ) in risk-neutral pricing).2
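Because the risk-neutral dynamics are fully specified, the closed-form price can be cross-checked by Monte Carlo: draw ( S_T ) from the log-normal law above with ( \mu ) replaced by ( r ), and discount the average call payoff. A self-contained sketch under illustrative parameters (names are not from the sources):

```python
import random
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S0, K, T, r, sigma):
    """Closed-form Black-Scholes call price."""
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    return S0 * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d1 - sigma * sqrt(T))

def mc_call(S0, K, T, r, sigma, n_paths=200_000, seed=42):
    """Risk-neutral Monte Carlo: S_T = S0*exp((r - sigma^2/2)T + sigma*sqrt(T)*Z),
    then discount the mean payoff max(S_T - K, 0) at the risk-free rate."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        s_t = S0 * exp((r - 0.5 * sigma**2) * T + sigma * sqrt(T) * z)
        total += max(s_t - K, 0.0)
    return exp(-r * T) * total / n_paths
```

With 200,000 paths the two estimates agree to within ordinary Monte Carlo error (a few hundredths here), illustrating that the formula is exactly the discounted risk-neutral expectation.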

Key Assumptions

The model rests on idealised conditions for mathematical tractability:

  • Efficient markets with no arbitrage and continuous trading.1,3
  • Log-normal asset returns (prices cannot go negative).2,3
  • Constant risk-free rate ( r ) and volatility ( \sigma ).3
  • No dividends (original version; later adjusted by replacing ( S_0 ) with ( S_0 e^{-qT} ) for a continuous dividend yield ( q ), or by subtracting the present value of discrete dividends).2,3
  • No transaction costs, taxes, or short-selling restrictions; frictionless trading with a risky asset (stock) and riskless asset (bond).1,3
  • European exercise only (no early exercise).1,5

These enable delta hedging: dynamically adjusting a portfolio of the underlying asset and riskless bond to replicate the option's payoff, making its price unique.1
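The hedge ratio itself has a closed form: the replicating portfolio holds ( \Delta = N(d_1) ) units of the underlying. A small self-contained sketch (illustrative names) confirming that ( N(d_1) ) matches the numerical slope ( \partial C / \partial S ) of the pricing formula:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def call_price(S0, K, T, r, sigma):
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S0 * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def call_delta(S0, K, T, r, sigma):
    """Delta of a European call: the number of shares held in the
    replicating (delta-hedging) portfolio."""
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    return norm_cdf(d1)

# Central finite difference of the price should recover N(d1).
h = 1e-4
fd_delta = (call_price(100 + h, 100, 1.0, 0.05, 0.2)
            - call_price(100 - h, 100, 1.0, 0.05, 0.2)) / (2 * h)
```

For the at-the-money inputs used here, delta is about 0.64: the hedger holds roughly 0.64 shares per call sold, rebalancing as ( d_1 ) moves.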

Extensions and Limitations

  • Dividends: Adjust ( S_0 ) to ( S_0 - PV(\text{dividends}) ) or use a continuous yield ( q ).2
  • American options: Use Black's approximation, taking the maximum of the European values computed to expiration (dividend-adjusted) and to the final ex-dividend date.2
  • Greeks: Sensitivities such as delta ( \Delta = N(d_1) ) and vega (sensitivity to volatility) are used for risk management.4

Limitations include real-world violations of the assumptions (e.g., volatility smiles, price jumps, stochastic rates), but the model remains foundational for derivatives trading, valuation (e.g., 409A valuations for startups) and extensions such as binomial models.3,5,7
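The continuous-yield adjustment noted above is a one-line change in code: price the option on ( S_0 e^{-qT} ) rather than ( S_0 ). A hedged, self-contained sketch (function names are illustrative):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def call_no_div(S0, K, T, r, sigma):
    """Standard no-dividend Black-Scholes call."""
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S0 * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def call_with_yield(S0, K, T, r, sigma, q):
    """Continuous dividend yield q: replace S0 with S0*exp(-q*T).
    Expanding the logarithm recovers the usual (r - q) drift in d1."""
    return call_no_div(S0 * exp(-q * T), K, T, r, sigma)
```

With q = 0 this collapses to the standard formula; a positive yield lowers the call value, since the option holder, unlike a shareholder, does not receive the dividends paid before expiration.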

Best Related Strategy Theorist: Myron Scholes

Myron Scholes (b. 1941) is the most directly linked theorist, co-creator of the model and Nobel laureate whose work revolutionised options trading and risk management strategies.

Biography

Born in Timmins, Ontario, Canada, Scholes earned a BA from McMaster University (1962), then an MBA (1964) and a PhD (1969) from the University of Chicago, studying under Nobel laureates including Merton Miller. He taught at MIT (1968–1973, collaborating with Fischer Black and Robert C. Merton), returned to the University of Chicago and later moved to Stanford, where he is now professor emeritus. In 1994 he co-founded Long-Term Capital Management (LTCM), a hedge fund that applied advanced models (including Black-Scholes variants) to fixed-income arbitrage and amassed roughly $4.7 billion in capital before collapsing in 1998 under extreme leverage during the Russian debt crisis - prompting a Federal Reserve-organised $3.6 billion recapitalisation by a consortium of banks. Scholes received the 1997 Nobel Memorial Prize in Economic Sciences (shared with Merton; Black had died in 1995), cementing his legacy. He later co-founded Platinum Grove Asset Management and supports education philanthropically.1

Relationship to the Term

Scholes co-authored the seminal 1973 paper "The Pricing of Options and Corporate Liabilities" with Fischer Black (1938–1995), an economist at Arthur D. Little and later Goldman Sachs, who conceived the core hedging insight but died before the Nobel was awarded. Robert C. Merton (b. 1944) formalised the continuous-time mathematics and, in his own 1973 paper, extended the model to dividends and American options, earning co-credit in its name. Their breakthrough - published just as options markets emerged (the CBOE opened in 1973) - enabled risk-neutral pricing and dynamic hedging, transforming derivatives from purely speculative instruments into hedgeable ones. Scholes' strategic insight was that, under no-arbitrage, an option's price depends on volatility rather than the underlying's expected return, powering strategies such as volatility trading, portfolio insurance and structured products at banks and hedge funds. LTCM exemplified - and exposed the limits of - scaling these strategies via leverage.1,2,5

 

References

1. https://en.wikipedia.org/wiki/Black%E2%80%93Scholes_model

2. https://analystprep.com/study-notes/frm/part-1/valuation-and-risk-management/the-black-scholes-merton-model/

3. https://carta.com/learn/startups/equity-management/black-scholes-model/

4. https://www.columbia.edu/~mh2078/FoundationsFE/BlackScholes.pdf

5. https://www.sofi.com/learn/content/what-is-the-black-scholes-model/

6. https://gregorygundersen.com/blog/2024/09/28/black-scholes/

7. https://corporatefinanceinstitute.com/resources/derivatives/black-scholes-merton-model/

8. https://www.youtube.com/watch?v=EEM2YBzH-2U

9. https://www.khanacademy.org/economics-finance-domain/core-finance/derivative-securities/black-scholes/v/introduction-to-the-black-scholes-formula

 

"The Black-Scholes model (or Black-Scholes-Merton model) is a fundamental mathematical formula that calculates the theoretical fair price of European-style options, using inputs like the underlying stock price, strike price, time to expiration, risk-free interest rate and volatility." - Term: Black Scholes

Quote: Reid Hoffman - LinkedIn co-founder

"The fastest way to change yourself is to hang out with people who are already the way you want to be." - Reid Hoffman - LinkedIn co-founder

Reid Hoffman, best known as the co-founder of LinkedIn, has spent his career at the intersection of technology, networks and human potential. His work is grounded in a deceptively simple observation: who you spend time with fundamentally shapes who you become. This quote, popularised through his book The Startup of You: Adapt to the Future, Invest in Yourself, and Transform Your Career, distils a central theme in his thinking - that careers and identities are not fixed paths, but evolving ventures built in relationship with others.2

Reid Hoffman: from philosopher to founder

Born in 1967 in California, Reid Hoffman studied at Stanford University, focusing on symbolic systems, a multidisciplinary programme that combines computer science, linguistics, philosophy and cognitive psychology. He later pursued a master's degree in philosophy at Oxford, with a particular interest in how individuals and societies create meaning and institutions. That philosophical grounding is visible in the way he talks about networks, trust and social systems, and in his tendency to move quickly from product features to questions of ethics and social impact.

Hoffman initially imagined becoming an academic, but he concluded that entrepreneurship offered a more direct way to shape the world. After early roles at Apple and Fujitsu, he founded his first company, SocialNet, in the late 1990s. It was an ambitious attempt at an online social platform before the wider market was ready. The experience taught him, by his own account, about timing, product-market fit and the brutal realities of execution. Those lessons would later inform his investment philosophy and his advice to founders.

He joined PayPal in its early days, becoming one of the core members of what later came to be known as the "PayPal Mafia". As executive vice president responsible for business development, he helped navigate the company through growth, regulatory challenges and its eventual acquisition by eBay. This period sharpened his understanding of scaling networks, managing hypergrowth and building resilient organisational cultures. It also cemented his personal network with future founders of Tesla, SpaceX, Yelp, YouTube and Palantir, among others - a living demonstration of his own quote about proximity to people who embody the future you want to be part of.

In 2002, Hoffman co-founded LinkedIn, a professional networking platform that would come to dominate global online professional identity. The idea was radical at the time: that CVs could become living, networked artefacts; that careers could be navigated not just through internal company ladders but through visible webs of relationships; and that trust in business could be mediated through reputation signals and endorsements. LinkedIn grew steadily rather than explosively, reflecting Hoffman's view that durable networks are built on cumulative trust, not just viral growth. The platform embodies the logic of his quote: it is structurally designed to make it easier to find and connect with people whose careers, skills and values you aspire to emulate.2

As LinkedIn scaled (it was eventually acquired by Microsoft), Hoffman became a partner at Greylock Partners, one of Silicon Valley's most established venture capital firms. There he focused on early-stage technology companies, particularly those with strong network effects. He also launched the podcast Masters of Scale, where he interviews founders and leaders about how they built their organisations. The show reinforces the same message: personal and organisational change rarely happens in isolation; it occurs in communities, teams and ecosystems that stretch what people believe is possible.

Context of the quote: The Startup of You and career as a startup

The quote appears in the context of Hoffman's book The Startup of You, co-authored with Ben Casnocha. In the book he argues that every individual, not just entrepreneurs, should think of themselves as the CEO of their own career, applying the mindset and tools of a startup to their working life. That means:

  • Adapting continuously to change rather than relying on a single, static career plan.
  • Investing in relationships as core professional assets, not peripheral extras.
  • Running small experiments to test new directions, skills and opportunities.
  • Building a "networked intelligence" - using the perspectives of others to navigate uncertainty.2

Within that framework, the quote about hanging out with people who are already the way you want to be is not a throwaway line. It is a strategy. Hoffman argues that exposing yourself to people who embody the skills, attitudes and standards you aspire to accelerates learning in several ways:

  • It normalises behaviours that previously felt aspirational or out of reach.
  • It provides a live reference model for decision-making, not just abstract advice.
  • It reinforces identity shifts - you start to see yourself as part of a community where certain behaviours are standard.
  • It opens doors to opportunities that flow along relationship lines.

In other words, the fastest way to change yourself is not merely to decide differently, but to embed yourself in different networks. This reflects Hoffman's broader belief that networks are not just social graphs; they are engines for personal transformation.

The idea behind the quote: why people shape who we become

The deeper logic behind Hoffman's quote sits at the convergence of several strands of research and theory about how human beings change:

  • We internalise norms and expectations from our groups and reference communities.
  • Identity is co-created in interaction with others, not just chosen privately.
  • Behaviours spread through networks via imitation, modelling and subtle social cues.
  • Access to information, opportunities and challenges is heavily mediated by relationships.

Hoffman's framing is distinctly practical. Rather than focusing on abstract self-improvement, he suggests a leverage point: choose your environment and your companions with intent. If you want to become more entrepreneurial, spend time with founders. If you want to become more disciplined, work alongside people who treat discipline as a norm. If you want a more global perspective, immerse yourself in networks that think and operate globally.

This is not, in his usage, about social climbing or mimicry. It is about recognising that the most powerful behavioural technologies we have are other people, and aligning ourselves with those whose example pulls us towards our better, more ambitious selves.

Related thinkers: how theory supports Hoffman's insight

Though Hoffman's quote arises from his own experience in technology and entrepreneurship, the underlying idea is echoed across psychology, sociology, economics and network science. A number of leading theorists and researchers provide a rich backstory to the principle that the people around us are key drivers of personal change.

1. Social learning and modelling - Albert Bandura

Albert Bandura, one of the most influential psychologists of the 20th century, developed social learning theory and the concept of self-efficacy. He showed that people learn new behaviours by observing others, especially when those others are perceived as competent, similar or high-status. In his famous Bobo doll experiments, children who saw adults behaving aggressively towards a doll were more likely to imitate that behaviour.

Bandura argued that much of human learning is vicarious. We watch, internalise and then reproduce behaviours without needing to experience all the consequences ourselves. In that light, Hoffman's advice to spend time with people who are already the way you want to be is essentially a prescription to leverage social modelling in your favour: choose role models and peer groups whose behaviour you want to absorb, because you will absorb it, consciously or not.

Bandura's notion of self-efficacy - the belief in one's capability to achieve goals - is also relevant. Seeing people like you succeed in domains you care about, or live in ways you aspire to, is one of the strongest sources of increased self-efficacy. It tells you, implicitly: this is possible, and it may be possible for you.

2. Social comparison and reference groups - Leon Festinger

Leon Festinger, a social psychologist, introduced social comparison theory in the 1950s. He proposed that individuals evaluate their own opinions and abilities by comparing themselves with others, particularly when objective standards are absent or ambiguous. Reference groups - the people we implicitly choose as benchmarks - shape our sense of what counts as success, effort or normality.

Hoffman's quote can be read as deliberate reference-group engineering. If you choose a reference group made up of people who are already living or behaving in ways you admire, then your internal comparisons will continually pull you in that direction. Your standard of "normal" shifts upward. Over time, subtle adjustments in expectations, goals and self-assessment accumulate into substantive change.

3. Social networks and contagion - Nicholas Christakis and James Fowler

In their work on social contagion, Nicholas Christakis and James Fowler used large-scale longitudinal data to show that behaviours and states - from obesity to smoking, happiness and loneliness - can spread through social networks across multiple degrees of separation. If a friend of your friend becomes obese, for instance, your own likelihood of weight gain measurably changes, even if you never meet that intermediary person.

Their research suggests that networks do not merely reflect individual traits; they actively participate in shaping them. Norms, emotions and behaviours travel across the ties between people. In that sense, Hoffman's counsel is aligned with a network-science perspective: by embedding yourself in networks populated by people with the traits you seek, you are positioning yourself in the path of favourable social contagion.

4. Social capital and weak ties - Mark Granovetter and Robert Putnam

Mark Granovetter's seminal work on "The Strength of Weak Ties" showed that weak connections - acquaintances rather than close friends - are disproportionately important for accessing new information, opportunities and perspectives. They bridge different clusters within a network and act as conduits between otherwise separated groups.

Robert Putnam, in his work on social capital, differentiated between bonding capital (strong ties within a close group) and bridging capital (ties that connect us across different groups). Bridging capital is particularly valuable for innovation and change, because it exposes individuals to unfamiliar norms, skills and possibilities.

Hoffman's own career illustrates these principles. His decision to join and later invest in networks of founders, technologists and global business leaders gave him an unusually rich set of weak and strong ties. When he advises people to spend time with those who already are how they want to be, he is, in effect, recommending the intentional cultivation of high-quality social capital in domains that matter for your growth.

5. Identity and habit change - James Clear, Charles Duhigg and behavioural science

Contemporary writers on habits and behaviour, such as James Clear and Charles Duhigg, synthesise research from psychology and behavioural economics to explain why environment and identity are so crucial in change. They emphasise that:

  • Habits are heavily shaped by context and cues.
  • We tend to adopt the habits of the groups we belong to.
  • Sustained change often follows a shift in identity - a new answer to the question "Who am I?"

Clear, for example, argues that the people you surround yourself with are a reflection of who you are, or of who you want to be - an idea strongly resonant with Hoffman's quote. Belonging to a group where a desired behaviour is normal lowers the friction of doing that behaviour yourself. You become the kind of person who does these things, because that is what "people like us" do.

Hoffman extends this line of thought into the professional realm: if you want to be the sort of person who takes intelligent risks, builds companies or adapts well to technological change, put yourself in communities where those behaviours are routine, admired and expected.

6. Deliberate practice and expert communities - K. Anders Ericsson

K. Anders Ericsson, known for his work on expert performance and deliberate practice, showed that world-class performance is rarely a product of raw talent alone. It depends on structured, effortful practice over time, typically supported by coaches, mentors and high-level peer groups. Elite performers tend to train in environments where excellence is normalised and where feedback is rapid, precise and demanding.

Viewed through this lens, Hoffman's quote points to the importance of expert communities for accelerating growth. Being around people who are already operating at the level you aspire to does more than inspire; it enables a more rigorous, feedback-rich form of practice. It shrinks the gap between aspiration and reality by surrounding you with tangible exemplars and high expectations.

7. Entrepreneurial ecosystems - AnnaLee Saxenian and cluster theory

Research on regional innovation systems and entrepreneurial ecosystems, such as AnnaLee Saxenian's work on Silicon Valley, illuminates how geographic and social concentration of talent drives innovation. Silicon Valley became uniquely productive not just because of capital or universities, but because it created dense networks of engineers, founders, investors and service providers who interacted constantly, shared norms and recycled experience across companies.

Hoffman's career is intertwined with this ecosystem logic. His own network, forged through PayPal, LinkedIn and Greylock, reflects the power of clusters where people who already embody entrepreneurial behaviours interact daily. When he advises others to "hang out" with people who are already how they want to be, he is, in effect, recommending that individuals build their own personal micro-ecosystems of aspiration, whether or not they live in Silicon Valley.

The personal strategy embedded in the quote

Hoffman's quote can serve as a practical checklist for personal and professional growth:

  • Clarify the change you want - skills, mindset, values, level of responsibility or kind of impact.
  • Identify living examples - people who already embody that change, ideally at different stages and in different contexts.
  • Shift your time allocation - invest more time in conversations, projects and communities with those people and less in environments that reinforce your old patterns.
  • Contribute, not just consume - add value to those relationships; become useful to the people you want to learn from.
  • Allow your identity to update - notice when you start to see yourself as part of a new tribe and let that guide your choices.

For Hoffman, the network is not a backdrop to personal change; it is the primary medium through which change happens. His own journey - from philosopher to entrepreneur, from founder to investor and public intellectual - unfolded through successive communities of people who were already operating in the ways he wanted to learn. The quote captures that lived experience in a single, portable principle: to change yourself at speed, change who you are with.

References

1. https://quotefancy.com/quote/1241059/Reid-Hoffman-The-fastest-way-to-change-yourself-is-to-hang-out-with-people-who-are

2. https://www.goodreads.com/quotes/11473244-the-fastest-way-to-change-yourself-is-to-hang-out

3. https://www.azquotes.com/quote/520979

“The fastest way to change yourself is to hang out with people who are already the way you want to be.” - Quote: Reid Hoffman

Quote: Satya Nadella - CEO, Microsoft

"Just imagine if your firm is not able to embed the tacit knowledge of the firm in a set of weights in a model that you control... you're leaking enterprise value to some model company somewhere." - Satya Nadella - CEO, Microsoft

Satya Nadella's assertion about enterprise sovereignty represents a fundamental reorientation in how organisations must think about artificial intelligence strategy. Speaking at the World Economic Forum in Davos in January 2026, the Microsoft CEO articulated a principle that challenges conventional wisdom about data protection and corporate control in the AI age. His argument centres on a deceptively simple but profound distinction: the location of data centres matters far less than the ability of a firm to encode its unique organisational knowledge into AI models it owns and controls.

The Context of Nadella's Intervention

Nadella's remarks emerged during a high-profile conversation with Laurence Fink, CEO of BlackRock, at the 56th Annual Meeting of the World Economic Forum. The discussion occurred against a backdrop of mounting concern about whether the artificial intelligence boom represents genuine technological transformation or speculative excess. Nadella framed the stakes explicitly: "For this not to be a bubble, by definition, it requires that the benefits of this are much more evenly spread." The conversation with Fink, one of the world's most influential voices on capital allocation and corporate governance, provided a platform for Nadella to articulate what he termed "the topic that's least talked about, but I feel will be most talked about in this calendar year"-the question of firm sovereignty in an AI-driven economy.

The timing of this intervention proved significant. By early 2026, the initial euphoria surrounding large language models and generative AI had begun to encounter practical constraints. Organisations worldwide were grappling with the challenge of translating AI capabilities into measurable business outcomes. Nadella's contribution shifted the conversation from infrastructure and model capability to something more fundamental: the strategic imperative of organisational control over AI systems that encode proprietary knowledge.

Understanding Tacit Knowledge and Enterprise Value

Central to Nadella's argument is the concept of tacit knowledge - the accumulated, often uncodified understanding that emerges from how people work together within an organisation. This includes the informal processes, institutional memory, decision-making heuristics, and domain expertise that distinguish one firm from another. Nadella explained this concept by reference to what firms fundamentally do: "it's all about the tacit knowledge we have by working as people in various departments and moving paper and information."

The critical insight is that this tacit knowledge represents genuine competitive advantage. When a firm fails to embed this knowledge into AI models it controls, that advantage leaks away. Instead of strengthening the organisation's position, the firm becomes dependent on external model providers - what Nadella termed "leaking enterprise value to some model company somewhere." This dependency creates a structural vulnerability: the organisation's competitive differentiation becomes hostage to the capabilities and pricing decisions of third-party AI vendors.

Nadella's framing inverts the conventional hierarchy of concerns about AI governance. Policymakers and corporate security teams have traditionally prioritised data sovereignty - ensuring that sensitive information remains within national or corporate boundaries. Nadella argues this focus misses the more consequential question. The physical location of data centres, he stated bluntly, is "the least important thing." What matters is whether the firm possesses the capability to translate its distinctive knowledge into proprietary AI models.

The Structural Transformation of Information Flow

Nadella's argument gains force when situated within his broader analysis of how AI fundamentally restructures organisations. He described AI as creating "a complete inversion of how information is flowing in the organisation." Traditional corporate hierarchies operate through vertical information flows: data and insights move upward through departments and specialisations, where senior leaders synthesise information and make decisions that cascade downward.

AI disrupts this architecture. When knowledge workers gain access to what Nadella calls "infinite minds" - the ability to tap into vast computational reasoning power - information flows become horizontal and distributed. This flattening of hierarchies creates both opportunity and risk. The opportunity lies in accelerated decision-making and the democratisation of analytical capability. The risk emerges when organisations fail to adapt their structures and processes to this new reality. More critically, if firms cannot embed their distinctive knowledge into models they control, they lose the ability to shape how this new information flow operates within their own context.

This structural transformation explains why Nadella emphasises what he calls "context engineering." The intelligence layer of any AI system, he argues, "is only as good as the context you give it." Organisations must learn to feed their proprietary knowledge, decision frameworks, and domain expertise into AI systems in ways that amplify rather than replace human judgment. This requires not merely deploying off-the-shelf models but developing the organisational capability to customise and control AI systems around their specific knowledge base.
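The idea of context engineering can be made concrete with a small sketch. This is an illustrative stand-alone example, not anything from Nadella's remarks: the helper name `build_prompt` and the sample policy snippets are invented, and a real deployment would draw context from the firm's own knowledge systems rather than a hand-written list.

```python
# Hypothetical sketch of "context engineering": the intelligence layer is
# "only as good as the context you give it", so firm-specific knowledge is
# packed into the prompt before a generic model ever sees the question.
# The policy snippets below are invented purely for illustration.

def build_prompt(question: str, org_context: list[str]) -> str:
    """Prepend proprietary organisational knowledge to a model prompt."""
    context_block = "\n".join(f"- {snippet}" for snippet in org_context)
    return (
        "Answer using only the firm context below.\n"
        f"Firm context:\n{context_block}\n\n"
        f"Question: {question}"
    )

prompt = build_prompt(
    "How do we approve supplier discounts?",
    ["Discounts above 5% require regional director sign-off.",
     "Standard payment terms are 30 days."],
)
```

The point of the sketch is that the differentiating asset is the context pipeline, not the underlying model: swap in any vendor's model and the firm-specific value still lives in what gets assembled around the question.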

The Sovereignty Framework: Beyond Geography

Nadella's reconceptualisation of sovereignty represents a significant departure from how policymakers and corporate leaders have traditionally understood the term. Geopolitical sovereignty concerns have dominated discussions of AI governance - questions about where data is stored, which country's regulations apply, and whether foreign entities can access sensitive information. These concerns remain legitimate, but Nadella argues they address a secondary question.

True sovereignty in the AI era, by his analysis, means the ability of a firm to encode its competitive knowledge into models it owns and controls. This requires three elements: first, the technical capability to train and fine-tune AI models on proprietary data; second, the organisational infrastructure to continuously update these models as the firm's knowledge evolves; and third, the strategic discipline to resist the temptation to outsource these capabilities to external vendors.

The stakes of this sovereignty question extend beyond individual firms. Nadella frames it as a matter of enterprise value creation and preservation. When firms leak their tacit knowledge to external model providers, they simultaneously transfer the economic value that knowledge generates. Over time, this creates a structural advantage for the model companies and a corresponding disadvantage for the organisations that depend on them. The firm becomes a consumer of AI capability rather than a creator of competitive advantage through AI.

The Legitimacy Challenge and Social Permission

Nadella's argument about enterprise sovereignty connects to a broader concern he articulated about AI's long-term viability. He warned that "if we are not talking about health outcomes, education outcomes, public sector efficiency, private sector competitiveness, we will quickly lose the social permission to use scarce energy to generate tokens." This framing introduces a crucial constraint: AI's continued development and deployment depends on demonstrable benefits that extend beyond technology companies and their shareholders.

The question of firm sovereignty becomes relevant to this legitimacy challenge. If AI benefits concentrate among a small number of model providers whilst other organisations become dependent consumers, the technology risks losing public and political support. Conversely, if firms across the economy develop the capability to embed their knowledge into AI systems they control, the benefits of AI diffuse more broadly. This diffusion becomes the mechanism through which AI maintains its social licence to operate.

Nadella identified "skilling" as the limiting factor in this diffusion process. How broadly people across organisations develop capability in AI determines how quickly benefits spread. This connects directly to the sovereignty question: organisations that develop internal capability to control and customise AI systems create more opportunities for their workforce to develop AI skills. Those that outsource AI to external providers create fewer such opportunities.

Leading Theorists and Intellectual Foundations

Nadella's argument draws on and extends several streams of organisational and economic theory. The concept of tacit knowledge itself originates in the work of Michael Polanyi, the Hungarian-British polymath who argued in his 1966 work The Tacit Dimension that "we know more than we can tell." Polanyi distinguished between explicit knowledge - information that can be codified and transmitted - and tacit knowledge, which resides in practice, experience, and embodied understanding. This distinction proved foundational for subsequent research on organisational learning and competitive advantage.

Building on Polanyi's framework, scholars including David Teece and Ikujiro Nonaka developed theories of how organisations create and leverage knowledge. Teece's concept of "dynamic capabilities" - the ability of firms to integrate, build, and reconfigure internal and external competencies - directly parallels Nadella's argument about embedding tacit knowledge into AI models. Nonaka's research on knowledge creation in Japanese firms emphasised the importance of converting tacit knowledge into explicit forms that can be shared and leveraged across organisations. Nadella's argument suggests that AI models represent a new mechanism for this conversion: translating tacit organisational knowledge into explicit algorithmic form.

The concept of "firm-specific assets" in strategic management theory also underpins Nadella's reasoning. Scholars including Edith Penrose and later resource-based theorists argued that competitive advantage derives from assets and capabilities that are difficult to imitate and specific to particular organisations. Nadella extends this logic to the AI era: the ability to embed firm-specific knowledge into proprietary AI models becomes itself a firm-specific asset that generates competitive advantage.

More recently, scholars studying digital transformation and platform economics have grappled with questions of control and dependency. Researchers including Shoshana Zuboff have examined how digital platforms concentrate power and value by controlling the infrastructure through which information flows. Nadella's argument about enterprise sovereignty can be read as a response to these concerns: organisations must develop the capability to control their own AI infrastructure rather than becoming dependent on platform providers.

The concept of "information asymmetry" from economics also illuminates Nadella's argument. When firms outsource AI to external providers, they create information asymmetries: the model provider possesses detailed knowledge of how the firm's data and knowledge are being processed, whilst the firm itself may lack transparency into the model's decision-making processes. This asymmetry creates both security risks and strategic vulnerability.

Practical Implications and Organisational Change

Nadella's argument carries significant implications for how organisations should approach AI strategy. Rather than viewing AI primarily as a technology to be purchased from external vendors, firms should conceptualise it as a capability to be developed internally. This requires investment in three areas: technical infrastructure for training and deploying models; talent acquisition and development in machine learning and data science; and organisational redesign to align workflows with how AI systems operate.

The last point proves particularly important. Nadella emphasised that "the mindset we as leaders should have is, we need to think about changing the work - the workflow - with the technology." This represents a significant departure from how many organisations have approached technology adoption. Rather than fitting new technology into existing workflows, organisations must redesign workflows around how AI operates. This includes flattening information hierarchies, enabling distributed decision-making, and creating feedback loops through which AI systems continuously learn from organisational experience.

Nadella also introduced the concept of a "barbell adoption" strategy. Startups, he noted, adapt easily to AI because they lack legacy systems and established workflows. Large enterprises possess valuable assets and accumulated knowledge but face significant change management challenges. The barbell approach suggests that organisations should pursue both paths simultaneously: experimenting with new AI-native processes whilst carefully managing the transition of legacy systems.

The Measurement Challenge: Tokens per Dollar per Watt

Nadella introduced a novel metric for evaluating AI's economic impact: "tokens per dollar per watt." It captures the efficiency with which organisations can generate computational reasoning power relative to cost and energy consumption, and it reflects his argument that AI's economic value depends not on the sophistication of models but on how efficiently organisations can deploy and utilise them.

This metric also connects to the sovereignty question. Organisations that control their own AI infrastructure can optimise this metric for their specific needs. Those dependent on external providers must accept the efficiency parameters those providers establish. Over time, this difference in optimisation capability compounds into significant competitive advantage.
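One plausible reading of the metric treats it as a simple ratio: tokens generated, divided by dollars spent and watts drawn. The sketch below is our own illustration of that reading; the interpretation and all figures are assumptions, not published benchmarks.

```python
# Illustrative calculation of a "tokens per dollar per watt" style metric.
# The interpretation tokens / (dollars * watts) and every figure below are
# assumptions for illustration, not real deployment numbers.

def tokens_per_dollar_per_watt(tokens: float, dollars: float, watts: float) -> float:
    """Efficiency of token generation relative to cost and power draw."""
    return tokens / (dollars * watts)

# Two hypothetical deployments generating the same one-million-token workload:
in_house = tokens_per_dollar_per_watt(1_000_000, dollars=50.0, watts=400.0)
vendor = tokens_per_dollar_per_watt(1_000_000, dollars=80.0, watts=400.0)
print(f"in-house: {in_house:.2f}, vendor: {vendor:.2f}")  # higher is better
```

On this reading, an organisation that controls its own stack can push the ratio up by tuning any of the three terms, which is precisely the optimisation capability Nadella argues compounds into competitive advantage over time.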

The Broader Economic Transformation

Nadella situated his argument about enterprise sovereignty within a broader analysis of how AI transforms economic structure. He drew parallels to previous technological revolutions, particularly the personal computing era. Steve Jobs famously described the personal computer as a "bicycle for the mind" - a tool that amplified human capability. Bill Gates spoke of "information at your fingertips." Nadella argues that AI makes these concepts "10x, 100x" more powerful.

However, this amplification of capability only benefits organisations that can control how it operates within their context. When firms outsource AI to external providers, they forfeit the ability to shape how this amplification occurs. They become consumers of capability rather than creators of competitive advantage.

Nadella's vision of AI diffusion requires what he terms "ubiquitous grids of energy and tokens" - infrastructure that makes AI capability as universally available as electricity. However, this infrastructure alone proves insufficient. Organisations must also develop the internal capability to embed their knowledge into AI systems. Without this capability, even ubiquitous infrastructure benefits only those firms that control the models running on it.

Conclusion: Knowledge as the New Frontier

Nadella's argument represents a significant reorientation in how organisations should think about AI strategy and competitive advantage. Rather than focusing on data location or infrastructure ownership, firms should prioritise their ability to embed proprietary knowledge into AI models they control. This shift reflects a deeper truth about how AI creates value: not through raw computational power or data volume, but through the ability to translate organisational knowledge into algorithmic form that amplifies human decision-making.

The sovereignty question Nadella articulated - whether firms can embed their tacit knowledge into models they control - will likely prove central to AI strategy for years to come. Organisations that develop this capability will preserve and enhance their competitive advantage. Those that outsource this capability to external providers risk gradually transferring their distinctive knowledge and the value it generates to those providers. In an era when AI increasingly mediates how organisations operate, the ability to control the models that encode organisational knowledge becomes itself a fundamental source of competitive advantage and strategic sovereignty.

References

1. https://www.teamday.ai/ai/satya-nadella-davos-ai-diffusion-larry-fink

2. https://dig.watch/event/world-economic-forum-2026-at-davos/conversation-with-satya-nadella-ceo-of-microsoft

3. https://www.youtube.com/watch?v=zyNWbPBkq6E

4. https://www.youtube.com/watch?v=1co3zt3-r7I

5. https://www.theregister.com/2026/01/21/nadella_ai_sovereignty_wef/

6. https://fortune.com/2026/01/20/is-ai-a-bubble-satya-nadella-microsoft-ceo-new-knowledge-worker-davos-fink/



Term: Jagged Edge of AI

"The 'jagged edge of AI' refers to the inconsistent and uneven nature of current artificial intelligence, where models excel at some complex tasks (like writing code) but fail surprisingly at simpler ones, creating unpredictable performance gaps that require human oversight." - Jagged Edge of AI

The “jagged edge” or “jagged frontier of AI” is the uneven boundary of current AI capability, where systems are superhuman at some tasks and surprisingly poor at others of seemingly similar difficulty, producing erratic performance that cannot yet replace human judgement and requires careful oversight.4,7

At this jagged edge, AI models can:

  • Excel at tasks like reading, coding, structured writing, or exam-style reasoning, often matching or exceeding expert-level performance.1,2,7
  • Fail unpredictably on tasks that appear simpler to humans, especially when they demand robust memory, context tracking, strict rule-following, or real-world common sense.1,2,4

This mismatch has several defining characteristics:

  • Jagged capability profile
    AI capability does not rise smoothly; instead, it forms a “wall with towers and recesses” – very strong in some directions (e.g. maths, classification, text generation), very weak in others (e.g. persistent memory, reliable adherence to constraints, nuanced social judgement).2,3,4
    Researchers label this pattern the “jagged technological frontier”: some tasks are easily done by AI, while others, though seemingly similar in difficulty, lie outside its capability.4,7

  • Sensitivity to small changes
    Performance can swing dramatically with minor changes in task phrasing, constraints, or context.4
    A model that handles one prompt flawlessly may fail when the instructions are reordered or slightly reworded, which makes behaviour hard to predict without systematic testing.

  • Bottlenecks and “reverse salients”
    The jagged shape creates bottlenecks: single weak spots (such as memory or long-horizon planning) that limit what AI can reliably automate, even when its raw intelligence looks impressive.2
    When labs solve one such bottleneck – a reverse salient – overall capability can suddenly lurch forward, reshaping the frontier while leaving new jagged edges elsewhere.2

  • Implications for work and organisation design
    Because capability is jagged, AI tends not to uniformly improve or replace jobs; instead it supercharges some tasks and underperforms on others, even within the same role.6,7
    Field experiments with consultants show large productivity and quality gains on tasks inside the frontier, but far less help – or even harm – on tasks outside it.7
    This means roles evolve towards managing and orchestrating AI across these edges: humans handle judgement, context, and exception cases, while AI accelerates pattern-heavy, structured work.2,4,6

  • Need for human oversight and “AI literacy”
    Because the frontier is jagged and shifting, users must continuously probe and map where AI is trustworthy and where it is brittle.4,8
    Effective use therefore requires AI literacy: knowing when to delegate, when to double-check, and how to structure workflows so that human review covers the weak edges while AI handles its “sweet spot” tasks.4,6,8

In strategic and governance terms, the jagged edge of AI is the moving boundary where:

  • AI is powerful enough to transform tasks and workflows,
  • but uneven and unpredictable enough that unqualified automation is risky,
  • creating a premium on hybrid human–AI systems, robust guardrails, and continuous testing.1,2,4
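The "probe and map" discipline this implies can be sketched in code. Everything here is illustrative rather than drawn from the cited studies: `fake_model` is a deliberately brittle stand-in for a real AI API, built only to show how a paraphrase can flip a pass into a failure.

```python
# Toy sketch of "mapping the jagged frontier": probe a model with paraphrased
# variants of the same task and record where it holds up. `fake_model` is a
# hypothetical, deliberately brittle stand-in for a real AI system.

def map_frontier(model, task_variants, check):
    """Return pass/fail per variant to expose sensitivity to rephrasing."""
    return {variant: check(model(variant)) for variant in task_variants}

def fake_model(prompt):
    # Brittle on purpose: only recognises the task when the word "sum" appears.
    return "42" if "sum" in prompt else "unsure"

results = map_frontier(
    fake_model,
    ["What is the sum of 20 and 22?", "Add twenty and twenty-two."],
    check=lambda answer: answer == "42",
)
# results records one variant passing while its paraphrase fails
```

In practice the same loop would run over many tasks and many paraphrases, turning anecdotal surprises at the jagged edge into a systematic map of where human review must cover for the model.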

Strategy theorist: Ethan Mollick and the “Jagged Frontier”

The strategist most closely associated with the jagged edge/frontier of AI in practice and management thinking is Ethan Mollick, whose work has been pivotal in defining how organisations should navigate this uneven capability landscape.2,3,4,7

Relationship to the concept

  • The phrase “jagged technological frontier” originates in a field experiment by Dell’Acqua, Mollick and colleagues, which analysed how generative AI affects the work of professional consultants.4,7
  • In that paper, they showed empirically that AI dramatically boosts performance on some realistic tasks while offering little benefit or even degrading performance on others, despite similar apparent difficulty – and they coined the term to capture that boundary.7
  • Mollick then popularised and extended the idea in widely read essays such as “Centaurs and Cyborgs on the Jagged Frontier” and later pieces on the shape of AI, jaggedness, bottlenecks, and salients, bringing the concept into mainstream management and strategy discourse.2,3,4

In his writing and teaching, Mollick uses the “jagged frontier” to:

  • Argue that jobs are not simply automated away; instead, they are recomposed into tasks that AI does, tasks that humans retain, and tasks where human–AI collaboration is superior.2,3
  • Introduce the metaphors of “centaurs” (humans and AI dividing tasks) and “cyborgs” (tightly integrated human–AI workflows) as strategies for operating on this frontier.3
  • Emphasise that the jagged shape creates both opportunities (rapid acceleration of some activities) and constraints (persistent need for human oversight and design), which leaders must explicitly map and manage.2,3,4

In this sense, Mollick functions as a strategy theorist of the jagged edge: he connects the underlying technical phenomenon (uneven capability) with organisational design, skills, and competitive advantage, offering a practical framework for firms deciding where and how to deploy AI.

Biography and relevance to AI strategy

  • Academic role
    Ethan Mollick is an Associate Professor of Management at the Wharton School of the University of Pennsylvania, specialising in entrepreneurship, innovation, and the impact of new technologies on work and organisations.7
    His early research focused on start-ups, crowdfunding and innovation processes, before shifting towards generative AI and its effects on knowledge work, where he now runs some of the most cited field experiments.

  • Research on AI and work
    Mollick has co-authored multiple studies examining how generative AI changes productivity, quality and inequality in real jobs.
    In the “Navigating the Jagged Technological Frontier” experiment, his team gave consultants realistic tasks, with and without AI access, and showed that:

  • For tasks inside AI’s frontier, consultants using AI were more productive (12.2% more tasks, 25.1% faster) and produced over 40% higher quality output.7

  • For tasks outside the frontier, the benefits were weaker or absent, highlighting the risk of over-reliance where AI is brittle.7
    This empirical demonstration is central to the modern understanding of the jagged edge as a strategic boundary rather than a purely technical curiosity.

  • Public intellectual and practitioner bridge
    Through his “One Useful Thing” publication and executive teaching, Mollick translates these findings into actionable guidance for leaders, including:

  • How to design workflows that align with AI’s jagged profile,

  • How to structure human–AI collaboration modes, and

  • How to build organisational capabilities (training, policies, experimentation) to keep pace as the frontier moves.2,3,4

  • Strategic perspective
    Mollick frames the jagged frontier as a continuously shifting strategic landscape:

  • Companies that map and exploit the protruding “towers” of AI strength can gain significant productivity and innovation advantages.

  • Those that ignore or misread the “recesses” – the weak edges – risk compliance failures, reputational harm, or operational fragility when they automate tasks that still require human judgement.2,4,7

For organisations grappling with the jagged edge of AI, Mollick’s work offers a coherent strategy lens: treat AI not as a monolithic capability but as a jagged, moving frontier; build hybrid systems that respect its limits; and invest in human skills and structures that can adapt as that edge advances and reshapes.

References

1. https://www.salesforce.com/blog/jagged-intelligence/

2. https://www.oneusefulthing.org/p/the-shape-of-ai-jaggedness-bottlenecks

3. https://www.oneusefulthing.org/p/centaurs-and-cyborgs-on-the-jagged

4. https://libguides.okanagan.bc.ca/c.php?g=743006&p=5383248

5. https://edrm.net/2024/10/navigating-the-ai-frontier-balancing-breakthroughs-and-blind-spots/

6. https://drphilippahardman.substack.com/p/defining-and-navigating-the-jagged

7. https://www.hbs.edu/faculty/Pages/item.aspx?num=64700

8. https://daedalusfutures.com/latest/f/life-at-the-jagged-edge-of-ai



Quote: Aesop - Greek fabulist

"No act of kindness, no matter how small, is ever wasted." - Aesop - Greek fabulist

The line is commonly attributed to Aesop, the semi-legendary Greek teller of fables whose brief animal stories have shaped moral thinking for over two millennia.1 The quotation crystallises a theme that runs through his work: that modest gestures, offered without calculation, can alter destinies - and that significance is rarely proportional to size.

The phrase is most often linked to one of his best-known fables, The Lion and the Mouse. In the story, a mighty lion captures a frightened mouse who has unwittingly disturbed his sleep. Amused by the tiny creature's pleas for mercy, the lion chooses to spare her rather than eat her. Later, the lion himself is caught in a hunter's net. Hearing his roars, the mouse remembers the earlier kindness, gnaws through the ropes, and frees him. The moral traditionally drawn has several layers: power should not despise weakness; help may come from unexpected quarters; and, above all, what looks like an insignificant kindness can return at a moment when everything depends upon it.1,3

Like many lines associated with Aesop, the wording we use today is a smooth, modern paraphrase rather than a verbatim translation from ancient Greek. The fables were transmitted orally and then written down, edited and re-edited over centuries, so exact phrasing shifts with language and era. What endures is the moral insight: that kindness carries a durable value of its own. Even when it is not repaid by the original recipient, it may ripple outward, change someone else's course, or simply refine the character of the giver.

Aesop: life, legend and the making of a moralist

Almost everything known about Aesop comes to us through a mixture of scattered references, later biographies and literary tradition. Ancient sources generally agree on a few core points. He is said to have lived in the 6th century BC, during the Archaic period of Greek history, and to have been a slave who became famous for his storytelling.3 Accounts place his origins variously in Phrygia, Thrace, Samos or Lydia. The historian Herodotus mentions an Aesop in passing, and later authors, especially the semi-fictional Life of Aesop, embroider his biography with colourful episodes: his wit in outmanoeuvring masters, his travels to the courts of rulers, and his sharp, satirical use of fables to criticise hypocrisy and injustice.

The precise historical Aesop is hard to reconstruct; scholars widely believe that many of the fables now grouped under his name are the work of multiple anonymous fabulists, collected and attributed to him over time. Yet the persona of Aesop - a socially marginal figure whose insight cuts through pretension - is part of the power of the tradition. The idea that a man of low status, possibly foreign and enslaved, could offer enduring ethical guidance suited stories in which small animals correct great beasts and apparent weakness turns into moral authority.

Aesop's fables are typically brief, often no more than a paragraph, and end with a concise moral: "slow and steady wins the race", "look before you leap", "better safe than sorry". The dramatis personae are usually animals with human traits: proud lions, cunning foxes, diligent ants, foolish crows. The form allows hard truths about pride, greed, cruelty and folly to be voiced at a safe distance. A king may not welcome a direct rebuke, but he can chuckle at the misfortunes of a boastful crow and still absorb the point.

Within this tradition, the kindness of the lion in sparing the mouse is striking because it seems gratuitous. There is no expectation of return; indeed the lion laughs at the idea that such a puny creature could ever repay him. The reversal, when the mouse becomes the saviour, underlines a countercultural message in hierarchic societies: do not dismiss the small. Value may lie where power does not.

Kindness in the Aesopic imagination

The fable behind the quote is not unique in celebrating generosity, mercy and reciprocity. Across the Aesopic corpus, we find recurring patterns:

  • The reversal of expectations: small animals outwit or rescue large ones; the poor prove more hospitable than the rich; the apparently foolish reveal deeper wisdom. This elevates kindness from a sentimental theme to a quiet subversion of conventional rankings.
  • Pragmatic ethics: kindness is rarely abstract. It appears in concrete actions - sharing food, offering protection, warning of danger, forgiving offences - often framed as both morally right and, in the long run, prudent.
  • Moral memory: characters remember both kindnesses and wrongs. The mouse's recollection of the lion's mercy is central to the story's impact. The fables assume that moral actions plant seeds in the social world, germinating later in unpredictable ways.

In this light, "No act of kindness, no matter how small, is ever wasted" becomes less a comforting phrase and more a concise reading of how a moral economy operates. Some acts of generosity will be repaid directly, others indirectly; some may shape the character of the giver rather than the fate of the receiver. But none is meaningless. Each contributes to a network of obligations, examples and stories that make cooperation and trust more thinkable.

From oral tale to ethical tradition

Aesop's fables spread widely in the classical world, used by philosophers, rhetoricians and educators. By the time of the Roman Empire, authors such as Phaedrus and later Babrius were adapting and versifying the tales into Latin and Greek. In late antiquity and the Middle Ages, Christian writers folded them into sermons and exempla, appreciating their ability to cloak serious moral lessons in accessible narratives.

With the advent of print in Europe, Aesopic material was gathered into influential collections. Erasmus of Rotterdam recommended the fables for schooling, seeing in them a resource for both grammar and virtue. In the 17th century, the French poet Jean de La Fontaine reworked many Aesopic plots into elegant French verse, overlaying classical structures with the social observation and courtly wit of Louis XIV's France. La Fontaine's Fables became a key text in French culture, and their portrayals of vanity, power and injustice often retain the Aesopic device of seemingly small characters revealing truths ignored by the mighty.

In England, translators and moralists produced their own Aesop editions, frequently aimed at children. Here, the line between folklore and formal moral education blurred: nursery reading, religious instruction and civic virtues converged around stock morals like the one encapsulated in this quote on kindness. Over time, specific phrases, once simple glosses of a story's lesson, took on an independent life as freestanding aphorisms.

Kindness, reciprocity and moral psychology

Aesop wrote long before the emergence of modern philosophy, social science or psychology, yet his intuition that small kind acts are not wasted finds echoes in later theoretical work on reciprocity, altruism and moral development. Several strands are particularly relevant.

Hobbes, Hume and the sentiment of benevolence

In the 17th century, Thomas Hobbes portrayed human beings as driven largely by self-interest and fear, needing strong authority to keep mutual aggression in check. On this view, kindness risks looking naive unless grounded in prudent calculation. However, even Hobbes conceded that humans seek reputation and that cooperative behaviour can be instrumentally rational; there is room here for the idea that acts of generosity, even small ones, help build the trust on which stable society depends.

By contrast, 18th-century moral sentimentalists, especially David Hume and Adam Smith, argued that we are naturally equipped with feelings of sympathy or fellow-feeling. Hume emphasised that we take pleasure in the happiness of others and discomfort in their suffering, while Smith's notion of the "impartial spectator" highlights our capacity to imagine how our conduct appears to an objective observer. In such frameworks, a small kindness is far from wasted: it responds to and reinforces dispositions at the heart of our moral life. It also trains our own sensibilities, making us more attuned to the needs and perspectives of others.

Kant and the duty of beneficence

Immanuel Kant, writing in the late 18th century, approached morality through duty rather than sentiment. For him, there is a categorical imperative to treat others never merely as means but always also as ends. From this flows a duty of beneficence: to further the ends of others where one can. In Kantian terms, a small act of kindness honours the rational agency and dignity of the other person. Its worth does not depend on its consequences; the moral law is fulfilled even if the act appears to yield no tangible return. Here, too, "no act of kindness is wasted" because its ethical value lies in the alignment of the agent's will with duty, not in the size of the outcome.

Utilitarianism and the calculus of small benefits

19th-century utilitarians such as Jeremy Bentham and John Stuart Mill evaluated actions in terms of their contributions to overall happiness. From a utilitarian angle, small acts of kindness matter precisely because happiness and suffering are often composed of many minor experiences. A kind word, a small favour or a moment of consideration can marginally improve someone's well-being; aggregated across societies and over time, such increments are far from trivial.

Later utilitarians have explored how "low-cost, high-benefit" acts - such as sharing information, making introductions, or providing minor assistance - form the micro-foundations of cooperative systems. What looks, from the actor's perspective, like an almost costless kindness can, in the right context, unlock disproportionately large positive effects.

Game theory, reciprocity and indirect returns

In the 20th century, game theory and the study of cooperation added formal structure to Aesop's intuition. Work by theorists such as Robert Axelrod on repeated prisoner's dilemma games showed that strategies embodying conditional cooperation - being kind or cooperative initially, and reciprocating others' behaviour thereafter - can be highly effective in sustaining stable, mutually beneficial relationships.

Experiments and models of indirect reciprocity suggest that helping someone can improve one's reputation with third parties, who may in turn be more inclined to help the original benefactor. In this sense, an apparently "wasted" act - say, assisting a stranger one will never meet again - can still generate returns via social perception and norms. The mouse's rescue of the lion is a vivid narrative analogue of these abstract dynamics.
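Axelrod's finding can be made concrete with a small simulation. The sketch below is illustrative only and is not drawn from Axelrod's own tournament code; the payoff values follow the standard prisoner's dilemma convention, and the strategy names are our own. It pits tit-for-tat, which cooperates first and thereafter mirrors its opponent's previous move, against an unconditional defector:

```python
# Standard prisoner's dilemma payoffs: (my payoff, opponent's payoff),
# keyed by (my move, their move). C = cooperate, D = defect.
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(history_self, history_other):
    """Cooperate on the first move, then mirror the opponent's last move."""
    return history_other[-1] if history_other else "C"

def always_defect(history_self, history_other):
    """Defect unconditionally."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Return cumulative payoffs for both players over repeated rounds."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Two conditional cooperators lock in mutual cooperation: 3 points each per round.
print(play(tit_for_tat, tit_for_tat))    # (30, 30)
# Against a defector, tit-for-tat is exploited only once, then retaliates.
print(play(tit_for_tat, always_defect))  # (9, 14)
```

The asymmetry in the second result captures the core insight: initial kindness is cheap, and conditional reciprocity prevents it from being exploited indefinitely.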

Evolutionary perspectives on altruism

Biologists and evolutionary theorists, including figures such as William Hamilton and later Robert Trivers, explored how cooperation and altruistic behaviour could evolve. Concepts like kin selection, reciprocal altruism and group selection provide mechanisms by which helping behaviour can be favoured by natural selection, especially when benefits to recipients (discounted by relatedness or likelihood of reciprocation) exceed costs to givers.

In this framework, small acts of kindness can be seen as low-cost signals of cooperative intent, fostering trust and potentially triggering reciprocal help. The lion and the mouse, of course, are anthropomorphic characters rather than biological models, but the story dramatises a pattern: generosity can turn seemingly insignificant others into allies.
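Hamilton's condition can be stated compactly: an altruistic act is favoured by selection when relatedness times benefit exceeds cost, that is, when r × b > c. The minimal illustration below uses hypothetical numbers chosen only to show the inequality at work:

```python
def hamilton_favours(r, b, c):
    """Hamilton's rule: altruism is favoured when r * b > c,
    where r is genetic relatedness, b the benefit to the recipient,
    and c the cost to the actor (benefit and cost in fitness units)."""
    return r * b > c

# Helping a full sibling (r = 0.5): a cost of 1 is worth a benefit of 3.
print(hamilton_favours(0.5, 3, 1))    # True  (0.5 * 3 = 1.5 > 1)
# The same act towards a first cousin (r = 0.125) is not favoured.
print(hamilton_favours(0.125, 3, 1))  # False (0.125 * 3 = 0.375 < 1)
```

Reciprocal altruism relaxes the relatedness requirement by substituting the likelihood of repayment for r, which is why low-cost kindness towards non-kin can still pay.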

Moral development and the education of kindness

In the 20th century, psychologists such as Jean Piaget and Lawrence Kohlberg studied how children's moral reasoning matures, while later researchers in developmental psychology examined the roots of empathy and prosocial behaviour. Experiments with very young children show early forms of spontaneous helping and sharing; socialisation then shapes how these impulses are expressed and regulated.

Narratives like Aesop's fables play an important role here. They provide simplified contexts in which consequences of actions are clear and moral stakes are stark. A child hearing the tale of the lion and the mouse is invited to see mercy not as weakness but as a risk that pays off, and to understand that size and status do not determine worth. The tag-line about no kindness being wasted condenses that lesson into a maxim that can be carried into everyday encounters.

Kindness in modern ethics and social thought

Recent moral philosophy has, in some strands, given renewed attention to the character of the moral agent rather than just rules or consequences. Virtue ethics, drawing on Aristotle and revived by thinkers such as Elizabeth Anscombe and Philippa Foot, considers traits like generosity, compassion and kindness as central excellences of personhood. On this view, individual kind acts are not isolated events but expressions of a stable disposition, cultivated through habit.

At the same time, care ethics, developed notably by Carol Gilligan and Nel Noddings, highlights the moral centrality of attending to particular others in their vulnerability and dependence. The spotlight falls on the often invisible labour of caring, listening and supporting - many of the very small acts that Aesop's maxim invites us to see as meaningful.

Social theorists and economists examining social capital also pick up related themes. Trust, norms of reciprocity and informal networks of help underpin effective institutions and resilient communities. A culture in which people habitually extend small kindnesses - returning lost items, offering directions, making allowances for others' mistakes - tends to enjoy higher levels of trust and lower transaction costs. From this macro perspective, each micro kindness again appears far from wasted; it marginally strengthens the fabric on which shared life depends.

A timeless lens on everyday conduct

Placed in its full context, Aesop's line is more than a gentle encouragement. It is the distilled wisdom of a tradition that has observed, with unsentimental clarity, how societies actually work. Power fluctuates; fortunes reverse; the weak become strong and the strong, weak. Status blinds; pride isolates. In such a world, the small, uncalculated kindness - offered to those who cannot compel it and may never repay it - turns out to be a surprisingly robust investment.

The lion did not spare the mouse because a cost-benefit analysis predicted future rescue. He did so as an expression of what it means to be magnanimous. The mouse did not free the lion because she had signed a contract; she responded out of gratitude and loyalty. The story implies that such acts are never wasted because they participate in a deeper moral order, one in which character, memory and relationship weigh more than immediate gain.

Aesop's genius lay in noticing that these truths can be taught most effectively not through abstract argument but through stories that lodge in the imagination. The aphorism "No act of kindness, no matter how small, is ever wasted" is a modern summation of that lesson - a reminder that, in a world often preoccupied with scale and spectacle, the quiet decision to be kind retains a significance that far exceeds its size.

References

1. https://philosiblog.com/2014/02/28/no-act-of-kindness-no-matter-how-small-is-ever-wasted/

2. https://www.passiton.com/inspirational-quotes/6666-no-act-of-kindness-no-matter-how-small-is

3. https://www.quotationspage.com/quote/24014.html

4. https://www.randomactsofkindness.org/kindness-quotes/127-no-act-of-kindness-no

5. https://friendsofwords.com/2021/07/19/no-act-of-kindness-no-matter-how-small-is-ever-wasted-aesop-meaning/

"No act of kindness, no matter how small, is ever wasted." - Quote: Aesop

‌

‌

Quote: Kristalina Georgieva - Managing Director, IMF

"What is being eliminated [by AI] are often tasks done by new entries into the labor force - young people. Conversely, people with higher skills get better pay, spend more locally, and that ironically increases demand for low-skill jobs. This is bad news for recent ... graduates." - Kristalina Georgieva - Managing Director, IMF

Kristalina Georgieva, Managing Director of the International Monetary Fund (IMF), delivered this stark observation during a World Economic Forum Town Hall in Davos on 23 January 2026, amid discussions on 'Dilemmas around Growth'. Speaking as AI adoption accelerates, she highlighted a dual dynamic: the elimination of routine entry-level tasks traditionally filled by young graduates, coupled with productivity gains for higher-skilled workers that paradoxically boost demand for low-skill service roles.1,2,5

Context of the Quote

Georgieva's remarks form part of the IMF's latest research, which estimates that AI will impact 40% of global jobs and 60% in advanced economies through enhancement, elimination, or transformation.1,3 She described AI as a 'tsunami hitting the labour market', emphasising its immediate effects: one in ten jobs in advanced economies already demands new skills, often IT-related, creating wage pressures on the middle class while entry-level positions vanish.1,2,5 This 'accordion of opportunities' sees high-skill workers earning more, spending locally, and sustaining low-skill jobs like hospitality, but leaves recent graduates struggling to enter the workforce.5

Backstory on Kristalina Georgieva

Born in 1953 in Sofia, Bulgaria, Kristalina Georgieva rose from communist-era academia to global economic leadership. She earned a PhD in economic modelling and worked as an economist before Bulgaria's democratic transition. Joining the World Bank in 1993, she climbed to roles including Chief Economist for Europe and Central Asia, then served as Commissioner for International Cooperation, Humanitarian Aid, and Crisis Response at the European Commission (2010-2014). Appointed IMF Managing Director in 2019, she navigated the COVID-19 crisis, drawing on the Fund's roughly USD 1 trillion lending capacity while advocating fiscal resilience. Georgieva's tenure has focused on inequality, climate finance, and digital transformation, making her an authoritative voice on AI's socioeconomic implications.3,5

Leading Theorists on AI and Labour Markets

The theoretical foundations of Georgieva's analysis trace to pioneering economists dissecting technology's job impacts.

  • David Autor: MIT economist whose 'task-based framework' (with Frank Levy) posits jobs as bundles of tasks, some automatable. Autor's research shows AI targets routine cognitive tasks, polarising labour markets by hollowing out middle-skill roles while boosting high- and low-skill demand - a 'polarisation' mirroring Georgieva's entry-level concerns.3
  • Erik Brynjolfsson and Andrew McAfee: MIT scholars and authors of The Second Machine Age, they argue AI enables 'recombinant innovation', automating cognitive work unlike prior mechanisation. Their work warns of 'winner-takes-all' dynamics exacerbating inequality without policy interventions like reskilling, aligning with IMF calls for adaptability training.3
  • Daron Acemoglu: MIT Nobel laureate (2024) who, with Pascual Restrepo, models automation's 'displacement vs productivity effects'. Their framework predicts AI displaces routine tasks but creates complementary roles; however, without incentives for human-AI collaboration, net job losses loom for low-skill youth.5

These theorists underpin IMF models, stressing that AI's net employment effect hinges on policy: Northern Europe's success in 'learning how to learn' exemplifies adaptive education over rigid skills training.5

Broader Implications

Georgieva urges proactive measures - reskilling youth, bolstering social safety nets, and regulating AI for inclusivity - to avert deepened inequality. Emerging markets face steeper skills gaps, risking divergence from advanced economies.1,3,5 Her personal embrace of tools like Microsoft Copilot underscores individual agency, yet systemic reform remains essential for equitable growth.

References

1. https://www.businesstoday.in/wef-2026/story/wef-summit-davos-2026-ai-jobs-workers-middle-class-labour-market-imf-kristalina-georgieva-512774-2026-01-24

2. https://fortune.com/2026/01/23/imf-chief-warns-ai-tsunami-entry-level-jobs-gen-z-middle-class/

3. https://globaladvisors.biz/2026/01/23/quote-kristalina-georgieva-managing-director-imf/

4. https://www.youtube.com/watch?v=4ANV7yuaTuA

5. https://www.weforum.org/podcasts/meet-the-leader/episodes/ai-skills-global-economy-imf-kristalina-georgieva/

"What is being eliminated [by AI] are often tasks done by new entries into the labor force - young people. Conversely, people with higher skills get better pay, spend more locally, and that ironically increases demand for low-skill jobs. This is bad news for recent ... graduates." - Quote: Kristalina Georgieva - Managing Director, IMF

‌

‌

Quote: Kristalina Georgieva - Managing Director, IMF

"Is the labour market ready [for AI] ? The honest answer is no. Our study shows that already in advanced economies, one in ten jobs require new skills." - Kristalina Georgieva - Managing Director, IMF

Kristalina Georgieva, Managing Director of the International Monetary Fund (IMF), delivered this stark assessment during a World Economic Forum town hall in Davos in January 2026, amid discussions on growth dilemmas in an AI-driven era1,3,4. Her words underscore the IMF's latest research revealing that artificial intelligence is already reshaping labour markets, with immediate implications for employment and skills development worldwide5.

Who is Kristalina Georgieva?

Born in 1953 in Bulgaria, Kristalina Georgieva rose through the ranks of international finance with a career marked by economic expertise and crisis leadership. Holding a PhD in economic modelling from Sofia University, she began at the World Bank in 1993 and rose through its senior ranks. She served as European Commission Vice-President for Budget and Human Resources from 2014 to 2016, and as Chief Executive Officer of the World Bank Group from 2017. Appointed IMF Managing Director in 2019, she navigated the institution through the COVID-19 pandemic, the global inflation surge, and geopolitical shocks, advocating for fiscal resilience and inclusive growth3,5. Georgieva's tenure has emphasised data-driven policy, particularly on technology's societal impacts, making her a pivotal voice on AI's economic ramifications1.

The Context of the Quote

Spoken at the WEF 2026 Town Hall on 'Dilemmas around Growth', the quote reflects IMF analysis showing AI affecting 40% of global jobs - enhanced, eliminated, or transformed - and 60% in advanced economies3,4. Georgieva highlighted that in advanced economies, one in ten jobs already requires new skills, often IT-related, creating supply shortages5. She likened AI's impact on entry-level roles to a 'tsunami', warning of heightened risks for young workers and graduates as routine tasks vanish1,2. Despite productivity gains - potentially boosting global growth by 0.1% to 0.8% - uneven distribution exacerbates inequality, with low-income countries facing only 20-26% exposure yet lacking adaptation infrastructure4.

Leading Theorists on AI and Labour Markets

The IMF's task-based framework draws from foundational work by economists like David Autor, who pioneered the 'task approach' in labour economics. Autor's research, with co-authors like Frank Levy, posits that jobs consist of discrete tasks, some automatable (routine cognitive or manual) and others not (non-routine creative or interpersonal). AI, unlike prior automation targeting physical routines, encroaches on cognitive tasks, polarising labour markets by hollowing out middle-skill roles3.

Erik Brynjolfsson and Andrew McAfee, MIT scholars and authors of Race Against the Machine (2011) and The Second Machine Age (2014), argue AI heralds a 'qualitative shift', automating high-skill analytical work previously safe from machines. Their studies predict widened inequality without intervention, as gains accrue to capital owners and superstars while displacing median workers. Recent IMF-aligned research echoes this, noting AI's dual potential for productivity surges and job reshaping3,5.

Other influencers include Carl Benedikt Frey and Michael Osborne, whose 2013 Oxford study estimated that 47% of US jobs were at high risk of automation, catalysing global discourse. Their work influenced IMF models, emphasising reskilling urgency3. Georgieva advocates policies inspired by these theorists: massive investment in adaptable skills - 'learning how to learn' - as seen in Nordic countries such as Finland and Sweden, where flexibility buffers disruption5. Data shows a 1% rise in new skills correlates with 1.3% overall employment growth, countering fears of net job loss5.

Broader Implications

Georgieva's warning arrives amid economic fragmentation - trade tensions, US-China rivalry, and sluggish productivity (global growth at 3.3% versus pre-pandemic 3.8%)5. AI could reverse this if harnessed equitably, but it demands proactive measures: reskilling for vulnerable youth, social protections, and regulatory frameworks to distribute gains. Advanced economies must lead, while supporting emerging markets to avoid an 'accordion of opportunities' - expanding in the rich world, contracting elsewhere4. Her call to action is clear: policymakers and businesses must use IMF insights to prepare, not react.

References

1. https://fortune.com/2026/01/23/imf-chief-warns-ai-tsunami-entry-level-jobs-gen-z-middle-class/

2. https://timesofindia.indiatimes.com/education/careers/news/ai-is-hitting-entry-level-jobs-like-a-tsunami-imf-chief-kristalina-georgieva-urges-students-to-prepare-for-change/articleshow/127381917.cms

3. https://globaladvisors.biz/2026/01/23/quote-kristalina-georgieva-managing-director-imf/

4. https://www.weforum.org/stories/2026/01/live-from-davos-2026-what-to-know-on-day-2/

5. https://www.weforum.org/podcasts/meet-the-leader/episodes/ai-skills-global-economy-imf-kristalina-georgieva/

"Is the labour market ready [for AI] ? The honest answer is no. Our study shows that already in advanced economies, one in ten jobs require new skills." - Quote: Kristalina Georgieva - Managing Director, IMF

‌

‌
© 2026 Global Advisors | Quantified Strategy Consulting, All rights reserved.