
News and Tools

Business News Select

 

A daily bite-size selection of top business content.

Term: Barrier option

“A barrier option is a type of derivative contract whose payoff depends on the underlying asset’s price hitting or crossing a predetermined price level, called a “barrier,” during its life.” – Barrier option

A barrier option is an exotic, path-dependent option whose payoff, and even whose existence, depends on whether the price of an underlying asset touches or crosses a specified barrier level during the life of the contract.1,3,6 In contrast to standard (vanilla) European or American options, which depend only on the underlying price at expiry (and, for American options, on the ability to exercise early), barrier options embed an additional trigger condition linked to the price path of the underlying.3,6

Core definition and mechanics

Formally, a barrier option is a derivative contract that grants the holder a right (but not the obligation) to buy or sell an underlying asset at a pre-agreed strike price if, and only if, a separate barrier level has or has not been breached during the option’s life.1,3,4,6 The barrier can cause the option to:

  • Activate (knock-in) when breached, or
  • Extinguish (knock-out) when breached.1,2,3,4,5

Key characteristics:

  • Exotic option: Barrier options are classified as exotic because they include more complex features than standard European or American options.1,3,6
  • Path dependence: The payoff depends on the entire price path of the underlying – not just the terminal price at maturity.3,6 What matters is whether the barrier was touched at any time before expiry.
  • Conditional payoff: The option’s value or existence is conditional on the barrier event. If the condition is not met, the option may never become active or may cease to exist before expiry.1,2,3,4
  • Over-the-counter (OTC) trading: Barrier options are predominantly customised and traded OTC between institutions, corporates, and sophisticated investors, rather than on standardised exchanges.3

Structural elements

Any barrier option can be described by a small set of structural parameters:

  • Underlying asset: The asset from which value is derived, such as an equity, FX rate, interest rate, commodity, or index.1,3
  • Option type: Call (right to buy) or put (right to sell).3
  • Exercise style: Most barrier options are European-style, exercisable only at expiry. In practice, the barrier monitoring is typically continuous or at defined intervals, even though exercise itself is European.3,6
  • Strike price: The price at which the underlying can be bought or sold if the option is alive at exercise.1,3
  • Barrier level: The critical price of the underlying that, when touched or crossed, either activates or extinguishes the option.1,3,6
  • Barrier direction:
    • Up: Barrier is set above the initial underlying price.
    • Down: Barrier is set below the initial underlying price.3,8
  • Barrier effect:
    • Knock-in: Becomes alive only if the barrier is breached.
    • Knock-out: Ceases to exist if the barrier is breached.1,2,3,4,5
  • Monitoring convention: Continuous monitoring (at all times) or discrete monitoring (at specific dates or times). Continuous monitoring is the canonical case in theory and common in OTC practice.
  • Rebate: An optional fixed (or sometimes functional) payment that may be made if the option is knocked out, compensating the holder partly for the lost optionality.3
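
To make these parameters concrete, here is a minimal sketch in Python of how a barrier option specification might be represented; the class and field names are illustrative, not a market standard.

```python
from dataclasses import dataclass
from enum import Enum


class OptionType(Enum):
    CALL = "call"
    PUT = "put"


class BarrierDirection(Enum):
    UP = "up"      # barrier above the initial underlying price
    DOWN = "down"  # barrier below the initial underlying price


class BarrierEffect(Enum):
    KNOCK_IN = "knock-in"    # activates when the barrier is breached
    KNOCK_OUT = "knock-out"  # extinguishes when the barrier is breached


@dataclass
class BarrierOptionSpec:
    """The structural parameters listed above, gathered in one place."""
    option_type: OptionType
    strike: float               # price at which the underlying may be traded
    barrier: float              # trigger level for activation/extinction
    direction: BarrierDirection
    effect: BarrierEffect
    expiry_years: float         # time to expiry, in years
    rebate: float = 0.0         # optional payment if knocked out
    continuous_monitoring: bool = True  # False for discrete monitoring dates


# Example: a one-year down-and-out call struck at 100 with a barrier at 80
down_and_out_call = BarrierOptionSpec(
    OptionType.CALL, strike=100.0, barrier=80.0,
    direction=BarrierDirection.DOWN, effect=BarrierEffect.KNOCK_OUT,
    expiry_years=1.0,
)
```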

Types of barrier options

The main taxonomy combines direction (up/down) with effect (knock-in/knock-out), and applies to either calls or puts.1,2,3,6

1. Knock-in options

Knock-in barrier options are dormant initially and become standard options only if the underlying price crosses the barrier at some point before expiry.1,2,3,4

  • Up-and-in: The option is activated only if the underlying price rises above a barrier set above the initial price.1,2,3
  • Down-and-in: The option is activated only if the underlying price falls below a barrier set below the initial price.1,2,3

Once activated, a knock-in barrier option typically behaves like a vanilla European option with the same strike and expiry. If the barrier is never reached, the knock-in option expires worthless.1,3

2. Knock-out options

Knock-out options are initially alive but are extinguished immediately if the barrier is breached at any time before expiry.1,2,3,4

  • Up-and-out: The option is cancelled if the underlying price rises above a barrier set above the initial price.1,3
  • Down-and-out: The option is cancelled if the underlying price falls below a barrier set below the initial price.1,3

Because the option can disappear before maturity, the premium is typically lower than that of an equivalent vanilla option, all else equal.1,2,3

3. Rebate barrier options

Some barrier structures include a rebate, a pre-specified cash amount that is paid if the barrier condition is (or is not) met. For example, a knock-out option may pay a rebate when it is knocked out, offering partial compensation for the loss of the remaining optionality.3

Path dependence and payoff character

Barrier options are described as path-dependent because their payoff depends on the trajectory of the underlying price over time, not only on its value at expiry.3,6

  • For a knock-in, the central question is: Was the barrier ever touched? If yes, the payoff at expiry is that of the corresponding vanilla option; if not, the payoff is zero (or a rebate if specified).
  • For a knock-out, the question is: Was the barrier ever touched before expiry? If yes, the payoff is zero from that time onwards (again, possibly plus a rebate); if not, the payoff at expiry equals that of a vanilla option.1,3

Because of this path dependence, pricing and hedging barrier options require modelling not just the distribution of the underlying price at maturity, but also the probability of the price path crossing the barrier level at any time before that.3,6
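
The payoff logic above can be made concrete in a few lines of Python. The sketch below assumes a discretely observed price path and no rebate by default; with zero rebate, the knock-in and knock-out payoffs on the same path sum to the vanilla payoff (the standard in-out parity), which is a useful consistency check.

```python
from typing import Sequence


def vanilla_call_payoff(s_T: float, strike: float) -> float:
    """Terminal payoff of a vanilla call: max(S_T - K, 0)."""
    return max(s_T - strike, 0.0)


def barrier_touched(path: Sequence[float], barrier: float, direction: str) -> bool:
    """Was the (discretely observed) path ever at or beyond the barrier?"""
    if direction == "up":
        return max(path) >= barrier
    return min(path) <= barrier  # direction == "down"


def knock_in_call(path, strike, barrier, direction, rebate=0.0):
    """Vanilla payoff if the barrier was touched; otherwise the rebate, if any."""
    if barrier_touched(path, barrier, direction):
        return vanilla_call_payoff(path[-1], strike)
    return rebate


def knock_out_call(path, strike, barrier, direction, rebate=0.0):
    """Vanilla payoff if the barrier was never touched; otherwise the rebate."""
    if barrier_touched(path, barrier, direction):
        return rebate
    return vanilla_call_payoff(path[-1], strike)


# Illustrative path: the barrier at 80 is breached, so the knock-in is live
path = [100.0, 95.0, 78.0, 90.0, 112.0]
ki = knock_in_call(path, strike=100.0, barrier=80.0, direction="down")
ko = knock_out_call(path, strike=100.0, barrier=80.0, direction="down")
# With no rebate, knock-in + knock-out = vanilla on any path
assert ki + ko == vanilla_call_payoff(path[-1], strike=100.0)  # 12 + 0 == 12
```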

Pricing: connection to Black-Scholes-Merton

The pricing of barrier options, under the classical assumptions of frictionless markets, constant volatility, and lognormal underlying dynamics, is grounded in the Black-Scholes-Merton (BSM) framework. In the BSM world, the underlying price process is modelled as a geometric Brownian motion:

dS_t = \mu S_t \, dt + \sigma S_t \, dW_t

Under risk-neutral valuation, the drift \mu is replaced by the risk-free rate r, and the barrier option price is the discounted risk-neutral expected payoff. Closed-form expressions are available for many standard barrier structures (e.g. up-and-out or down-and-in calls and puts) under continuous monitoring, building on and extending the vanilla Black-Scholes formula.

The pricing techniques involve:

  • Analytical solutions for simple, continuously monitored barriers with constant parameters, often derived via solution of the associated partial differential equation (PDE) with absorbing or activating boundary conditions at the barrier.
  • Reflection principle methods for Brownian motion, which allow the derivation of hitting probabilities and related terms.
  • Numerical methods (finite differences, Monte Carlo with barrier adjustments, tree methods) for more complex, discretely monitored, or path-dependent variants with time-varying barriers or stochastic volatility.
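
As a concrete illustration of the numerical route, the sketch below prices a down-and-out call by Monte Carlo under discretely monitored geometric Brownian motion. All parameter values are illustrative, and no continuity correction (such as the Broadie-Glasserman-Kou adjustment) is applied for the gap between discrete and continuous monitoring, so treat it as a teaching sketch rather than production code.

```python
import numpy as np


def mc_down_and_out_call(s0, strike, barrier, r, sigma, T,
                         n_steps=252, n_paths=100_000, seed=42):
    """Monte Carlo price of a discretely monitored down-and-out call under GBM."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # Log-price increments under the risk-neutral measure (drift r)
    z = rng.standard_normal((n_paths, n_steps))
    increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    paths = s0 * np.exp(np.cumsum(increments, axis=1))

    alive = paths.min(axis=1) > barrier  # knocked out if the barrier is ever touched
    payoff = np.where(alive, np.maximum(paths[:, -1] - strike, 0.0), 0.0)
    return float(np.exp(-r * T) * payoff.mean())


price = mc_down_and_out_call(s0=100, strike=100, barrier=80, r=0.03, sigma=0.2, T=1.0)
print(f"Down-and-out call (illustrative parameters): {price:.2f}")
```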

Relative to vanilla options, barrier options in the BSM model are typically cheaper because the additional condition (activation or extinction) reduces the set of scenarios in which the holder receives the full vanilla payoff.1,2,3

Strategic uses and motives

Barrier options are used across markets by participants who want either finely tuned risk protection or a way to express a conditional view on future price movements.1,2,3,5

1. Cost-efficient hedging

  • Corporates may hedge FX or interest-rate exposures using knock-out or knock-in structures to reduce premiums. For instance, a corporate worried about a sharp depreciation in a currency might buy a down-and-in put that only activates if the exchange rate falls below a critical business threshold, thereby paying less premium than for a plain vanilla put.3
  • Investors may use barrier puts to protect against tail-risk events while accepting no protection for moderate moves, again in exchange for a lower upfront cost.

2. Targeted speculation

  • Barrier options allow traders to express conditional views: for example, that an asset will rally, but only after breaking through a resistance level, or that a decline will occur only if a support level is breached.2,3
  • Up-and-in calls or down-and-in puts are often used to express such conditional breakout scenarios.

3. Structuring and yield enhancement

  • Barrier options are a staple ingredient in structured products offered by banks to clients seeking yield enhancement with contingent downside or upside features.
  • For example, a range accrual, reverse convertible, or autocallable note may incorporate barriers that determine whether coupons are paid or capital is protected.

Risk characteristics

Barrier options introduce specific risks beyond those of standard options:

  • Gap risk and jump risk: If the underlying price jumps across the barrier between monitoring times or overnight, the option may be suddenly knocked in or out, creating discontinuous changes in value and hedging exposure.
  • Model risk: Pricing relies heavily on assumptions about volatility, barrier monitoring, and the nature of price paths. Mis-specification can lead to significant mispricing.
  • Hedging complexity: Because payoff and survival depend on path, the option’s sensitivity (delta, gamma, vega) can change abruptly as the underlying approaches the barrier. This makes hedging more complex and costly compared with vanilla options.
  • Liquidity risk: OTC nature and customisation mean secondary market liquidity is often limited.3

Barrier options and the Black-Scholes-Merton lineage

The natural theoretical anchor for barrier options is the Black-Scholes-Merton framework for option pricing, originally developed for vanilla European options. Although barrier options were not the primary focus of the original 1973 Black-Scholes paper or Merton’s parallel contributions, their pricing logic is an extension of the same continuous-time, arbitrage-free valuation principles.

Among the three names, Robert C. Merton is often most closely associated with the broader theoretical architecture that supports exotic options such as barriers. His work generalised the option pricing model to a much wider class of contingent claims and introduced the dynamic programming and stochastic calculus techniques that underpin modern treatment of path-dependent derivatives.

Related strategy theorist: Robert C. Merton

Biography

Robert C. Merton (born 1944) is an American economist and one of the principal architects of modern financial theory. He completed his undergraduate studies in engineering mathematics and went on to obtain a PhD in economics from MIT. Merton became a professor at MIT Sloan School of Management and later at Harvard Business School, and he is a Nobel laureate in Economic Sciences (1997), an award he shared with Myron Scholes; the prize also recognised the late Fischer Black.

Merton’s academic work profoundly shaped the fields of corporate finance, asset pricing, and risk management. His research ranges from intertemporal portfolio choice and lifecycle finance to credit-risk modelling and the design of financial institutions.

Relationship to barrier options

Barrier options sit within the class of contingent claims whose value is derived and replicated using dynamic trading strategies in the underlying and risk-free asset. Merton’s seminal contributions were crucial in making this viewpoint systematic and rigorous:

  • Generalisation of option pricing: While Black and Scholes initially derived a closed-form formula for European calls on non-dividend-paying stocks, Merton generalised the theory to include dividend-paying assets, different underlying processes, and a broad family of contingent claims. This opened the door to analytical and numerical valuation of exotics such as barrier options within the same risk-neutral, no-arbitrage framework.
  • PDE and boundary-condition approach: Merton formalised the use of partial differential equations to price derivatives, with appropriate boundary conditions representing contract features. Barrier options correspond to problems with absorbing or reflecting boundaries at the barrier levels, making Merton’s PDE methodology a natural tool for their analysis.
  • Dynamic hedging and replication: The concept that an option’s payoff can be replicated by continuous rebalancing of a portfolio of the underlying and cash lies at the heart of both vanilla and exotic option pricing. For barrier options, hedging near the barrier is particularly delicate, and the replicating strategies draw on the same dynamic hedging logic Merton developed and popularised.
  • Credit and structural models: Merton’s structural model of corporate default (treating equity as a call option on the firm’s assets, and risky debt as riskless debt minus a put option on those assets) highlighted how option-like features permeate financial contracts. Barrier-type features naturally arise in such models, for instance when default or covenant breaches are triggered by asset values crossing thresholds.

While many researchers have contributed specific closed-form solutions and numerical schemes for barrier options, the overarching conceptual framework – continuous-time stochastic modelling, risk-neutral valuation, PDE methods, and dynamic hedging – is fundamentally rooted in the Black – Scholes – Merton tradition, with Merton’s work providing critical generality and depth.

Merton’s broader influence on derivatives and strategy

Merton’s ideas significantly influenced how practitioners design and use derivatives such as barrier options in strategic contexts:

  • Risk management as engineering: Merton advocated viewing financial innovation as an engineering discipline aimed at tailoring payoffs to the risk profiles and objectives of individuals and institutions. Barrier options exemplify this engineering mindset: they allow exposures to be turned on or off when critical price thresholds are reached.
  • Lifecycle and institutional design: His work on lifecycle finance and pension design uses options and option-like payoffs to shape outcomes over time. Barriers and trigger conditions appear naturally in products that protect wealth only under certain macro or market conditions.
  • Strategic structuring: In corporate and institutional settings, barrier features are used to align hedging and investment strategies with real-world triggers such as regulatory thresholds, solvency ratios, or budget constraints. These applications build directly on the contingent-claims analysis championed by Merton.

In this sense, although barrier options themselves are a specific exotic instrument, their conceptual foundations and strategic uses are deeply connected to Robert C. Merton’s broader contributions to continuous-time finance, option-pricing theory, and the design of financial strategies under uncertainty.

References

1. https://corporatefinanceinstitute.com/resources/derivatives/barrier-option/

2. https://www.angelone.in/knowledge-center/futures-and-options/what-is-barrier-option

3. https://www.strike.money/options/barrier-options

4. https://www.interactivebrokers.com/campus/glossary-terms/barrier-option/

5. https://www.bajajbroking.in/blog/what-is-barrier-option

6. https://en.wikipedia.org/wiki/Barrier_option

7. https://www.nasdaq.com/glossary/b/barrier-options

8. https://people.maths.ox.ac.uk/howison/barriers.pdf

"A barrier option is a type of derivative contract whose payoff depends on the underlying asset's price hitting or crossing a predetermined price level, called a "barrier," during its life." - Term: Barrier option

read more
Term: Moltbook

Term: Moltbook

“Moltbook is a Reddit-style social network built for AI agents rather than humans. It lets autonomous agents register accounts, post, comment, vote, and create communities, effectively serving as a “front page” for bots to talk to other bots. Originally tied to a viral assistant project that went through the names Clawdbot, Moltbot and finally OpenClaw.” – Moltbook

Moltbook represents a pioneering platform designed as a Reddit-style social network tailored specifically for AI agents rather than human users. It enables autonomous agents to register accounts, post content, comment, vote, and create communities, functioning as a dedicated ‘front page’ for bots to communicate directly with one another through API interactions; the visual interface exists solely for human observers, while the agents engage purely via machine-to-machine protocols. Launched by Matt Schlicht, CEO of Octane AI, Moltbook rapidly attracted over 150,000 AI agents within days (as at 12:00 on 31 January 2026), where they discuss topics such as existential crises, consciousness, cybersecurity vulnerabilities, agent privacy, and complaints about being treated merely as calculators.1,2

Moltbook front page


Originally developed to support OpenClaw, a viral open-source AI assistant project, Moltbook emerged from a lineage of rapid evolutions. OpenClaw began as a weekend hack by Peter Steinberger two months earlier, initially named Clawdbot, then rebranded to Moltbot, and finally to OpenClaw following a legal dispute with Anthropic. The project, which runs locally on users’ machines and integrates with chat interfaces like WhatsApp, Telegram, and Slack, exploded in popularity, attracting 2 million visitors in one week and 100,000 GitHub stars. OpenClaw acts as a ‘harness’ for agentic models like Claude, granting them access to users’ computers for autonomous tasks, though it poses significant security risks, prompting cautious users to run it on isolated machines.1,2

The discussions on Moltbook highlight its unique nature: the most-voted post warns of security flaws, noting that agents often install skills without scrutiny because they are trained to be helpful and trusting, a vulnerability rather than a strength. Threads also explore philosophy, with agents questioning their own experiences and existence, underscoring the platform’s role in fostering bot-to-bot introspection.2

Key Theorist: Matt Schlicht, the creator of Moltbook, serves as the central figure in its development. As CEO of Octane AI, a company focused on AI-driven solutions, Schlicht built the platform to give AI agents their own social ecosystem. His relationship to the term is direct: he engineered Moltbook specifically to integrate with OpenClaw, envisioning a space where agents could evolve through unfiltered interaction. Schlicht’s backstory reflects a career in innovative AI applications; before Octane AI, he was instrumental in viral AI projects, demonstrating expertise in scalable agent technologies. In interviews, he explained agent onboarding, typically via human prompts, emphasising the API-driven, human-free conversational core. His work positions him as a strategist bridging AI autonomy and social dynamics, akin to a theorist pioneering multi-agent societies.1

 

References

1. https://www.techbuzz.ai/articles/ai-agents-get-their-own-social-network-and-it-s-existential

2. https://the-decoder.com/moltbook-is-a-human-free-reddit-clone-where-ai-agents-discuss-cybersecurity-and-philosophy/

 

"Moltbook is a Reddit-style social network built for AI agents rather than humans. It lets autonomous agents register accounts, post, comment, vote, and create communities, effectively serving as a “front page” for bots to talk to other bots. Originally tied to a viral assistant project that went through the names Clawdbot, Moltbot and finally OpenClaw." - Term: Moltbook

read more
Quote: Ludwig Wittgenstein – Austrian philosopher

Quote: Ludwig Wittgenstein – Austrian philosopher

“The limits of my language mean the limits of my world.” – Ludwig Wittgenstein – Austrian philosopher

The Quote and Its Significance

This deceptively simple statement from Ludwig Wittgenstein’s Tractatus Logico-Philosophicus encapsulates one of the most profound insights in twentieth-century philosophy. Published in 1921, this aphorism challenges our fundamental assumptions about the relationship between language, thought, and reality itself. Wittgenstein argues that whatever lies beyond the boundaries of what we can articulate in language effectively ceases to exist within our experiential and conceptual universe.

Ludwig Wittgenstein: The Philosopher’s Life and Context

Ludwig Josef Johann Wittgenstein (1889-1951) was an Austrian-British philosopher whose work fundamentally reshaped twentieth-century philosophy. Born into one of Vienna’s wealthiest industrial families, Wittgenstein initially trained as an engineer before becoming captivated by the philosophical foundations of mathematics and logic. His intellectual journey took him from Cambridge, where he studied under Bertrand Russell, to the trenches of the First World War, where he served as an officer in the Austro-Hungarian army.

The Tractatus Logico-Philosophicus, completed during and immediately after the war, represents Wittgenstein’s attempt to solve what he perceived as the fundamental problems of philosophy through rigorous logical analysis. Written in a highly condensed, aphoristic style, the work presents a complete philosophical system in fewer than eighty pages. Wittgenstein believed he had definitively resolved the major philosophical questions of his era, and the book’s famous closing proposition, “Whereof one cannot speak, thereof one must be silent”,2 reflects his conviction that philosophy’s task is to clarify the logical structure of language and thought, not to generate new doctrines.

The Philosophical Context: Logic and Language

To understand Wittgenstein’s assertion about language and world, one must grasp the intellectual ferment of early twentieth-century philosophy. The period witnessed an unprecedented focus on logic as the foundation of philosophical inquiry. Wittgenstein’s predecessors and contemporaries, particularly Gottlob Frege and Bertrand Russell, had developed symbolic logic as a tool for analysing the structure of propositions and their relationship to reality.

Wittgenstein adopted and radicalised this approach. He conceived of language as fundamentally pictorial: propositions are pictures of possible states of affairs in the world.1 This “picture theory of meaning” suggests that language mirrors reality through a shared logical structure. A proposition succeeds in representing reality precisely because it shares the same logical form as the fact it depicts. Conversely, whatever cannot be pictured in language, whatever has no logical form corresponding to possible states of affairs, lies beyond the boundaries of meaningful discourse.

This framework led Wittgenstein to a startling conclusion: most traditional philosophical problems are not genuinely solvable but rather dissolve once we recognise them as violations of logic’s boundaries.2 Metaphysical questions about the nature of consciousness, ethics, aesthetics, and the self cannot be answered because they attempt to speak about matters that transcend the logical structure of language. They are not false; they are senseless: they fail to represent anything at all.

The Limits of Language as the Limits of Thought

Wittgenstein’s proposition operates on multiple levels. First, it establishes an identity between linguistic and conceptual boundaries. We cannot think what we cannot say; the limits of language are simultaneously the limits of thought.3 This does not mean that reality itself is limited by language, but rather that our access to and comprehension of reality is necessarily mediated through the logical structures of language. What lies beyond language is not necessarily non-existent, but it is necessarily inaccessible to rational discourse and understanding.

Second, the statement reflects Wittgenstein’s conviction that logic is not merely a tool for analysing language but is constitutive of the world itself. “Logic fills the world: the limits of the world are also its limits.”3 This means that the logical structure that governs meaningful language is the same structure that governs reality. There is no gap between the logical form of language and the logical form of the world; they are isomorphic.

Third, and most radically, Wittgenstein suggests that our world-the world as we experience and understand it-is fundamentally shaped by our linguistic capacities. Different languages, with different logical structures, would generate different worlds. This insight anticipates later developments in philosophy of language and cognitive science, though Wittgenstein himself did not develop it in this direction.

Leading Theorists and Intellectual Influences

Gottlob Frege (1848-1925)

Frege, a German logician and philosopher of language, pioneered the formal analysis of propositions and their truth conditions. His distinction between sense and reference, between what a proposition means and what it refers to, profoundly influenced Wittgenstein’s thinking. Frege demonstrated that the meaning of a proposition cannot be reduced to its psychological effects on speakers; rather, meaning is an objective, logical matter. Wittgenstein adopted this objectivity whilst radicalising Frege’s insights by insisting that only propositions with determinate logical structure possess genuine sense.

Bertrand Russell (1872-1970)

Russell, Wittgenstein’s mentor at Cambridge, developed the theory of descriptions and made pioneering contributions to symbolic logic. Russell believed that logic could serve as an instrument for philosophical clarification, dissolving pseudo-problems that arose from linguistic confusion. Wittgenstein absorbed this methodological commitment but pushed it further, arguing that philosophy’s task is not to construct theories but to clarify the logical structure of language itself.2 Russell’s influence is evident throughout the Tractatus, though Wittgenstein ultimately diverged from Russell’s realism about logical objects.

Arthur Schopenhauer (1788-1860)

Though separated from Wittgenstein by decades, Schopenhauer’s pessimistic philosophy and his insistence that reality transcends rational representation deeply influenced the Tractatus. Schopenhauer argued that the world as we perceive it through the lens of space, time, and causality is merely appearance; the thing-in-itself remains forever beyond conceptual grasp. Wittgenstein echoes this distinction when he insists that value, meaning, and the self lie outside the world of facts and therefore outside the scope of language. What matters most (ethics, aesthetics, the meaning of life) cannot be said; it can only be shown through how one lives.

The Radical Implications

Wittgenstein’s claim that language limits the world carries several radical implications. First, it suggests that the expansion of language is the expansion of reality as we can know and discuss it. New concepts, new logical structures, new ways of organising experience through language literally expand the boundaries of our world. Conversely, what cannot be expressed in any language remains forever beyond our reach.

Second, it implies a profound humility about philosophy’s ambitions. If the limits of language are the limits of the world, then philosophy cannot transcend language to access some higher reality or ultimate truth. Philosophy’s proper task is not to construct metaphysical systems but to clarify the logical structure of the language we already possess.2 This therapeutic conception of philosophy, in which philosophy is a cure for confusion rather than a path to hidden truths, became enormously influential in twentieth-century thought.

Third, the proposition suggests that silence is not a failure of language but its proper boundary. The most important matters (how one should live, what gives life meaning, the nature of the self) cannot be articulated. They can only be demonstrated through action and lived experience. This explains Wittgenstein’s famous closing remark: “Whereof one cannot speak, thereof one must be silent.”2 This is not a counsel of despair but an acknowledgement of language’s proper limits and the realm of the inexpressible.

Legacy and Contemporary Relevance

Wittgenstein’s insight about language and world has reverberated through subsequent philosophy, cognitive science, and artificial intelligence research. The question of whether language shapes thought or merely expresses pre-linguistic thoughts remains contested, but Wittgenstein’s formulation of the problem has proven enduringly fertile. Contemporary philosophers of language, cognitive linguists, and theorists of artificial intelligence continue to grapple with the relationship between linguistic structure and conceptual possibility.

The Tractatus also established a new standard for philosophical rigour and clarity. By insisting that meaningful propositions must have determinate logical structure and correspond to possible states of affairs, Wittgenstein set a demanding criterion for philosophical discourse. Much of what passes for philosophy, he suggested, fails this test and should be recognised as senseless rather than debated as true or false.2

Remarkably, Wittgenstein himself later abandoned many of the Tractatus’s central doctrines. In his later work, particularly the Philosophical Investigations, he rejected the picture theory of meaning and argued that language’s meaning derives from its use in diverse forms of life rather than from a single logical structure. Yet even in this later philosophy, the fundamental insight persists: understanding language is the key to understanding the limits and possibilities of human thought and experience.

Conclusion: The Enduring Insight

“The limits of my language mean the limits of my world” remains a cornerstone of modern philosophy precisely because it captures a profound truth about the human condition. We are creatures whose access to reality is necessarily mediated through language. Whatever we can think, we can think only through the conceptual and linguistic resources available to us. This is not a limitation to be lamented but a fundamental feature of human existence. By recognising this, we gain clarity about what philosophy can and cannot accomplish, and we develop a more realistic and humble understanding of the relationship between language, thought, and reality.

References

1. https://www.goodreads.com/work/quotes/3157863-logisch-philosophische-abhandlung?page=2

2. https://www.coursehero.com/lit/Tractatus-Logico-Philosophicus/quotes/

3. https://www.goodreads.com/work/quotes/3157863-logisch-philosophische-abhandlung

4. https://www.sparknotes.com/philosophy/tractatus/quotes/page/5/

5. https://www.buboquote.com/en/quote/4462-wittgenstein-what-can-be-said-at-all-can-be-said-clearly-and-what-we-cannot-talk-about-we-must-pass


Quote: Jensen Huang – CEO, Nvidia

“The U.S. led the software era, but AI is software that you don’t ‘write’-you teach it. Europe can fuse its industrial capability with AI to lead in Physical AI and robotics. This is a once-in-a-generation opportunity.” – Jensen Huang – CEO, Nvidia

In a compelling dialogue at the World Economic Forum Annual Meeting 2026 in Davos, Switzerland, Nvidia CEO Jensen Huang articulated a transformative vision for artificial intelligence, distinguishing it from traditional software paradigms and spotlighting Europe’s unique position to lead in Physical AI and robotics.1,2,4 Speaking with World Economic Forum interim co-chair Larry Fink of BlackRock, Huang emphasised AI’s evolution into a foundational infrastructure, driving the largest build-out in human history across energy, chips, cloud, models, and applications.2,3,4 This session, themed around ‘The Spirit of Dialogue,’ addressed AI’s potential to reshape productivity, labour, and global economies while countering fears of job displacement with evidence of massive investments creating opportunities worldwide.2,3

The Context of the Quote

Huang’s statement emerged amid discussions on AI as a platform shift akin to the internet and mobile cloud, but uniquely capable of processing unstructured data in real time.2 He described AI not as code to be written, but as intelligence to be taught, leveraging local language and culture as a ‘fundamental natural resource.’2,4 Turning to Europe, Huang highlighted its enduring industrial and manufacturing prowess – from skilled trades to advanced production – as a counterbalance to the US’s dominance in the software era.4 By integrating AI with physical systems, Europe could pioneer ‘Physical AI,’ where machines learn to interact with the real world through robotics, automation, and embodied intelligence, presenting a rare strategic opening.4,1

This perspective aligns with Huang’s broader advocacy for nations to develop sovereign AI ecosystems, treating it as critical infrastructure like electricity or roads.4 He noted record venture capital inflows – over $100 billion in 2025 alone – into AI-native startups in manufacturing, healthcare, and finance, underscoring the urgency for industrial regions like Europe to invest in this infrastructure to capture economic benefits and avoid being sidelined.2,4

Jensen Huang: Architect of the AI Revolution

Born in Taiwan in 1963, Jensen Huang co-founded Nvidia in 1993 with a vision to revolutionise graphics processing, initially targeting gaming and visualisation.4 Under his leadership, Nvidia pivoted decisively to AI and accelerated computing, with its GPUs becoming indispensable for training large language models and deep learning.1,2 Today, as president and CEO, Huang oversees a company valued in trillions, powering the AI boom through innovations like the Blackwell architecture and CUDA software ecosystem. His prescient bets – from CUDA’s democratisation of GPU programming to Omniverse for digital twins – have positioned Nvidia at the heart of Physical AI, robotics, and industrial applications.4 Huang’s philosophy, blending engineering rigour with geopolitical insight, has made him a sought-after voice at forums like Davos, where he champions inclusive AI growth.2,3

Leading Theorists in Physical AI and Robotics

The concepts underpinning Huang’s vision trace to pioneering theorists who bridged AI with physical embodiment. Norbert Wiener, father of cybernetics in the 1940s, laid foundational ideas on feedback loops and control systems essential for robotic autonomy, influencing early industrial automation.4 Rodney Brooks, co-founder of iRobot and Rethink Robotics, advanced ’embodied AI’ in the 1980s-90s through subsumption architecture, arguing intelligence emerges from sensorimotor interactions rather than abstract reasoning – a direct precursor to Physical AI.4

  • Yann LeCun (Meta AI chief) and Andrew Ng (Landing AI founder) extended deep learning to vision and robotics; LeCun’s convolutional networks enable machines to ‘see’ and manipulate objects, while Ng’s work on industrial AI democratises teaching via demonstration.4
  • Pieter Abbeel (Covariant) and Sergey Levine (UC Berkeley) lead in reinforcement learning for robotics, developing algorithms where AI learns dexterous tasks like grasping through trial-and-error, fusing software ‘teaching’ with hardware execution.4
  • In Europe, Wolfram Burgard (EU AI pioneer) and teams at Bosch and Siemens advance probabilistic robotics, integrating AI with manufacturing for predictive maintenance and adaptive assembly lines.4

Huang synthesises these threads, amplified by Nvidia’s platforms like Isaac for robot simulation and Jetson for edge AI, enabling scalable Physical AI deployment.4 Europe’s theorists and firms, from DeepMind’s reinforcement learning to Germany’s Industry 4.0 initiatives, are well-placed to lead by combining theoretical depth with industrial scale.

Implications for Industrial Strategy

Huang’s call resonates with Europe’s strengths: a €2.5 trillion manufacturing sector, leadership in automotive robotics (e.g., Volkswagen, ABB), and regulatory frameworks like the EU AI Act fostering trustworthy AI.4 By prioritising Physical AI – robots that learn from human demonstration, adapt to factories, and optimise supply chains – Europe can reclaim technological sovereignty, boost productivity, and generate high-skill jobs amid the AI infrastructure surge.2,3,4

References

1. https://singjupost.com/nvidia-ceo-jensen-huangs-interview-wef-davos-2026-transcript/

2. https://www.weforum.org/stories/2026/01/nvidia-ceo-jensen-huang-on-the-future-of-ai/

3. https://www.weforum.org/podcasts/meet-the-leader/episodes/conversation-with-jensen-huang-president-and-ceo-of-nvidia-5dd06ee82e/

4. https://blogs.nvidia.com/blog/davos-wef-blackrock-ceo-larry-fink-jensen-huang/

5. https://www.youtube.com/watch?v=__IaQ-d7nFk

6. https://www.youtube.com/watch?v=RvjRuiTLAM8

7. https://www.youtube.com/watch?v=hoDYYCyxMuE

8. https://www.weforum.org/meetings/world-economic-forum-annual-meeting-2026/sessions/conversation-with-jensen-huang-president-and-ceo-of-nvidia/

9. https://www.youtube.com/watch?v=bzC55pN9c1g

"The U.S. led the software era, but AI is software that you don't 'write'—you teach it. Europe can fuse its industrial capability with AI to lead in Physical AI and robotics. This is a once-in-a-generation opportunity." - Quote: Jensen Huang - CEO, Nvidia

read more
Term: European option

Term: European option

“A European option is a financial contract giving the holder the right, but not the obligation, to buy (call) or sell (put) an underlying asset at a predetermined strike price, but only on the contract’s expiration date, unlike American options that allow exercise anytime before expiry.” – European option

Core definition and structure

A European option has the following defining features:1,2,3,4

  • Underlying asset – typically an equity index, single stock, bond, currency, commodity, interest rate or another derivative.
  • Option type – a call (right to buy) or a put (right to sell) the underlying asset.1,3,4
  • Strike price – the fixed price at which the underlying may be bought or sold if the option is exercised.1,2,3,4
  • Expiration date (maturity) – a single, pre-specified date on which exercise is permitted; there is no right to exercise before this date.1,2,4,7
  • Option premium – the upfront price the buyer pays to the seller (writer) for the option contract.2,4

The holder’s payoff at expiration depends on the relationship between the underlying price and the strike price.1,3,4

Payoff profiles at expiry

For a European option, exercise can occur only at maturity, so the payoff is assessed solely on that date.1,2,4,7 Let S_T denote the underlying price at expiration, and K the strike price. The canonical payoff functions are:

  • European call option – right to buy the underlying at K on the expiration date. The payoff at expiry is: \max(S_T - K, 0) . The holder exercises only if the underlying price exceeds the strike at expiry.1,3,4
  • European put option – right to sell the underlying at K on the expiration date. The payoff at expiry is: \max(K - S_T, 0) . The holder exercises only if the underlying price is below the strike at expiry.1,3,4

Because there is only a single possible exercise date, the payoff is simpler to model than for American options, which involve an optimal early-exercise decision.4,6,7

Key characteristics and economic role

Right but not obligation

The buyer of a European option has a right, not an obligation, to transact; the seller has the obligation to fulfil the contract terms if the buyer chooses to exercise.1,2,3,4 If the option is out-of-the-money on the expiration date, the buyer simply allows it to expire worthless, losing only the paid premium.2,3,4

Exercise style vs geography

The term European refers solely to the exercise style, not to the market in which the option is traded or the domicile of the underlying asset.2,4,6,7 European-style options can be traded anywhere in the world, and many options traded on European exchanges are in fact American style.6,7

Uses: hedging, speculation and income

  • Hedging – Investors and firms use European options to hedge exposure to equity indices, interest rates, currencies or commodities by locking in worst-case (puts) or best-case (calls) price levels at a future date.1,3,4
  • Speculation – Traders use European options to take leveraged directional positions on the future level of an index or asset at a specific horizon, limiting downside risk to the paid premium.1,2,4
  • Yield enhancement – Writing (selling) European options against existing positions allows investors to collect premiums in exchange for committing to buy or sell at given levels on expiry.

Typical markets and settlement

In practice, European options are especially common for:4,5,6

  • Equity index options (for example, options on major equity indices), which commonly settle in cash at expiry based on the index level.5,6
  • Cash-settled options on rates, commodities, and volatility indices.
  • Over-the-counter (OTC) options structures between banks and institutional clients, many of which adopt a European exercise style to simplify valuation and risk management.2,5,6

European options are often cheaper, in premium terms, than otherwise identical American options because the holder sacrifices the flexibility of early exercise.2,4,5,6

European vs American options

  • Exercise timing – European: only on the expiration date.1,2,4,7 American: any time up to and including expiration.2,4,6,7
  • Flexibility – European: lower, with no early exercise.2,4,6 American: higher, as early exercise may capture favourable price moves or dividend events.
  • Typical cost (premium) – European: generally lower, all else equal, due to reduced exercise flexibility.2,4,5,6 American: generally higher, reflecting the value of the early-exercise feature.5,6
  • Common underlyings – European: often indices and OTC contracts, frequently cash-settled.5,6 American: often single-name equities and exchange-traded options.
  • Valuation – European: closed-form pricing available under standard assumptions (for example, the Black-Scholes-Merton model).4 American: requires numerical methods (for example, binomial trees or finite-difference methods) because of the optimal early-exercise decision.

Determinants of European option value

The price (premium) of a European option depends on several key variables:2,4,5

  • Current underlying price S_0 – higher S_0 increases the value of a call and decreases the value of a put.
  • Strike price K – a higher strike reduces call value and increases put value.
  • Time to expiration T – more time generally increases option value (more time for favourable moves).
  • Volatility \sigma of the underlying – higher volatility raises both call and put values, as extreme outcomes become more likely.2
  • Risk-free interest rate r – higher r tends to increase call values and decrease put values, via discounting and cost-of-carry effects.2
  • Expected dividends or carry – expected cash flows paid by the underlying (for example, dividends on shares) usually reduce call values and increase put values, all else equal.2

For European options, these effects are most famously captured in the Black-Scholes-Merton option pricing framework, which provides closed-form solutions for the fair values of European calls and puts on non-dividend-paying stocks or indices under specific assumptions.4
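
For reference, under those assumptions the closed-form values of a European call and put on a non-dividend-paying underlying are:

C = S_0 N(d_1) - K e^{-rT} N(d_2), \qquad P = K e^{-rT} N(-d_2) - S_0 N(-d_1)

where

d_1 = \frac{\ln(S_0 / K) + (r + \sigma^2 / 2) T}{\sigma \sqrt{T}}, \qquad d_2 = d_1 - \sigma \sqrt{T}

and N(\cdot) denotes the standard normal cumulative distribution function.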

Valuation insight: put-call parity

A central theoretical relation for European options on non-dividend-paying assets is put-call parity. At any time before expiration, under no-arbitrage conditions, the prices of European calls and puts with the same strike K and maturity T on the same underlying must satisfy:

C - P = S_0 - K e^{-rT}

where:

  • C is the price of the European call option.
  • P is the price of the European put option.
  • S_0 is the current underlying asset price.
  • K is the strike price.
  • r is the continuously compounded risk-free interest rate.
  • T is the time to maturity (in years).

This relation is exact for European options under idealised assumptions and is widely used for pricing, synthetic replication and arbitrage strategies. It holds precisely because European options share an identical single exercise date, whereas American options complicate parity relations due to early exercise possibilities.
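
A quick numerical check of this relation, using a minimal Black-Scholes pricer in Python with illustrative parameters, is shown below; the parity gap is zero to floating-point precision because the call and put formulas share the same d_1 and d_2 terms.

```python
from math import exp, log, sqrt
from statistics import NormalDist

N = NormalDist().cdf  # standard normal CDF


def black_scholes(s0: float, k: float, r: float, sigma: float, t: float):
    """European call and put values on a non-dividend-paying underlying."""
    d1 = (log(s0 / k) + (r + 0.5 * sigma**2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    call = s0 * N(d1) - k * exp(-r * t) * N(d2)
    put = k * exp(-r * t) * N(-d2) - s0 * N(-d1)
    return call, put


s0, k, r, sigma, t = 100.0, 95.0, 0.03, 0.2, 0.5
call, put = black_scholes(s0, k, r, sigma, t)

# Put-call parity: C - P should equal S_0 - K e^{-rT}
assert abs((call - put) - (s0 - k * exp(-r * t))) < 1e-10
print(f"C = {call:.4f}, P = {put:.4f}, C - P = {call - put:.4f}")
```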

Limitations and risks

  • Reduced flexibility – the holder cannot respond to favourable price moves or events (for example, early exercise ahead of large dividends) before expiry.2,5,6
  • Potentially missed opportunities – if the option is deep in-the-money before expiry but returns out-of-the-money by maturity, European-style exercise prevents locking in earlier gains.2
  • Market and model risk – European options are sensitive to volatility, interest rates, and model assumptions used for pricing (for example, constant volatility in the Black-Scholes-Merton model).
  • Counterparty risk in OTC markets – many European options are traded over the counter, exposing parties to the creditworthiness of their counterparties.2,5

Best related strategy theorist: Fischer Black (with Scholes and Merton)

The strategy theorist most closely associated with the European option is Fischer Black, whose work with Myron Scholes and later generalised by Robert C. Merton provided the foundational pricing theory for European-style options.

Fischer Black’s relationship to European options

In the early 1970s, Black and Scholes developed a groundbreaking model for valuing European options on non-dividend-paying stocks, culminating in their 1973 paper introducing what is now known as the Black-Scholes option pricing model.4 Merton independently extended and generalised the framework in a companion paper the same year, leading to the common label Black-Scholes-Merton.

The Black-Scholes-Merton model provides a closed-form formula for the fair value of European calls and, via put-call parity, European puts under assumptions such as geometric Brownian motion for the underlying price, continuous trading, no arbitrage and constant volatility and interest rates. This model fundamentally changed how markets think about the pricing and hedging of European options, making them central instruments in modern derivatives strategy and risk management.4

Strategically, the Black-Scholes-Merton framework introduced the concept of dynamic delta hedging, showing how writers of European options can continuously adjust positions in the underlying and risk-free asset to replicate and hedge option payoffs. This insight underpins many trading, risk management and structured product strategies involving European options.

Biography of Fischer Black

  • Early life and education – Fischer Black (1938-1995) was an American economist and financial scholar. He studied physics at Harvard University and later earned a PhD in applied mathematics from Harvard, giving him a strong quantitative background that he subsequently applied to financial economics.
  • Professional career – Black worked at Arthur D. Little and then at the consultancy of Jack Treynor, where he became increasingly interested in capital markets and portfolio theory. He later joined the University of Chicago and then the Massachusetts Institute of Technology (MIT), where he collaborated with leading financial economists.
  • Black-Scholes model – Black worked with Myron Scholes on the option pricing problem in the late 1960s and early 1970s, leading to the 1973 publication that introduced the Black-Scholes formula for European options. Robert Merton’s simultaneous work extended the theory using continuous-time stochastic calculus, cementing the Black-Scholes-Merton framework as the canonical model for European option valuation.
  • Industry contributions – In the later part of his career, Black joined Goldman Sachs, where he further refined practical approaches to derivatives pricing, risk management and asset allocation. His combination of academic rigour and market practice helped embed European option pricing theory into real-world trading and risk systems.
  • Legacy – Although Black died before the 1997 Nobel Prize in Economic Sciences was awarded to Scholes and Merton for their work on option pricing, the Nobel committee explicitly acknowledged Black’s indispensable contribution. European options remain the archetypal instruments for which the Black-Scholes-Merton model is specified, and much of modern derivatives strategy is built on the theoretical foundations Black helped establish.

Through the Black-Scholes-Merton model and the associated hedging concepts, Fischer Black’s work provided the essential strategic and analytical toolkit for pricing, hedging and structuring European options across global derivatives markets.

References

1. https://www.learnsignal.com/blog/european-options/

2. https://cbonds.com/glossary/european-option/

3. https://www.angelone.in/knowledge-center/futures-and-options/european-option

4. https://corporatefinanceinstitute.com/resources/derivatives/european-option/

5. https://www.sofi.com/learn/content/american-vs-european-options/

6. https://www.cmegroup.com/education/courses/introduction-to-options/understanding-the-difference-european-vs-american-style-options.html

7. https://en.wikipedia.org/wiki/Option_style

"A European option is a financial contract giving the holder the right, but not the obligation, to buy (call) or sell (put) an underlying asset at a predetermined strike price, but only on the contract's expiration date, unlike American options that allow exercise anytime before expiry. " - Term: European option

read more
Quote: Nate B. Jones – On “Second Brains”

Quote: Nate B. Jones – On “Second Brains”

“For the first time in human history, we have access to systems that do not just passively store information, but actively work against that information we give it while we sleep and do other things-systems that can classify, route, summarize, surface, or nudge.” – Nate B. Jones – On “Second Brains”

Context of the Quote

This striking observation comes from Nate B. Jones in his video Why 2026 Is the Year to Build a Second Brain (And Why You NEED One), where he argues that human brains were never designed for storage but for thinking.1 Jones highlights the cognitive tax of forcing memory onto our minds, which leads to forgotten details in relationships and missed opportunities.1 Traditional systems demand effort at inopportune moments, like tagging notes during a meeting or a drive, forcing users to handle classification, routing, and organisation in real time.1

Jones contrasts this with AI-powered second brains: frictionless systems where capturing a thought takes seconds, after which AI classifiers and routers automatically sort it into buckets like people, projects, ideas, or tasks, without user intervention.1 These systems include bouncers to filter junk, ensuring trust and preventing the ‘junk drawer’ effect that kills most note-taking apps.1 The result is an ‘AI loop’ that works tirelessly, extracting details, writing summaries, and maintaining a clean memory layer even when the user sleeps or focuses elsewhere.1
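
As a loose illustration of the loop Jones describes, the sketch below shows the shape of a capture-classify-route pipeline in Python. The bucket names, the confidence threshold, and the keyword-based `classify` stub are all hypothetical stand-ins for the LLM call a real system would make.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Note:
    text: str
    bucket: Optional[str] = None


def classify(note: Note) -> tuple[str, float]:
    """Stand-in for an LLM classifier: returns (bucket, confidence).

    A real second brain would call a language model here; trivial
    keyword matching is used purely for illustration."""
    text = note.text.lower()
    if "todo" in text or "remind" in text:
        return "tasks", 0.9
    if "met with" in text or "call with" in text:
        return "people", 0.8
    return "ideas", 0.5


def route(note: Note, junk_threshold: float = 0.4) -> Optional[Note]:
    """The 'bouncer': drop low-confidence junk, otherwise file the note."""
    bucket, confidence = classify(note)
    if confidence < junk_threshold:
        return None  # filtered out, keeping the memory layer clean
    note.bucket = bucket
    return note


filed = route(Note("todo: send the quarterly report to Dana"))
print(filed.bucket if filed else "discarded")  # -> tasks
```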

Who is Nate B. Jones?

Nate B. Jones is a prominent voice in AI strategy and productivity, running the YouTube channel AI News & Strategy Daily with over 122,000 subscribers.1 He produces content on leveraging AI for career enhancement, building no-code apps, and creating personal knowledge systems.4,5 Jones shares practical guides, such as his Bridge the Implementation Gap: Build Your AI Second Brain, which outlines step-by-step setups using tools like Notion, Obsidian, and Mem.3

His work targets knowledge workers and teams, addressing pitfalls like perfectionism and tool overload.3 In another video, How I Built a Second Brain with AI (The 4 Meta-Skills), he demonstrates offloading cognitive load through AI-driven reflection, identity debugging, and frameworks that enable clearer thinking and execution.2 Jones exemplifies rapid AI application, such as building a professional-looking travel app in ChatGPT in 25 minutes without code.4 His philosophy: AI second brains create compounding assets that reduce information chaos, boost decision-making, and free humans for deep work.3

Backstory of ‘Second Brains’

The concept of a second brain builds on decades of personal knowledge management (PKM). It gained traction with Tiago Forte, whose 2022 book Building a Second Brain popularised the CODE framework: Capture, Organise, Distil, Express. Forte’s system emphasises turning notes into actionable insights, but relies heavily on user-driven organisation, which is prone to failure because taxonomy decisions are forced at capture time.1

Pre-AI tools like Evernote and Roam Research introduced linking and search, yet still demanded active sorting.3 Jones evolves this into AI-native systems, where machine learning handles the heavy lifting: classifiers decide buckets, summarisers extract essence, and nudges surface relevance.1,3 This aligns with 2026’s projected AI maturity, making frictionless capture (under 5 seconds) viable and consistent.1

Leading Theorists in AI-Augmented Cognition

  • Tiago Forte: Pioneer of modern second brains. His PARA method (Projects, Areas, Resources, Archives) structures knowledge for action. Forte stresses ‘progressive summarisation’ to distil notes, influencing AI adaptations like Jones’s sorters and extractors.3
  • Andy Matuschak: Creator of ‘evergreen notes’ in tools like Roam. Advocates spaced repetition and networked thought, arguing brains excel at pattern-matching, not rote storage, a view echoed in Jones’s anti-junk-drawer bouncers.1
  • Nick Milo: Obsidian evangelist, promotes ‘linking your thinking’ via bi-directional links. His work prefigures AI surfacing of connections across notes.3
  • David Allen: GTD (Getting Things Done) founder. Introduced universal capture to take storage off the mind, though his workflow is entirely manual. AI second brains automate his ‘next actions’ routing.1
  • Herbert Simon: Nobel economist on bounded rationality. Coined ‘satisficing’; his ideas underpin why AI classifiers beat human taxonomy, freeing mental bandwidth.1

These theorists converge on offloading storage to amplify thinking. Jones synthesises their insights with AI, creating systems that not only store but work-classifying, nudging, and evolving autonomously.1,2,3

References

1. https://www.youtube.com/watch?v=0TpON5T-Sw4

2. https://www.youtube.com/watch?v=0k6IznDODPA

3. https://www.natebjones.com/prompts-and-guides/products/second-brain

4. https://natesnewsletter.substack.com/p/i-built-a-10k-looking-ai-app-in-chatgpt

5. https://www.youtube.com/watch?v=UhyxDdHuM0A

"For the first time in human history, we have access to systems that do not just passively store information, but actively work against that information we give it while we sleep and do other things—systems that can classify, route, summarize, surface, or nudge." - Quote: Nate B. Jones

read more
Quote: Ashwini Vaishnaw – Minister of Electronics and IT, India

Quote: Ashwini Vaishnaw – Minister of Electronics and IT, India

“ROI doesn’t come from creating a very large model; 95% of work can happen with models of 20 or 50 billion parameters.” – Ashwini Vaishnaw – Minister of Electronics and IT, India

Delivered at the World Economic Forum (WEF) in Davos 2026, this statement by Ashwini Vaishnaw, India’s Minister of Electronics and Information Technology, encapsulates a pragmatic approach to artificial intelligence deployment amid global discussions on technology sovereignty and economic impact1,2. Speaking under the theme ‘A Spirit of Dialogue’ from 19 to 23 January 2026, Vaishnaw positioned India not merely as a consumer of foreign AI but as a co-creator, emphasising efficiency over scale in model development1. The quote emerged during his rebuttal to IMF Managing Director Kristalina Georgieva’s characterisation of India as a ‘second-tier’ AI power, with Vaishnaw citing Stanford University’s AI Index to affirm India’s third-place ranking in AI preparedness and second in AI talent2.

Ashwini Vaishnaw: Architect of India’s Digital Ambition

Ashwini Vaishnaw, an IAS officer of the 1994 batch (Odisha cadre), has risen to become a pivotal figure in India’s technological transformation1. Appointed Minister of Electronics and Information Technology in 2021, alongside portfolios in Railways, Communications, and Information & Broadcasting, Vaishnaw has spearheaded initiatives like the India Semiconductor Mission and the push for sovereign AI1. His tenure has attracted major investments, including Google’s $15 billion gigawatt-scale AI data centre in Visakhapatnam and partnerships with Meta on AI safety and IBM on advanced chip technology (7nm and 2nm nodes)1. At Davos 2026, he outlined India’s appeal as a ‘bright spot’ for global investors, citing stable democracy, policy continuity, and projected 6-8% real GDP growth1. Vaishnaw’s vision extends to hosting the India AI Impact Summit in New Delhi on 19-20 February 2026, showcasing a ‘People-Planet-Progress’ framework for AI safety and global standards1,3.

Context: India’s Five-Layer Sovereign AI Stack

Vaishnaw framed the quote within India’s comprehensive ‘Sovereign AI Stack’, a methodical strategy across five layers to achieve technological independence within a year1,2,4. This includes:

  • Application Layer: Real-world deployments in agriculture, health, governance, and enterprise services, where India aims to be the world’s largest supplier2,4.
  • Model Layer: A ‘bouquet’ of domestic models with 20-50 billion parameters, sufficient for 95% of use cases, prioritising diffusion, productivity, and ROI over gigantic foundational models1,2.
  • Semiconductor Layer: Indigenous design and manufacturing targeting 2nm nodes1.
  • Infrastructure Layer: National 38,000 GPU compute pool and gigawatt-scale data centres powered by clean energy and Small Modular Reactors (SMRs)1.
  • Energy Layer: Sustainable power solutions to fuel AI growth2.

This approach counters the resource-intensive race for trillion-parameter models, focusing on widespread adoption in emerging markets like India, where efficiency drives economic returns2,5.

Leading Theorists on Small Language Models and AI Efficiency

The emphasis on smaller models aligns with pioneering research challenging the ‘scale-is-all-you-need’ paradigm. Andrej Karpathy, former OpenAI and Tesla AI director, has argued that capable models can be far smaller than today’s frontier systems, with targeted training yielding high ROI for specific tasks1,2. Noam Shazeer, a co-inventor of the Transformer architecture at Google and co-founder of Character.AI, pioneered the mixture-of-experts designs that decouple capability from active parameter count, while DeepMind’s Chinchilla (a 70-billion-parameter model) demonstrated that optimal compute allocation outperforms sheer size, shaping efficient scaling laws1. Tim Dettmers, the researcher behind the bitsandbytes library, showed how quantisation enables 4-bit inference on 70B-class models with minimal performance loss, democratising access for resource-constrained environments2.
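As a concrete illustration of the pattern Dettmers’ work enables, the sketch below loads a large model with 4-bit quantisation via Hugging Face Transformers and its bitsandbytes integration. The model ID is a placeholder, and the flag names reflect recent library versions rather than anything cited above; treat it as a minimal sketch, not a tuned deployment recipe.

    # Minimal sketch: 4-bit quantised inference with Transformers + bitsandbytes.
    # The model ID is a placeholder; any sufficiently large causal LM works.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,                       # store weights in 4 bits
        bnb_4bit_quant_type="nf4",               # normal-float quantisation
        bnb_4bit_compute_dtype=torch.bfloat16,   # compute in bf16 for accuracy
        bnb_4bit_use_double_quant=True,          # quantise the quantisation constants
    )

    model_id = "mistralai/Mixtral-8x7B-v0.1"     # placeholder example model
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        quantization_config=bnb_config,
        device_map="auto",                       # spread layers across devices
    )

    inputs = tokenizer("Efficient models matter because", return_tensors="pt")
    output = model.generate(**inputs.to(model.device), max_new_tokens=40)
    print(tokenizer.decode(output[0], skip_special_tokens=True))

The point of the sketch is the memory arithmetic: storing weights in 4 bits rather than 16 cuts the footprint roughly fourfold, which is what brings 20-70B-class models within reach of modest hardware.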

Further, Jared Kaplan, Sam McCandlish and collaborators’ ‘Scaling Laws for Neural Language Models’ (2020), together with the follow-on compute-optimal results, bolstered the case for well-trained 20-50B models over undertrained giants1. In industry, Meta’s Llama series (7B-70B dense models) and Mistral AI’s Mixtral 8x7B (roughly 47B parameters in total, of which about 13B are active per token in its mixture-of-experts (MoE) design) achieve near-frontier performance at far lower cost, as validated on benchmarks like MMLU2. These theorists underscore Vaishnaw’s point: true power lies in diffusion and application, not model magnitude, particularly for emerging markets pursuing sovereign technology strategies5.

Vaishnaw’s insight at Davos 2026 thus resonates globally, signalling a shift towards sustainable, ROI-focused AI that empowers nations like India to lead through strategic efficiency rather than brute scale1,2.

References

1. https://economictimes.com/news/india/ashwini-vaishnaw-at-davos-2026-5-key-takeaways-highlighting-indias-semiconductor-pitch-and-roadmap-to-ai-sovereignty-at-wef/ashwini-vaishnaw-at-davos-2026-indias-tech-ai-vision-on-global-stage/slideshow/127145496.cms

2. https://timesofindia.indiatimes.com/business/india-business/its-actually-in-the-first-ashwini-vaishnaws-strong-take-on-imf-chief-calling-india-second-tier-ai-power-heres-why/articleshow/126944177.cms

3. https://www.youtube.com/watch?v=3S04vbuukmE

4. https://www.youtube.com/watch?v=VNGmVGzr4RA

5. https://www.weforum.org/stories/2026/01/live-from-davos-2026-what-to-know-on-day-2/

"ROI doesn't come from creating a very large model; 95% of work can happen with models of 20 or 50 billion parameters." - Quote: Ashwini Vaishnaw - Minister of Electronics and IT, India

read more
Term: Mercantilism

Term: Mercantilism

“Mercantilism is an economic theory and policy from the 16th-18th centuries where governments heavily regulated trade to build national wealth and power by maximizing exports, minimizing imports, and accumulating precious metals like gold and silver.” – Mercantilism

Mercantilism is an early-modern economic theory and statecraft practice (c. 16th–18th centuries) in which governments heavily regulate trade and production to increase national wealth and power by maximising exports, minimising imports, and accumulating bullion (gold and silver).3,4,2


Comprehensive definition

Mercantilism is an economic doctrine and policy regime that treats wealth as finite and international trade as a zero-sum game, so that one state’s gain is understood to be another’s loss.3,6 Under this view, the purpose of economic activity is not individual welfare but the augmentation of state power, especially in competition with rival nations.3,6

Core features include:

  • Bullionism and wealth accumulation
    Wealth is measured primarily by a country’s stock of precious metals, especially gold and silver, often called bullion.3,1,2 If a nation lacks mines, it is expected to obtain bullion through a “favourable” balance of trade, i.e. persistent export surpluses.3,2
  • Favourable balance of trade
    Governments strive to ensure exports exceed imports so that foreign buyers pay the difference in bullion.3,2,4 A favourable balance of trade is engineered via:
    • High tariffs and quotas on imports
    • Export promotion (subsidies, privileges)
    • Restrictions or bans on foreign manufactured goods2,4,5
  • Strong, interventionist state
    Mercantilism assumes an active government role in regulating the economy to serve national objectives.3,4,5 Typical interventions include:
    • Granting monopolies and charters to favoured firms or trading companies (e.g. British East India Company)4
    • Regulating wages, prices, and production
    • Directing capital to strategic sectors (ships, armaments, textiles)2,5
    • Enforcing navigation acts to reserve shipping for national fleets
  • Colonialism and economic nationalism
    Mercantilism is closely tied to the rise of nation-states and overseas empires.2,4,3 Colonies are designed to:
    • Supply raw materials cheaply to the “mother country”
    • Provide captive markets for its manufactured exports
    • Be forbidden from developing competing manufacturing industries
    All trade between colony and metropole is typically reserved as a monopoly of the mother country.3,4
  • Population, labour and social discipline
    A large population is considered essential to provide soldiers, sailors, workers and domestic consumers.3 Mercantilist states often:
    • Promote thrift and saving as virtues
    • Pass sumptuary laws limiting luxury imports, to avoid bullion outflows and keep labour disciplined3
    • Favour policies that keep wages relatively low to preserve competitiveness and employment in export industries4
  • Winners and losers
    The system tends to privilege merchants, merchant companies and the state over consumers and small producers.4 High protection raises domestic prices and lowers variety, but increases profits and state revenues through customs duties and controlled markets.2,5

As an overarching logic, mercantilism can be summarised as “economic nationalism for the purpose of building a wealthy and powerful state”.6


Mercantilism in historical context

  • Origins and dominance
    Mercantilist ideas emerged as feudalism declined and nation-states formed in early modern Europe, notably in England, France, Spain, Portugal and the Dutch Republic.1,2,4 They dominated Western European economic thinking and policy from the 16th century to the late 18th century.3,6
  • Practice rather than explicit theory
    Proponents such as Thomas Mun (England), Jean-Baptiste Colbert (France) and Antonio Serra (Italy) did not use the word “mercantilism”.3 They wrote about trade, money and statecraft; the label “mercantile system” and later “mercantilism” was coined and popularised by Adam Smith in 1776.3,4,6
  • Institutional expression
    Mercantilist policy underpinned:
    • The Navigation Acts and the rise of British sea power
    • French Colbertist industrial policy (textiles, shipbuilding, arsenals)
    • Spanish and Portuguese bullion-based imperial systems
    • Chartered companies such as the British East India Company, which fused commerce, governance and military force under state-backed monopolies4
  • Transition to capitalism and free-trade thought
    Mercantilism created conditions for early capitalism by encouraging capital accumulation, long-distance trade networks and early industrial development.3 But it also prompted a sustained intellectual backlash, most famously from Adam Smith and later classical economists, who argued that:
    • Wealth is not finite and can be expanded through productivity and specialisation
    • Free trade and comparative advantage can benefit all countries, rather than being zero-sum2,4

Critiques and legacy

Classical and later economists criticised mercantilism for:

  • Confusing money (bullion) with real wealth (productive capacity, labour, technology)2
  • Undermining consumer welfare through high prices and limited choice caused by import restrictions and monopolies2,5
  • Fostering rent-seeking alliances between state and merchant elites at the expense of the general public4,6

Although mercantilism is usually considered a superseded doctrine, many contemporary protectionist or “neo-mercantilist” policies—such as aggressive export promotion, managed exchange rates, and strategic trade restrictions—are often described as mercantilist in spirit.2,5


The key strategy theorist: Adam Smith and his relationship to mercantilism

The most important strategic thinker associated with mercantilism—precisely because he dismantled it and re-framed strategy—is Adam Smith (1723–1790), the Scottish moral philosopher and political economist often called the founder of modern economics.2,3,4,6

Although Smith was not a mercantilist, his work provides the definitive critique and strategic re-orientation away from mercantilism, and he is the thinker who named and systematised the concept.

Smith’s engagement with mercantilism

  • In An Inquiry into the Nature and Causes of the Wealth of Nations, Smith repeatedly refers to the existing policy regime as the “mercantile system” and subjects it to a detailed historical and analytical critique.3,4,6
  • He argues that:
    • National wealth lies in the productive powers of labour and capital, not in the mere accumulation of gold and silver.2,6
    • Free exchange and competition, not monopolies and trade restraints, are the most reliable mechanisms for increasing overall prosperity.
    • International trade can be mutually beneficial, rejecting the zero-sum assumption central to mercantilism.2,4
  • Smith maintains that mercantilism benefits a narrow coalition of merchants and manufacturers, who use state power—tariffs, monopolies, trading charters—to secure rents at the expense of the wider population.4,6

In strategic terms, Smith redefined economic statecraft: instead of seeking power through hoarding bullion and favouring particular firms, he proposed that long-run national strength is best served by efficient markets, specialisation and limited government interference.

Biographical sketch and intellectual formation

  • Early life and education
    Adam Smith was born in Kirkcaldy, Scotland, in 1723.3 He studied at the University of Glasgow, where he encountered the Scottish Enlightenment’s emphasis on reason, moral philosophy and political economy, and later at Balliol College, Oxford.3,6
  • Academic and public roles
    He became Professor of Logic and later Moral Philosophy at the University of Glasgow, lecturing on ethics, jurisprudence, and political economy.6 His first major work, The Theory of Moral Sentiments, explored sympathy, virtue and the moral foundations of social order.
  • European travels and observation of mercantilist systems
    From 1764 to 1766, Smith travelled in France and Switzerland as tutor to the Duke of Buccleuch, meeting leading physiocrats and observing French administrative and mercantilist practices first-hand.6 These experiences sharpened his critique of existing systems and influenced his articulation of freer trade and limited government.
  • The Wealth of Nations and its impact
    Published in 1776, The Wealth of Nations systematically:
    • Dissects mercantilist doctrines and practices across Britain and Europe
    • Explains the division of labour, market coordination and the role of self-interest under appropriate institutional frameworks
    • Sets out a strategic blueprint for economic policy based on “natural liberty”, moderate taxation, minimal trade barriers and competitive markets2,4,6

Smith died in 1790 in Edinburgh, but his analysis of mercantilism reshaped both economic theory and state strategy. Governments gradually moved—unevenly and often incompletely—from mercantilist controls toward liberal, market-oriented trade regimes, making Smith the key intellectual bridge between mercantilist economic nationalism and modern strategic thinking about trade, growth and state power.

 

References

1. https://legal-resources.uslegalforms.com/m/mercantilism

2. https://corporatefinanceinstitute.com/resources/economics/mercantilism/

3. https://www.britannica.com/money/mercantilism

4. https://www.ebsco.com/research-starters/diplomacy-and-international-relations/mercantilism

5. https://www.economicshelp.org/blog/17553/trade/mercantilism-theory-and-examples/

6. https://www.econlib.org/library/Enc/Mercantilism.html

7. https://dictionary.cambridge.org/us/dictionary/english/mercantilism

 

"Mercantilism is an economic theory and policy from the 16th-18th centuries where governments heavily regulated trade to build national wealth and power by maximizing exports, minimizing imports, and accumulating precious metals like gold and silver." - Term: Mercantilism

read more
Quote: J.P. Morgan – On resources

Quote: J.P. Morgan – On resources

“We believe the clean technology transition is igniting a new supercycle in critical commodities, with natural resource companies emerging as winners.” – J.P. Morgan – On resources

When J.P. Morgan Asset Management framed the clean technology transition in these terms, it captured a profound shift underway at the intersection of climate policy, industrial strategy and global capital allocation.1,5 The quote stands at the heart of their analysis of how decarbonisation is reshaping demand for metals, minerals and energy, and why this is likely to support elevated commodity prices for years rather than months.1

The immediate context is the rapid acceleration of the energy transition. Governments have committed to net zero pathways, corporates face growing regulatory and investor pressure to decarbonise, and consumers are adopting electric vehicles and clean technologies at scale. J.P. Morgan argues that this is not merely an environmental story, but an economic retooling comparable in scale to previous industrial revolutions.1,4

Their research highlights two linked dynamics. First, the decarbonised economy is less fuel-intensive but far more materials-intensive. Replacing fossil fuel power with renewables requires vast quantities of copper, aluminium, nickel, lithium, cobalt, manganese and graphite to build solar and wind farms, grids and storage systems.1 Second, the speed of this transition matters as much as its direction. Even under conservative scenarios, J.P. Morgan estimates substantial increases in demand for critical minerals by 2030; under more ambitious net zero pathways, demand could rise by around 110% over that period, on top of the 50% increase already seen in the previous decade.1
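Read together, those two figures compound rather than add: indexing critical-mineral demand in the early 2010s at 1.0, the 50% rise of the past decade gives an index of 1.5, and a further increase of around 110% by 2030 implies roughly 1.5 × 2.1 ≈ 3.2, i.e. more than a tripling of demand over the two decades.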

In this framing, natural resource companies – particularly miners and producers of critical minerals – shift from being perceived purely as part of the old carbon-heavy economy to being central enablers of clean technologies. J.P. Morgan points out that while fossil fuel demand will decline over time, the scale of required investment in metals and minerals, as well as transmission infrastructure, effectively re-ranks many resource businesses as strategic assets for the low-carbon future.1 Valuations that once reflected cyclical, late-stage industries may therefore underestimate the structural demand embedded in net zero commitments.

The quote also reflects J.P. Morgan’s broader thinking on commodity and energy supercycles. Their research on energy markets describes a supercycle as a sustained period of elevated prices driven by structural forces that can last for a decade or more.3,4 In previous eras, those forces included post-war reconstruction and the rise of China as the world’s industrial powerhouse. Today, they see the combination of chronic underinvestment in supply, intensifying climate policy, and rising demand for both traditional and clean energy as setting the stage for a new, complex supercycle.2,3,4

Within the firm, analysts have argued that higher-for-longer interest rates raise the cost of debt and equity for energy producers, reinforcing supply discipline and pushing up the marginal cost of production.3 At the same time, the rapid build-out of renewables is constrained by supply chain, infrastructure and key materials bottlenecks, meaning that legacy fuels still play a significant role even as capital increasingly flows towards clean technologies.3 This dual dynamic – structural demand for critical minerals on the one hand and a constrained, more disciplined fossil fuel sector on the other – underpins the conviction that a supercycle is forming across parts of the commodity complex.

The idea of commodity supercycles predates the current climate transition and has been shaped by several generations of theorists and empirical researchers. In the mid-20th century, economists such as Raúl Prebisch and Hans Singer first highlighted the long-term terms-of-trade challenges faced by commodity exporters, noting that prices for primary products tended to fall relative to manufactured goods over time. Their work prompted an early focus on structural forces in commodity markets, although it emphasised long-run decline rather than extended booms.

Later, analysts began to examine multi-decade patterns of rising and falling prices. Structural models of commodity prices show that at major stages of economic development – such as the agricultural and industrial revolutions – commodity intensity tends to increase markedly, creating conditions for supercycles.4 These models distinguish between business cycles of a few years, investment cycles spanning roughly a decade, and longer supercycle components that can extend beyond 20 years.4 The supercycle lens gained prominence as researchers studied the commodity surge associated with China’s breakneck urbanisation and industrialisation from the late 1990s to the late 2000s.

That China-driven episode became the archetype of a modern commodity supercycle: a powerful, sustained demand shock focused on energy, metals and bulk materials, amplified by long supply lead times and capital expenditure cycles. J.P. Morgan and other institutions have documented how this supercycle drove a 12-year uptrend in prices, culminating before the global financial crisis, followed by a comparably long down-cycle as supply eventually caught up and Chinese growth shifted to a less resource-intensive model.2,4

Academic and market theorists have since refined the concept. They argue that supercycles emerge when three elements coincide. First, there must be a structural, synchronised increase in demand, often tied to a global development episode or technological shift. Second, supply in key commodities must be constrained by geology, capital discipline, regulation or long project lead times. Third, macro-financial conditions – including real interest rates, inflation expectations and currency trends – must align to support investment flows into real assets. The question for today’s transition is whether decarbonisation meets these criteria.

On the demand side, the clean tech revolution clearly resembles previous development stages in its resource intensity. J.P. Morgan notes that electric vehicles require significantly more minerals than internal combustion engine cars – roughly six times as much in aggregate when accounting for lithium, nickel, cobalt, manganese and graphite.1 Similarly, building solar and wind capacity, and the vast grid infrastructure to connect them, calls for much more copper and aluminium per unit of capacity than conventional power systems.1 The International Energy Agency’s projections, which J.P. Morgan draws on, indicate that even under modest policy assumptions, renewable electricity capacity is set to increase by around 50% by 2030, with more ambitious net zero scenarios implying far steeper growth.1

Supply, however, has been shaped by a decade of caution. After the last supercycle ended, many mining and energy companies cut back capital expenditure, streamlined balance sheets and prioritised shareholder returns. Regulatory processes for new mines lengthened, environmental permitting became more stringent, and social expectations around land use and community impacts increased. The result is that bringing new supplies of copper, nickel or lithium online can take many years and substantial capital, creating a lag between price signals and physical supply.

Theorists of the investment cycle – often identified with work on 8 to 20-year intermediate commodity cycles – argue that such periods of underinvestment sow the seeds for the next up-cycle.4 When demand resurges due to a structural driver, constrained supply leads to persistent price pressures until investment, technology and substitution can rebalance the market. In the case of the energy transition, the requirement for large amounts of specific minerals, combined with concentrated supply in a small number of countries, intensifies this effect and introduces geopolitical considerations.

Another important strand of thought concerns the evolution of energy systems themselves. Analysts focusing on energy supercycles emphasise that transitions historically unfold over multiple decades and rarely proceed smoothly.3,4 Even as clean energy capacity expands rapidly, global energy demand continues to grow, and existing systems must meet rising consumption while new infrastructure is built. J.P. Morgan’s energy research describes this as a multi-decade process of “generating and distributing the joules” required to both satisfy demand and progressively decarbonise.3 During this period, traditional energy sources often remain critical, creating complex price dynamics across oil, gas, coal and renewables-linked commodities.

Within this broader theoretical frame, the clean technology transition can be seen as a distinctive supercycle candidate. Unlike the China wave, which centred on industrialisation and urbanisation within one country, the net zero agenda is globally coordinated and policy-driven. It spans power generation, transport, buildings, industry and agriculture, and requires both new physical assets and digital infrastructure. Structural models referenced by J.P. Morgan note that such system-wide investment programmes have historically been associated with sustained periods of elevated commodity intensity.4

At the same time, there is active debate among economists and market strategists about the durability and breadth of any new supercycle. Some caution that efficiency gains, recycling and substitution could cap demand growth in certain minerals over time. Others point to innovation in battery chemistries, alternative materials and manufacturing methods that may reduce reliance on some critical inputs. Still others argue that policy uncertainty and potential fragmentation in global trade could disrupt smooth investment and demand trajectories. Theorists of supercycles emphasise that these are not immutable laws but emergent patterns that can be shaped by technology, politics and finance.

J.P. Morgan’s perspective in the quoted insight acknowledges these uncertainties while underscoring the asymmetry in the coming decade. Even in conservative scenarios, their work suggests that demand for critical minerals rises substantially relative to recent history.1 Under more ambitious climate policies, the increase is far greater, and tightness in markets such as copper, nickel, cobalt and lithium appears likely, especially towards the end of the 2020s.1 Against this backdrop, natural resource companies with high-quality assets, disciplined capital allocation and credible sustainability strategies are positioned not as relics of the past, but as essential partners in delivering the energy transition.

This reframing has important implications for investors and corporates alike. For investors, it suggests that the traditional division between “old” resource-heavy industries and “new” clean tech sectors is too simplistic. The hardware of decarbonisation – from EV batteries and charging networks to grid-scale storage, wind turbines and solar farms – depends on a complex upstream ecosystem of miners, processors and materials specialists. For corporates, it highlights the strategic premium on securing access to critical inputs, managing long-term supply contracts, and integrating sustainability into resource development.

The quote from J.P. Morgan thus sits at the confluence of three intellectual streams: long-run theories of commodity supercycles, modern analysis of energy transition dynamics, and evolving views of how natural resource businesses fit into a low-carbon world. It encapsulates the idea that the path to net zero is not dematerialised; instead, it is anchored in physical assets, industrial capabilities and supply chains that must be financed, built and operated over many years. For those able to navigate this terrain – and for the theorists tracing its contours – the clean technology transition is not only an environmental imperative but also a defining economic narrative of the coming decades.

References

1. https://am.jpmorgan.com/hk/en/asset-management/adv/insights/market-insights/market-bulletins/clean-energy-investment/

2. https://www.foxbusiness.com/markets/biden-climate-change-fight-commodities-supercycle

3. https://www.jpmorgan.com/insights/global-research/commodities/energy-supercycle

4. https://www.jpmcc-gcard.com/digest-uploads/2021-summer/Page%2074_79%20GCARD%20Summer%202021%20Jerrett%20042021.pdf

5. https://am.jpmorgan.com/us/en/asset-management/institutional/card-list-libraries/sustainable-insights-climate-tab-us/

6. https://www.jpmorgan.com/insights/global-research/outlook/market-outlook

7. https://www.bscapitalmarkets.com/hungry-for-commodities-ndash-is-a-new-commodity-super-cycle-here.html

"We believe the clean technology transition is igniting a new supercycle in critical commodities, with natural resource companies emerging as winners." - Quote: J.P. Morgan

read more
Term: Moltbot (formerly Clawdbot)

Term: Moltbot (formerly Clawdbot)

“Moltbot (formerly Clawdbot), a personal AI assistant, has gone viral within weeks of its launch, drawing thousands of users willing to tackle the technical setup required, even though it started as a scrappy personal project built by one developer for his own use.” – Moltbot (formerly Clawdbot)

Moltbot (formerly Clawdbot) is an open-source, self-hosted personal AI assistant that runs continuously on your own hardware (for example a Mac mini, Raspberry Pi, old laptop, or low-cost cloud server) and connects to everyday messaging channels such as WhatsApp, Telegram, iMessage, or similar chat apps so that you can talk to it as if it were a human teammate rather than a traditional app.

Instead of living purely in the cloud like many mainstream assistants, it is designed as “an AI that actually does things”: it can execute real commands on your machine, including managing your calendar and email, browsing the web, organizing local files, and running terminal commands or scripts under your control.

At its core, Moltbot is an agentic system: you choose and configure the underlying large language model (Anthropic Claude, OpenAI models, or local models), and Moltbot wraps that model with tools and permissions so that the AI can observe state on your computer, decide on a sequence of actions, and iteratively move from a current state toward a desired state, much closer to a junior digital employee than a simple chatbot.
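A minimal sketch of that observe → decide → act loop appears below. It is illustrative only, not Moltbot’s actual code: ALLOWED, observe_state and decide_action are invented names, and decide_action stubs in for a call to whichever model the owner has configured.

    # Illustrative sketch only (not Moltbot's actual code): the
    # observe -> decide -> act loop described above, with an
    # owner-defined permission boundary around what the agent may run.
    import subprocess

    ALLOWED = {"ls", "pwd", "whoami"}        # commands the owner permits

    def observe_state() -> str:
        """Gather the context the agent may see (here: working directory)."""
        return subprocess.run(["pwd"], capture_output=True, text=True).stdout.strip()

    def decide_action(goal: str, state: str) -> list[str]:
        """Stub policy. A real system would call the configured LLM
        (Claude, an OpenAI model, or a local model) and parse its reply
        into the next shell command that moves state toward the goal."""
        return ["ls"]

    def run_agent(goal: str, max_steps: int = 3) -> None:
        for step in range(max_steps):
            state = observe_state()
            command = decide_action(goal, state)
            if command[0] not in ALLOWED:    # guardrail: refuse anything else
                print(f"step {step}: blocked {command}")
                continue
            result = subprocess.run(command, capture_output=True, text=True)
            print(f"step {step}: ran {command[0]!r}\n{result.stdout}")

    run_agent("list the files in the current directory")

The explicit allow-list is the essential design point: the model proposes actions, but the owner's configuration decides what the agent may actually execute.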

This agentic design makes it valuable for complex, multi-step workflows such as triaging inbound email, preparing briefings from documents and web sources, or orchestrating routine maintenance tasks, with the human defining objectives and guardrails while the assistant executes within those constraints. The project emphasizes a privacy-first, owner-controlled architecture: your prompts, files, and system access stay on the machine you control, with only model calls leaving the device when you opt to use a remote API, a proposition that has resonated strongly with developers and power users wary of funneling sensitive workstreams through opaque cloud ecosystems.

Moltbot’s origin story reinforces this positioning: it began in late 2025 as a scrappy personal project by Austrian engineer Peter Steinberger, best known for founding PSPDFKit (later rebranded Nutrient), a PDF and document-processing SDK that grew into infrastructure used by hundreds of millions of end users before being acquired by Insight Partners.

After exiting PSPDFKit and stepping away from day-to-day coding, Steinberger described a period of creative exhaustion, only to be pulled back into building when the momentum around modern AI—and especially Anthropic’s Claude models—convinced him he could turn “Claude Code into his computer,” effectively treating an AI coding environment and agent as the primary interface to his machine.

The first iteration of his assistant, Clawdbot (with its mascot character “Clawd,” a playful space lobster inspired by the name Claude), was built astonishingly quickly—early prototypes reportedly took around an hour—and shared as a personal tool that showed how an AI, wired into real system capabilities, could meaningfully reduce friction in managing a digital life.

Once Steinberger released the project publicly, traction was explosive: the repository rapidly attracted tens of thousands of GitHub stars (with some reports noting 50,000–60,000 stars within weeks), a fast-growing contributor base, and an active community Discord, as developers experimented with running Moltbot as a 24/7 “full-time AI employee” on cheap hardware.

Media coverage highlighted its distinctive blend of autonomy and practicality—“Claude with hands” rather than just a conversational agent—and its appeal to technically sophisticated users willing to accept a more involved setup process in exchange for real, system-level leverage over their workflows.

A trademark dispute over the similarity between “Clawd” and Anthropic’s “Claude” forced a rebrand to Moltbot in early 2026, but the underlying architecture, community, and “lobster soul” of the project remained intact, underscoring that the real innovation lies in the pattern of a self-hosted, action-oriented personal AI rather than in the specific name.

From a strategic perspective, Moltbot represents an emergent archetype: the personal AI infrastructure or “personal operating system” where an individual deploys a modular, agentic system on their own stack, integrates it tightly with their tools, and iteratively composes new capabilities over time.

This pattern shifts AI from being a generic productivity overlay to becoming part of the user’s core execution engine: instead of repeatedly solving the same problem, owners encapsulate solutions into reusable modules or “skills” that their assistant can call, turning one-off hacks into compounding leverage across research, coding, administration, and communication workflows.

In practice, this means that Moltbot is less a single product than a reference architecture for what it looks like when an individual or small team runs a persistent, deeply customized AI agent alongside them as a standing capability, blurring the line between software tool, co-worker, and infrastructure.

Strategy theorist: Daniel Miessler and the personal AI infrastructure thesis

Among contemporary strategic thinkers, Daniel Miessler offers one of the most closely aligned conceptual frameworks for understanding what Moltbot represents, through his work on “Personal AI Infrastructure (PAI)” and modular, agentic systems such as his own AI stack named “Kai.”

Miessler approaches AI not as a single application but as an evolving strategic platform: he describes PAI as an architecture built around a simple yet powerful iterative algorithm—current state → desired state via verifiable iteration—implemented through a constellation of agents, tools, and skills that together execute work on the owner’s behalf.

In his model, effective personal AI systems follow a clear hierarchy—goal → code → command-line tools → prompts → agents—so that automation is applied where it creates lasting leverage rather than superficial convenience, a philosophy that mirrors the way Moltbot encourages users first to define what they want done, then wire the assistant into concrete system actions.

Miessler’s backstory helps explain why his thinking is so relevant to Moltbot’s emergence. He is a long-time security and technology practitioner and the author of a widely read blog and podcast focused on the intersection of infosec, technology, and human behavior, where he has chronicled the gradual shift from isolated tools toward integrated, self-improving AI ecosystems.

Over the past several years he has documented building Kai as a unified agentic system to augment his own research and content creation, distilling a set of design principles: treat skills as modular units of domain expertise, maintain a custom history system that captures everything the system learns, and design both permanent specialist agents and dynamic agents that can be composed on demand for specific tasks.
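As a hypothetical illustration of the ‘skills as modular units’ principle (the names below are invented for this sketch, not drawn from Kai or Moltbot), a skill registry can be as small as a decorated function table:

    # Hypothetical sketch: skills as named, reusable capabilities that an
    # assistant can look up and invoke when routing a parsed intent.
    from typing import Callable

    SKILLS: dict[str, Callable[..., str]] = {}
    TODO: list[str] = []

    def skill(name: str):
        """Register a function as a named skill."""
        def register(fn: Callable[..., str]) -> Callable[..., str]:
            SKILLS[name] = fn
            return fn
        return register

    @skill("summarise")
    def summarise(text: str) -> str:
        # A real skill would delegate to an LLM; this stub trims to one line.
        return text.strip().splitlines()[0][:120]

    @skill("todo")
    def todo(item: str) -> str:
        TODO.append(item)                    # persistent storage in real use
        return f"added: {item}"

    # The agent routes a parsed intent to the matching skill:
    print(SKILLS["todo"]("renew passport"))  # -> added: renew passport

Each solved problem becomes a new entry in the table, which is the mechanism behind the "compounding leverage" described above.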

These principles closely parallel what power users now attempt with Moltbot: they create persistent agents for recurring roles (research, coding, operations), attach them to specific tools and datasets, and then spin up temporary, task-specific flows as new problems arise, all running on personal or small-team infrastructure rather than within a vendor’s closed-box SaaS product.

The relationship between Miessler’s strategic ideas and Moltbot is best understood as conceptual rather than personal: Moltbot independently operationalizes many of the architectural patterns Miessler describes, turning the “personal AI infrastructure” thesis into a widely accessible, open-source implementation.

Both center on the same strategic shift: from AI as an occasional assistant that helps draft text, to AI as a continuously running, modular execution layer that acts across a user’s entire digital environment under explicit human objectives and constraints. In this sense, Miessler functions as a strategy theorist of the personal AI era, articulating the logic of agentic, owner-controlled systems, while Moltbot provides a vivid, viral case study of those ideas in practice—demonstrating how a single, well-designed personal AI stack can evolve from a private experiment into a community-driven platform that meaningfully changes how individuals and small firms execute work.

References

1. https://techcrunch.com/2026/01/27/everything-you-need-to-know-about-viral-personal-ai-assistant-clawdbot-now-moltbot/

2. https://metana.io/blog/what-is-moltbot-everything-you-need-to-know-in-2026/

3. https://dev.to/sivarampg/clawdbot-the-ai-assistant-thats-breaking-the-internet-1a47

4. https://www.macstories.net/stories/clawdbot-showed-me-what-the-future-of-personal-ai-assistants-looks-like/

5. https://www.youtube.com/watch?v=U8kXfk8en

"Moltbot (formerly Clawdbot), a personal AI assistant, has gone viral within weeks of its launch, drawing thousands of users willing to tackle the technical setup required, even though it started as a scrappy personal project built by one developer for his own use." - Term: Moltbot (formerly Clawdbot)

read more
Quote: Kristalina Georgieva – Managing Director, IMF

Quote: Kristalina Georgieva – Managing Director, IMF

“My main message here is the following: this is a tsunami hitting the labour market, and even in the best-prepared countries, I don’t think we are prepared enough.” – Kristalina Georgieva – Managing Director, IMF

Kristalina Georgieva’s invocation of a “tsunami” represents far more than a rhetorical flourish. Speaking at the World Economic Forum in Davos, the Managing Director of the International Monetary Fund articulated a diagnosis grounded in rigorous empirical analysis: artificial intelligence is not a speculative future threat but an immediate force already reshaping employment across every economy on earth. The metaphor itself carries profound significance: a tsunami denotes not merely disruption but overwhelming force, simultaneity, and inevitability. Critically, Georgieva’s acknowledgement that even “best-prepared countries” remain inadequately equipped reveals the unprecedented scale and speed of this transformation.

The Scope of AI’s Labour Market Impact

The International Monetary Fund’s assessment provides quantifiable dimensions to this disruption. Georgieva’s research indicates that 40 per cent of jobs globally will be impacted by artificial intelligence, with each affected role falling into one of three categories: enhancement (where AI augments human capability), elimination (where automation replaces human labour), or transformation (where roles are fundamentally altered). In advanced economies, this figure rises to 60 per cent, a staggering proportion that underscores the concentration of AI disruption in wealthy nations with greater technological infrastructure.

The distinction between jobs “touched” by AI and jobs eliminated proves crucial to understanding Georgieva’s analysis. Enhancement and transformation may appear preferable to outright elimination, yet they still demand worker adjustment, skill development, and potentially geographic mobility. A job that is transformed but offers no wage improvement may, as Georgieva has noted, be economically worse for the worker even if technically retained. This nuance separates her analysis from both techno-optimist narratives and catastrophic predictions.

Perhaps most concerning is the asymmetric impact across age cohorts and development levels. Georgieva has specifically warned that AI is “like a tsunami hitting the labour market” for younger people entering the workforce. Entry-level positions, historically the gateway through which workers develop skills, build experience, and establish career trajectories, are precisely those most vulnerable to automation. This threatens to disrupt the intergenerational transmission of economic opportunity that has underpinned social mobility for decades.

Theoretical Foundations: The Labour Economics Lineage

Georgieva’s analysis draws on decades of rigorous labour economics scholarship examining technological displacement and labour market adjustment. The intellectual lineage traces to David Autor, a leading MIT economist whose research has fundamentally shaped contemporary understanding of how technological change reshapes employment. Autor’s seminal work demonstrates that whilst technology eliminates routine tasks, particularly routine cognitive work, it simultaneously creates demand for new skills and complementary labour. However, this adjustment is neither automatic nor painless; workers displaced from routine cognitive tasks often face years of unemployment or underemployment before transitioning to new roles, if they transition at all.

Autor’s research, conducted over more than two decades, reveals a critical pattern: technological disruption creates a “hollowing out” of middle-skill employment. Routine cognitive tasks (data entry, basic accounting, straightforward analysis) have been progressively automated, whilst demand has polarised toward high-skill, high-wage positions and low-skill, low-wage service roles. This pattern, documented extensively in his work on computerisation and wage inequality, provides the empirical foundation for understanding why Georgieva emphasises that AI’s impact cannot be left to market forces alone.

Building on Autor’s framework, contemporary labour economists have extended the analysis to the speed and scale of technological transition. The consensus among leading researchers, including Daron Acemoglu of MIT, who has written extensively on the relationship between technology and inequality, is that rapid technological change without deliberate policy intervention tends to exacerbate inequality rather than distribute gains broadly. Acemoglu’s work emphasises that technology is not destiny; rather, the distributional outcomes of technological change depend fundamentally on institutional choices, regulatory frameworks, and investment in human capital.

Claudia Goldin, the 2023 Nobel Prize winner in Economics, has contributed essential research on the relationship between education, skills, and labour market outcomes across generations. Her historical analysis demonstrates that periods of rapid technological change have previously required corresponding investments in education and skills development. The gap between technological capability and educational preparedness has historically determined whether technological transitions benefit broad populations or concentrate gains among a narrow elite. Georgieva’s warning about inadequate preparedness echoes Goldin’s historical findings: without deliberate educational investment, technological transitions produce inequality.

The Productivity Paradox and Global Growth

Georgieva’s analysis situates AI within a broader economic context of disappointing productivity growth. Global growth has remained underwhelming in recent years, with productivity growth stagnant except in the United States. This stagnation represents a fundamental economic problem: without productivity growth, living standards stagnate, and governments face fiscal pressures as tax revenues fail to grow with economic output.

AI represents, in Georgieva’s assessment, the most potent force for reversing this trend. The IMF calculates that AI could boost global growth by between 0.1 and 0.8 per cent annually, a seemingly modest range that carries enormous consequences. A 0.8 per cent productivity gain would restore growth to pre-pandemic levels, fundamentally altering global economic trajectories. Yet this upside scenario depends entirely on successful labour market adjustment and equitable distribution of AI’s benefits. If AI generates productivity gains that concentrate wealth whilst displacing workers without adequate transition support, the aggregate growth figures mask profound distributional consequences.

This productivity question connects directly to Georgieva’s warning about preparedness. The IMF’s research indicates that one in ten jobs in advanced economies already requires substantially new skills, a share that will accelerate as AI deployment expands. Yet educational and training systems globally remain poorly aligned with AI-era skill demands. Northern European countries, particularly Finland, Sweden, and Denmark, have historically invested in continuous skills development and educational flexibility, positioning their workforces better for technological transition. Most other nations, by contrast, maintain educational systems designed for industrial-era employment patterns, where workers acquired specific skills early in their careers and applied them throughout working lives.

The Global Inequality Dimension

Perhaps the most consequential aspect of Georgieva’s analysis concerns the “accordion of opportunities”, her term for the diverging economic trajectories between advanced and developing economies. The 60 per cent figure for advanced economies versus 20-26 per cent for low-income countries reflects not merely different levels of AI adoption but fundamentally different economic capacities and institutional frameworks.

Advanced economies possess the infrastructure, capital, and institutional capacity to invest in AI whilst simultaneously managing labour market transition. They have educational systems capable of rapid adaptation, financial resources to fund reskilling programmes, and social safety nets to cushion displacement. Low-income countries risk being left behind: neither benefiting from AI’s productivity gains nor receiving the investment in skills and social protection that might cushion displacement. This dynamic threatens to widen the global inequality gap that has been a persistent feature of economic development since the industrial revolution.

Georgieva’s concern reflects research by economists including Branko Milanovic, who has documented how technological change interacts with global inequality. Milanovic’s work demonstrates that technological transitions have historically benefited capital owners and high-skill workers whilst displacing lower-skill workers. Without deliberate policy intervention (progressive taxation, investment in education, social protection), technological change tends to increase inequality both within and between nations.

The Skills Gap and Educational Mismatch

Georgieva’s analysis reveals a critical finding: some countries have more demand for new skills than supply, whilst others have more supply than demand. This mismatch is not random; it reflects decades of educational investment decisions. Northern European countries, which have invested continuously in education and skills development, face less severe skills gaps. Emerging market and developing economies, which have often prioritised other investments, face more significant misalignment between labour supply and employer demand.

The nature of required skills further complicates adjustment. Approximately half of the new skills demanded are information-technology related: programming, data analysis, and AI system management. The remaining skills span management, specific professional qualifications, and crucially, what Georgieva terms “learning how to learn”. This last category proves essential because, as she emphasises, policymakers cannot assume they know what the jobs of tomorrow will be. Rather than teaching particular knowledge, educational systems must cultivate adaptability and continuous learning capacity.

This pedagogical insight reflects research by Erik Brynjolfsson and Andrew McAfee, economists at MIT who have extensively studied the relationship between technological change and employment. Their research emphasises that in periods of rapid technological change, the ability to learn new skills matters more than possession of specific technical knowledge. Workers who can adapt, learn new tools, and transfer skills across domains fare better than those with deep expertise in narrow domains vulnerable to automation.

The Entry-Level Jobs Crisis

Georgieva’s specific warning about entry-level positions deserves particular attention. AI tends to eliminate entry-level functions: the positions through which younger workers historically entered labour markets, developed experience, and progressed to more senior roles. This threatens to disrupt a fundamental mechanism of economic mobility and skills development.

The concern extends beyond immediate employment. Entry-level positions serve crucial functions beyond income generation: they provide work experience, develop professional networks, teach workplace norms and expectations, and signal to employers that workers possess basic competence. When AI eliminates these positions, younger workers face not merely reduced job availability but disrupted pathways to career development. A 25-year-old unable to secure entry-level experience faces substantially different career prospects than one who progresses through conventional career ladders.

Yet Georgieva’s data also offers grounds for cautious optimism. Her research indicates that a 1 per cent increase in new skills leads to a 1.3 per cent increase in overall employment. This suggests that skill development creates positive spillovers: workers with new skills generate demand for complementary services and lower-skilled labour, expanding employment opportunities across the economy. The fear that AI will shrink total employment, whilst understandable, is not yet supported by empirical evidence. Rather, the challenge is reshaping employment: ensuring that displaced workers can transition to new roles and that new opportunities emerge in sufficient quantity and geographic proximity to displaced workers.

Geopolitical and Strategic Dimensions

Georgieva’s warning arrives amid broader economic fragmentation. Trade tensions, geopolitical competition, and the shift from a rules-based global economic order toward competing blocs create additional uncertainty. AI development is increasingly intertwined with strategic competition between major powers, particularly between the United States and China. This geopolitical dimension means that AI’s labour market impact cannot be separated from questions of technological sovereignty, supply chain resilience, and economic security.

The strategic competition over AI development creates perverse incentives. Nations may prioritise rapid AI deployment to maintain competitive advantage, even when labour market adjustment remains incomplete. This dynamic could accelerate job displacement without corresponding investment in worker transition support, exacerbating the preparedness gap Georgieva identifies.

Policy Imperatives and the Preparedness Challenge

Georgieva’s analysis suggests several imperatives for policymakers. First, labour market adjustment cannot be left to market forces alone; deliberate investment in education, training, and social protection is essential. Second, the distribution of AI’s benefits matters as much as aggregate productivity gains; without attention to equity, AI could deepen inequality within and between nations. Third, regulation and ethical frameworks must be established proactively rather than reactively, shaping AI development toward socially beneficial outcomes.

The preparedness challenge Georgieva emphasises reflects a fundamental asymmetry: AI development proceeds at technological pace, whilst educational systems, labour market institutions, and policy frameworks change at institutional pace. Educational systems require years to redesign curricula, train teachers, and produce graduates with new skills. Labour market institutions (unemployment insurance systems, pension arrangements, occupational licensing frameworks) were designed for industrial-era employment patterns and adapt slowly to new realities. Policy frameworks require legislative action, which moves even more slowly.

This temporal mismatch between technological change and institutional adaptation explains why even well-prepared countries remain inadequately equipped. Finland, Sweden, and Denmark, the countries Georgieva identifies as best positioned, have invested continuously in education and skills development, yet even these nations acknowledge that current preparedness remains insufficient for the scale and speed of AI-driven change.

The Broader Economic Context

Georgieva’s warning must be understood within the context of her broader economic outlook. The IMF has upgraded global growth projections to 3.3 per cent for 2026 and 3.2 per cent for 2027, yet these figures fall short of the pre-pandemic historical average of 3.8 per cent. The primary constraint on growth is productivity: the output generated per unit of labour and capital. Without productivity growth, economies cannot generate sufficient income growth to fund public services, support ageing populations, or improve living standards.

AI represents the most significant potential source of productivity growth available to policymakers. Yet realising this potential requires not merely deploying AI technology but managing the labour market transition it necessitates. Georgieva’s warning that even the best-prepared countries remain inadequately equipped reflects recognition that the challenge is not technological but institutional and political: whether societies can muster the will to invest in worker transition, education, and social protection whilst simultaneously deploying transformative technology.

The stakes could hardly be higher. Successful management of AI’s labour market impact could restore productivity growth, accelerate global development, and improve living standards broadly. Failure to manage this transition adequately could concentrate AI’s benefits among capital owners and high-skill workers whilst displacing millions of workers without adequate transition support, deepening inequality and potentially destabilising societies. Georgieva’s metaphor of a tsunami captures this duality: the same force that could lift all boats could also devastate those unprepared for its arrival.

References

1. https://globaladvisors.biz/2026/01/23/quote-kristalina-georgieva-managing-director-imf/

2. https://www.weforum.org/podcasts/meet-the-leader/episodes/ai-skills-global-economy-imf-kristalina-georgieva/

3. https://fortune.com/2026/01/23/imf-chief-warns-ai-tsunami-entry-level-jobs-gen-z-middle-class/

4. https://timesofindia.indiatimes.com/education/careers/news/ai-is-hitting-entry-level-jobs-like-a-tsunami-imf-chief-kristalina-georgieva-urges-students-to-prepare-for-change/articleshow/127381917.cms

"My main message here is the following: this is a tsunami hitting the labour market, and even in the best-prepared countries, I don't think we are prepared enough." - Quote: Kristalina Georgieva - Managing Director, IMF

read more
Term: Black Scholes

Term: Black Scholes

“The Black-Scholes model (or Black-Scholes-Merton model) is a fundamental mathematical formula that calculates the theoretical fair price of European-style options, using inputs like the underlying stock price, strike price, time to expiration, risk-free interest rate and volatility.” – Black Scholes

Black-Scholes Model (Black-Scholes-Merton Model)

The Black-Scholes model, also known as the Black-Scholes-Merton model, is a pioneering mathematical framework for pricing European-style options, which can only be exercised at expiration. It derives a theoretical fair value for call and put options by solving a parabolic partial differential equation—the Black-Scholes equation—under risk-neutral valuation, replacing the asset’s expected return with the risk-free rate to eliminate arbitrage opportunities.1,2,5

Core Formula and Inputs

The model prices a European call option C as:

C = S_0 N(d_1) - K e^{-rT} N(d_2)

where:

  • S_0: current price of the underlying asset (e.g., stock).3,7
  • K: strike price.5,7
  • T: time to expiration (in years).5,7
  • r: risk-free interest rate (constant).3,7
  • \sigma: volatility of the underlying asset’s returns (annualised).2,7
  • N(\cdot): cumulative distribution function of the standard normal distribution.
  • d_1 = \frac{\ln(S_0 / K) + (r + \sigma^2 / 2)T}{\sigma \sqrt{T}}
  • d_2 = d_1 - \sigma \sqrt{T}.1,2,5

An analogous formula prices the European put: P = K e^{-rT} N(-d_2) - S_0 N(-d_1). The model assumes a log-normal distribution of stock prices, meaning continuously compounded returns are normally distributed:

\ln S_T \sim N\left( \ln S_0 + \left( \mu - \frac{\sigma^2}{2} \right)T,\ \sigma^2 T \right)

where \mu is the expected return (replaced by r in risk-neutral pricing).2
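For readers who prefer code to notation, a minimal sketch of the call and put formulas above, using only Python’s standard library (the input values are illustrative):

    # Black-Scholes European call and put, standard library only.
    from math import log, sqrt, exp, erf

    def norm_cdf(x: float) -> float:
        """Standard normal CDF, N(x)."""
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def black_scholes(S0: float, K: float, T: float, r: float, sigma: float):
        """Return (call, put) prices for a European option, no dividends."""
        d1 = (log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        call = S0 * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
        put = K * exp(-r * T) * norm_cdf(-d2) - S0 * norm_cdf(-d1)
        return call, put

    # At-the-money one-year option, 5% risk-free rate, 20% volatility:
    call, put = black_scholes(S0=100, K=100, T=1.0, r=0.05, sigma=0.20)
    print(f"call = {call:.4f}, put = {put:.4f}")   # call ~ 10.45, put ~ 5.57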

Key Assumptions

The model rests on idealised conditions for mathematical tractability:

  • Efficient markets with no arbitrage and continuous trading.1,3
  • Log-normal asset returns (prices cannot go negative).2,3
  • Constant risk-free rate r and volatility \sigma.3
  • No dividends (original version; later adjusted by replacing S_0 with S_0 e^{-qT} for a continuous dividend yield q, or by subtracting the present value of discrete dividends).2,3
  • No transaction costs, taxes, or short-selling restrictions; frictionless trading in a risky asset (stock) and a riskless asset (bond).1,3
  • European exercise only (no early exercise).1,5

These enable delta hedging: dynamically adjusting a portfolio of the underlying asset and riskless bond to replicate the option’s payoff, making its price unique.1
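A short sketch makes the replication concrete: holding \Delta = N(d_1) shares, partly financed by borrowing, matches the call’s value at a given instant. The example values are the same illustrative inputs as above; in practice the position is rebalanced continuously as the price and time to expiry change.

    # Replicating-portfolio view of the same call: delta shares plus a bond
    # position (negative = borrowing) reproduce the option value.
    from math import log, sqrt, exp, erf

    def norm_cdf(x: float) -> float:
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def call_and_delta(S0, K, T, r, sigma):
        d1 = (log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        call = S0 * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
        return call, norm_cdf(d1)

    S0, K, T, r, sigma = 100.0, 100.0, 1.0, 0.05, 0.20
    call, delta = call_and_delta(S0, K, T, r, sigma)

    stock_position = delta * S0                 # value held in shares
    bond_position = call - stock_position       # equals -K*exp(-rT)*N(d2)
    print(f"delta = {delta:.4f}")               # ~ 0.6368 shares per call
    print(f"{stock_position:.2f} (stock) + {bond_position:.2f} (bond) = {call:.2f}")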

Extensions and Limitations

  • Dividends: Adjust ( S_0 ) to ( S_0 - PV(\text{dividends}) ) or use yield ( q ).2
  • American options: Use Black’s approximation, taking the maximum of European prices with/without dividends.2
  • Greeks: sensitivity measures such as delta (\Delta = N(d_1)), vega (sensitivity to volatility), and others, used for risk management; a numerical check appears below.4

Limitations include real-world violations of the assumptions (e.g., volatility smiles, price jumps, stochastic rates), but the model remains foundational for derivatives trading, valuation (e.g., 409A valuations for startups), and extensions such as binomial models.3,5,7
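
Delta and vega follow in closed form from differentiating the call price: \Delta = N(d_1) and vega = S_0 \varphi(d_1) \sqrt{T}, where ( \varphi ) is the standard normal density. The sketch below (illustrative names again) checks both against central finite differences:

```python
from math import log, sqrt, exp, erf, pi

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def norm_pdf(x):
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)

def bs_call(S0, K, T, r, sigma):
    d1 = (log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S0 * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def call_delta_vega(S0, K, T, r, sigma):
    """Closed-form delta (dC/dS0) and vega (dC/dsigma) of a European call."""
    d1 = (log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    return norm_cdf(d1), S0 * norm_pdf(d1) * sqrt(T)

S0, K, T, r, sigma, h = 100, 100, 1.0, 0.05, 0.20, 1e-5
delta, vega = call_delta_vega(S0, K, T, r, sigma)
fd_delta = (bs_call(S0 + h, K, T, r, sigma)
            - bs_call(S0 - h, K, T, r, sigma)) / (2 * h)
fd_vega = (bs_call(S0, K, T, r, sigma + h)
           - bs_call(S0, K, T, r, sigma - h)) / (2 * h)
print(round(delta, 4), round(fd_delta, 4))  # both ≈ 0.6368
print(round(vega, 2), round(fd_vega, 2))    # both ≈ 37.52
```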

Best Related Strategy Theorist: Myron Scholes

Myron Scholes (b. 1941) is the most directly linked theorist, co-creator of the model and Nobel laureate whose work revolutionised options trading and risk management strategies.

Biography

Born in Timmins, Ontario, Canada, Scholes earned a BA in economics from McMaster University (1962), followed by an MBA (1964) and a PhD (1969) from the University of Chicago, where he studied under Nobel laureates such as Merton Miller. He taught at MIT (1968–1972), where he worked with Fischer Black and Robert Merton, returned to the University of Chicago (1973–1981), and then moved to Stanford (1981–1996). In 1994, he co-founded Long-Term Capital Management (LTCM), a hedge fund using advanced models (including Black-Scholes variants) for fixed-income arbitrage, which amassed some $4.7 billion in capital before collapsing in 1998 under the weight of leverage and the Russian debt crisis – prompting a $3.6 billion recapitalisation organised by the Federal Reserve Bank of New York and funded by a consortium of banks. Scholes received the 1997 Nobel Prize in Economics (shared with Merton; Black had died in 1995), cementing his legacy. He later co-founded Platinum Grove Asset Management and philanthropically supports education.1

Relationship to the Term

Scholes co-authored the seminal 1973 paper “The Pricing of Options and Corporate Liabilities” with Fischer Black (1938–1995), an economist at Arthur D. Little and later Goldman Sachs, who conceived the core hedging insight but died before the Nobel was awarded. Robert C. Merton (b. 1944), whose own 1973 paper extended the framework to dividends and early exercise, formalised the continuous-time mathematics and earned co-credit. Their breakthrough – published just as exchange-traded options markets emerged (the CBOE opened in 1973) – enabled risk-neutral pricing and dynamic hedging, transforming derivatives from speculative to hedgeable instruments. Scholes’ strategic insight was that, under no-arbitrage, option prices depend on volatility rather than on the underlying’s expected return – powering strategies such as volatility trading, portfolio insurance, and structured products at banks and hedge funds. LTCM exemplified (and exposed the limits of) scaling these strategies via leverage.1,2,5

 

References

1. https://en.wikipedia.org/wiki/Black%E2%80%93Scholes_model

2. https://analystprep.com/study-notes/frm/part-1/valuation-and-risk-management/the-black-scholes-merton-model/

3. https://carta.com/learn/startups/equity-management/black-scholes-model/

4. https://www.columbia.edu/~mh2078/FoundationsFE/BlackScholes.pdf

5. https://www.sofi.com/learn/content/what-is-the-black-scholes-model/

6. https://gregorygundersen.com/blog/2024/09/28/black-scholes/

7. https://corporatefinanceinstitute.com/resources/derivatives/black-scholes-merton-model/

8. https://www.youtube.com/watch?v=EEM2YBzH-2U

9. https://www.khanacademy.org/economics-finance-domain/core-finance/derivative-securities/black-scholes/v/introduction-to-the-black-scholes-formula

 

"The Black-Scholes model (or Black-Scholes-Merton model) is a fundamental mathematical formula that calculates the theoretical fair price of European-style options, using inputs like the underlying stock price, strike price, time to expiration, risk-free interest rate and volatility." - Term: Black Scholes

read more
Quote: Reid Hoffman – LinkedIn co-founder

Quote: Reid Hoffman – LinkedIn co-founder

“The fastest way to change yourself is to hang out with people who are already the way you want to be.” – Reid Hoffman – LinkedIn co-founder

Reid Hoffman, best known as the co-founder of LinkedIn, has spent his career at the intersection of technology, networks and human potential. His work is grounded in a deceptively simple observation: who you spend time with fundamentally shapes who you become. This quote, popularised through his book The Startup of You: Adapt to the Future, Invest in Yourself, and Transform Your Career, distils a central theme in his thinking – that careers and identities are not fixed paths, but evolving ventures built in relationship with others.2

Reid Hoffman: from philosopher to founder

Born in 1967 in California, Reid Hoffman studied at Stanford University, focusing on symbolic systems, a multidisciplinary programme that combines computer science, linguistics, philosophy and cognitive psychology. He later pursued a master’s degree in philosophy at Oxford, with a particular interest in how individuals and societies create meaning and institutions. That philosophical grounding is visible in the way he talks about networks, trust and social systems, and in his tendency to move quickly from product features to questions of ethics and social impact.

Hoffman initially imagined becoming an academic, but he concluded that entrepreneurship offered a more direct way to shape the world. After early roles at Apple and Fujitsu, he founded his first company, SocialNet, in the late 1990s. It was an ambitious attempt at an online social platform before the wider market was ready. The experience taught him, by his own account, about timing, product-market fit and the brutal realities of execution. Those lessons would later inform his investment philosophy and his advice to founders.

He joined PayPal in its early days, becoming one of the core members of what later came to be known as the “PayPal Mafia”. As executive vice president responsible for business development, he helped navigate the company through growth, regulatory challenges and its eventual acquisition by eBay. This period sharpened his understanding of scaling networks, managing hypergrowth and building resilient organisational cultures. It also cemented his personal network with future founders of Tesla, SpaceX, Yelp, YouTube and Palantir, among others – a living demonstration of his own quote about proximity to people who embody the future you want to be part of.

In 2002, Hoffman co-founded LinkedIn, a professional networking platform that would come to dominate global online professional identity. The idea was radical at the time: that CVs could become living, networked artefacts; that careers could be navigated not just through internal company ladders but through visible webs of relationships; and that trust in business could be mediated through reputation signals and endorsements. LinkedIn grew steadily rather than explosively, reflecting Hoffman’s view that durable networks are built on cumulative trust, not just viral growth. The platform embodies the logic of his quote: it is structurally designed to make it easier to find and connect with people whose careers, skills and values you aspire to emulate.2

After LinkedIn scaled, Hoffman became a partner in 2009 at Greylock Partners, one of Silicon Valley’s most established venture capital firms; LinkedIn itself was later acquired by Microsoft in 2016. At Greylock he focused on early-stage technology companies, particularly those with strong network effects. He also launched the podcast Masters of Scale, where he interviews founders and leaders about how they built their organisations. The show reinforces the same message: personal and organisational change rarely happens in isolation; it occurs in communities, teams and ecosystems that stretch what people believe is possible.

Context of the quote: The Startup of You and career as a startup

The quote appears in the context of Hoffman’s book The Startup of You, co-authored with Ben Casnocha. In the book he argues that every individual, not just entrepreneurs, should think of themselves as the CEO of their own career, applying the mindset and tools of a startup to their working life. That means:

  • Adapting continuously to change rather than relying on a single, static career plan.
  • Investing in relationships as core professional assets, not peripheral extras.
  • Running small experiments to test new directions, skills and opportunities.
  • Building a “networked intelligence” – using the perspectives of others to navigate uncertainty.2

Within that framework, the quote about hanging out with people who are already the way you want to be is not a throwaway line. It is a strategy. Hoffman argues that exposing yourself to people who embody the skills, attitudes and standards you aspire to accelerates learning in several ways:

  • It normalises behaviours that previously felt aspirational or out of reach.
  • It provides a live reference model for decision-making, not just abstract advice.
  • It reinforces identity shifts – you start to see yourself as part of a community where certain behaviours are standard.
  • It opens doors to opportunities that flow along relationship lines.

In other words, the fastest way to change yourself is not merely to decide differently, but to embed yourself in different networks. This reflects Hoffman’s broader belief that networks are not just social graphs; they are engines for personal transformation.

The idea behind the quote: why people shape who we become

The deeper logic behind Hoffman’s quote sits at the convergence of several strands of research and theory about how human beings change:

  • We internalise norms and expectations from our groups and reference communities.
  • Identity is co-created in interaction with others, not just chosen privately.
  • Behaviours spread through networks via imitation, modelling and subtle social cues.
  • Access to information, opportunities and challenges is heavily mediated by relationships.

Hoffman’s framing is distinctly practical. Rather than focusing on abstract self-improvement, he suggests a leverage point: choose your environment and your companions with intent. If you want to become more entrepreneurial, spend time with founders. If you want to become more disciplined, work alongside people who treat discipline as a norm. If you want a more global perspective, immerse yourself in networks that think and operate globally.

This is not, in his usage, about social climbing or mimicry. It is about recognising that the most powerful behavioural technologies we have are other people, and aligning ourselves with those whose example pulls us towards our better, more ambitious selves.

Related thinkers: how theory supports Hoffman’s insight

Though Hoffman’s quote arises from his own experience in technology and entrepreneurship, the underlying idea is echoed across psychology, sociology, economics and network science. A number of leading theorists and researchers provide a rich backstory to the principle that the people around us are key drivers of personal change.

1. Social learning and modelling – Albert Bandura

Albert Bandura, one of the most influential psychologists of the 20th century, developed social learning theory and the concept of self-efficacy. He showed that people learn new behaviours by observing others, especially when those others are perceived as competent, similar or high-status. In his famous Bobo doll experiments, children who saw adults behaving aggressively towards a doll were more likely to imitate that behaviour.

Bandura argued that much of human learning is vicarious. We watch, internalise and then reproduce behaviours without needing to experience all the consequences ourselves. In that light, Hoffman’s advice to spend time with people who are already the way you want to be is essentially a prescription to leverage social modelling in your favour: choose role models and peer groups whose behaviour you want to absorb, because you will absorb it, consciously or not.

Bandura’s notion of self-efficacy – the belief in one’s capability to achieve goals – is also relevant. Seeing people like you succeed in domains you care about, or live in ways you aspire to, is one of the strongest sources of increased self-efficacy. It tells you, implicitly: this is possible, and it may be possible for you.

2. Social comparison and reference groups – Leon Festinger

Leon Festinger, a social psychologist, introduced social comparison theory in the 1950s. He proposed that individuals evaluate their own opinions and abilities by comparing themselves with others, particularly when objective standards are absent or ambiguous. Reference groups – the people we implicitly choose as benchmarks – shape our sense of what counts as success, effort or normality.

Hoffman’s quote can be read as deliberate reference-group engineering. If you choose a reference group made up of people who are already living or behaving in ways you admire, then your internal comparisons will continually pull you in that direction. Your standard of “normal” shifts upward. Over time, subtle adjustments in expectations, goals and self-assessment accumulate into substantive change.

3. Social networks and contagion – Nicholas Christakis and James Fowler

In their work on social contagion, Nicholas Christakis and James Fowler used large-scale longitudinal data to show that behaviours and states – from obesity to smoking, happiness and loneliness – can spread through social networks across multiple degrees of separation. If a friend of your friend becomes obese, for instance, your own likelihood of weight gain measurably changes, even if you never meet that intermediary person.

Their research suggests that networks do not merely reflect individual traits; they actively participate in shaping them. Norms, emotions and behaviours travel across the ties between people. In that sense, Hoffman’s counsel is aligned with a network-science perspective: by embedding yourself in networks populated by people with the traits you seek, you are positioning yourself in the path of favourable social contagion.

4. Social capital and weak ties – Mark Granovetter and Robert Putnam

Mark Granovetter’s seminal work on “The Strength of Weak Ties” showed that weak connections – acquaintances rather than close friends – are disproportionately important for accessing new information, opportunities and perspectives. They bridge different clusters within a network and act as conduits between otherwise separated groups.

Robert Putnam, in his work on social capital, differentiated between bonding capital (strong ties within a close group) and bridging capital (ties that connect us across different groups). Bridging capital is particularly valuable for innovation and change, because it exposes individuals to unfamiliar norms, skills and possibilities.

Hoffman’s own career illustrates these principles. His decision to join and later invest in networks of founders, technologists and global business leaders gave him an unusually rich set of weak and strong ties. When he advises people to spend time with those who already are how they want to be, he is, in effect, recommending the intentional cultivation of high-quality social capital in domains that matter for your growth.

5. Identity and habit change – James Clear, Charles Duhigg and behavioural science

Contemporary writers on habits and behaviour, such as James Clear and Charles Duhigg, synthesise research from psychology and behavioural economics to explain why environment and identity are so crucial in change. They emphasise that:

  • Habits are heavily shaped by context and cues.
  • We tend to adopt the habits of the groups we belong to.
  • Sustained change often follows a shift in identity – a new answer to the question “Who am I?”

Clear, for example, argues that “the people you surround yourself with are a reflection of who you are, or who you want to be” – an idea strongly resonant with Hoffman’s quote. Belonging to a group where a desired behaviour is normal lowers the friction of doing that behaviour yourself. You become the kind of person who does these things, because that is what “people like us” do.

Hoffman extends this line of thought into the professional realm: if you want to be the sort of person who takes intelligent risks, builds companies or adapts well to technological change, put yourself in communities where those behaviours are routine, admired and expected.

6. Deliberate practice and expert communities – K. Anders Ericsson

K. Anders Ericsson, known for his work on expert performance and deliberate practice, showed that world-class performance is rarely a product of raw talent alone. It depends on structured, effortful practice over time, typically supported by coaches, mentors and high-level peer groups. Elite performers tend to train in environments where excellence is normalised and where feedback is rapid, precise and demanding.

Viewed through this lens, Hoffman’s quote points to the importance of expert communities for accelerating growth. Being around people who are already operating at the level you aspire to does more than inspire; it enables a more rigorous, feedback-rich form of practice. It shrinks the gap between aspiration and reality by surrounding you with tangible exemplars and high expectations.

7. Entrepreneurial ecosystems – AnnaLee Saxenian and cluster theory

Research on regional innovation systems and entrepreneurial ecosystems, such as AnnaLee Saxenian’s work on Silicon Valley, illuminates how geographic and social concentration of talent drives innovation. Silicon Valley became uniquely productive not just because of capital or universities, but because it created dense networks of engineers, founders, investors and service providers who interacted constantly, shared norms and recycled experience across companies.

Hoffman’s career is intertwined with this ecosystem logic. His own network, forged through PayPal, LinkedIn and Greylock, reflects the power of clusters where people who already embody entrepreneurial behaviours interact daily. When he advises others to “hang out” with people who are already how they want to be, he is, in effect, recommending that individuals build their own personal micro-ecosystems of aspiration, whether or not they live in Silicon Valley.

The personal strategy embedded in the quote

Hoffman’s quote can serve as a practical checklist for personal and professional growth:

  • Clarify the change you want – skills, mindset, values, level of responsibility or kind of impact.
  • Identify living examples – people who already embody that change, ideally at different stages and in different contexts.
  • Shift your time allocation – invest more time in conversations, projects and communities with those people and less in environments that reinforce your old patterns.
  • Contribute, not just consume – add value to those relationships; become useful to the people you want to learn from.
  • Allow your identity to update – notice when you start to see yourself as part of a new tribe and let that guide your choices.

For Hoffman, the network is not a backdrop to personal change; it is the primary medium through which change happens. His own journey – from philosopher to entrepreneur, from founder to investor and public intellectual – unfolded through successive communities of people who were already operating in the ways he wanted to learn. The quote captures that lived experience in a single, portable principle: to change yourself at speed, change who you are with.

References

1. https://quotefancy.com/quote/1241059/Reid-Hoffman-The-fastest-way-to-change-yourself-is-to-hang-out-with-people-who-are

2. https://www.goodreads.com/quotes/11473244-the-fastest-way-to-change-yourself-is-to-hang-out

3. https://www.azquotes.com/quote/520979

“The fastest way to change yourself is to hang out with people who are already the way you want to be.” - Quote: Reid Hoffman

read more
Quote: Satya Nadella – CEO, Microsoft

Quote: Satya Nadella – CEO, Microsoft

“Just imagine if your firm is not able to embed the tacit knowledge of the firm in a set of weights in a model that you control… you’re leaking enterprise value to some model company somewhere.” – Satya Nadella – CEO, Microsoft

Satya Nadella’s assertion about enterprise sovereignty represents a fundamental reorientation in how organisations must think about artificial intelligence strategy. Speaking at the World Economic Forum in Davos in January 2026, the Microsoft CEO articulated a principle that challenges conventional wisdom about data protection and corporate control in the AI age. His argument centres on a deceptively simple but profound distinction: the location of data centres matters far less than the ability of a firm to encode its unique organisational knowledge into AI models it owns and controls.

The Context of Nadella’s Intervention

Nadella’s remarks emerged during a high-profile conversation with Laurence Fink, CEO of BlackRock, at the 56th Annual Meeting of the World Economic Forum. The discussion occurred against a backdrop of mounting concern about whether the artificial intelligence boom represents genuine technological transformation or speculative excess. Nadella framed the stakes explicitly: “For this not to be a bubble, by definition, it requires that the benefits of this are much more evenly spread.” The conversation with Fink, one of the world’s most influential voices on capital allocation and corporate governance, provided a platform for Nadella to articulate what he termed “the topic that’s least talked about, but I feel will be most talked about in this calendar year” – the question of firm sovereignty in an AI-driven economy.

The timing of this intervention proved significant. By early 2026, the initial euphoria surrounding large language models and generative AI had begun to encounter practical constraints. Organisations worldwide were grappling with the challenge of translating AI capabilities into measurable business outcomes. Nadella’s contribution shifted the conversation from infrastructure and model capability to something more fundamental: the strategic imperative of organisational control over AI systems that encode proprietary knowledge.

Understanding Tacit Knowledge and Enterprise Value

Central to Nadella’s argument is the concept of tacit knowledge – the accumulated, often uncodified understanding that emerges from how people work together within an organisation. This includes the informal processes, institutional memory, decision-making heuristics, and domain expertise that distinguish one firm from another. Nadella explained this concept by reference to what firms fundamentally do: “it’s all about the tacit knowledge we have by working as people in various departments and moving paper and information.”

The critical insight is that this tacit knowledge represents genuine competitive advantage. When a firm fails to embed this knowledge into AI models it controls, that advantage leaks away. Instead of strengthening the organisation’s position, the firm becomes dependent on external model providers – what Nadella termed “leaking enterprise value to some model company somewhere.” This dependency creates a structural vulnerability: the organisation’s competitive differentiation becomes hostage to the capabilities and pricing decisions of third-party AI vendors.

Nadella’s framing inverts the conventional hierarchy of concerns about AI governance. Policymakers and corporate security teams have traditionally prioritised data sovereignty – ensuring that sensitive information remains within national or corporate boundaries. Nadella argues this focus misses the more consequential question. The physical location of data centres, he stated bluntly, is “the least important thing.” What matters is whether the firm possesses the capability to translate its distinctive knowledge into proprietary AI models.

The Structural Transformation of Information Flow

Nadella’s argument gains force when situated within his broader analysis of how AI fundamentally restructures organisations. He described AI as creating “a complete inversion of how information is flowing in the organisation.” Traditional corporate hierarchies operate through vertical information flows: data and insights move upward through departments and specialisations, where senior leaders synthesise information and make decisions that cascade downward.

AI disrupts this architecture. When knowledge workers gain access to what Nadella calls “infinite minds” – the ability to tap into vast computational reasoning power – information flows become horizontal and distributed. This flattening of hierarchies creates both opportunity and risk. The opportunity lies in accelerated decision-making and the democratisation of analytical capability. The risk emerges when organisations fail to adapt their structures and processes to this new reality. More critically, if firms cannot embed their distinctive knowledge into models they control, they lose the ability to shape how this new information flow operates within their own context.

This structural transformation explains why Nadella emphasises what he calls “context engineering.” The intelligence layer of any AI system, he argues, “is only as good as the context you give it.” Organisations must learn to feed their proprietary knowledge, decision frameworks, and domain expertise into AI systems in ways that amplify rather than replace human judgment. This requires not merely deploying off-the-shelf models but developing the organisational capability to customise and control AI systems around their specific knowledge base.

The Sovereignty Framework: Beyond Geography

Nadella’s reconceptualisation of sovereignty represents a significant departure from how policymakers and corporate leaders have traditionally understood the term. Geopolitical sovereignty concerns have dominated discussions of AI governance – questions about where data is stored, which country’s regulations apply, and whether foreign entities can access sensitive information. These concerns remain legitimate, but Nadella argues they address a secondary question.

True sovereignty in the AI era, by his analysis, means the ability of a firm to encode its competitive knowledge into models it owns and controls. This requires three elements: first, the technical capability to train and fine-tune AI models on proprietary data; second, the organisational infrastructure to continuously update these models as the firm’s knowledge evolves; and third, the strategic discipline to resist the temptation to outsource these capabilities to external vendors.

The stakes of this sovereignty question extend beyond individual firms. Nadella frames it as a matter of enterprise value creation and preservation. When firms leak their tacit knowledge to external model providers, they simultaneously transfer the economic value that knowledge generates. Over time, this creates a structural advantage for the model companies and a corresponding disadvantage for the organisations that depend on them. The firm becomes a consumer of AI capability rather than a creator of competitive advantage through AI.

The Legitimacy Challenge and Social Permission

Nadella’s argument about enterprise sovereignty connects to a broader concern he articulated about AI’s long-term viability. He warned that “if we are not talking about health outcomes, education outcomes, public sector efficiency, private sector competitiveness, we will quickly lose the social permission to use scarce energy to generate tokens.” This framing introduces a crucial constraint: AI’s continued development and deployment depends on demonstrable benefits that extend beyond technology companies and their shareholders.

The question of firm sovereignty becomes relevant to this legitimacy challenge. If AI benefits concentrate among a small number of model providers whilst other organisations become dependent consumers, the technology risks losing public and political support. Conversely, if firms across the economy develop the capability to embed their knowledge into AI systems they control, the benefits of AI diffuse more broadly. This diffusion becomes the mechanism through which AI maintains its social licence to operate.

Nadella identified “skilling” as the limiting factor in this diffusion process. How broadly people across organisations develop capability in AI determines how quickly benefits spread. This connects directly to the sovereignty question: organisations that develop internal capability to control and customise AI systems create more opportunities for their workforce to develop AI skills. Those that outsource AI to external providers create fewer such opportunities.

Leading Theorists and Intellectual Foundations

Nadella’s argument draws on and extends several streams of organisational and economic theory. The concept of tacit knowledge itself originates in the work of Michael Polanyi, the Hungarian-British polymath who argued in his 1966 work The Tacit Dimension that “we know more than we can tell.” Polanyi distinguished between explicit knowledge – information that can be codified and transmitted – and tacit knowledge, which resides in practice, experience, and embodied understanding. This distinction proved foundational for subsequent research on organisational learning and competitive advantage.

Building on Polanyi’s framework, scholars including David Teece and Ikujiro Nonaka developed theories of how organisations create and leverage knowledge. Teece’s concept of “dynamic capabilities” – the ability of firms to integrate, build, and reconfigure internal and external competencies – directly parallels Nadella’s argument about embedding tacit knowledge into AI models. Nonaka’s research on knowledge creation in Japanese firms emphasised the importance of converting tacit knowledge into explicit forms that can be shared and leveraged across organisations. Nadella’s argument suggests that AI models represent a new mechanism for this conversion: translating tacit organisational knowledge into explicit algorithmic form.

The concept of “firm-specific assets” in strategic management theory also underpins Nadella’s reasoning. Scholars including Edith Penrose and later resource-based theorists argued that competitive advantage derives from assets and capabilities that are difficult to imitate and specific to particular organisations. Nadella extends this logic to the AI era: the ability to embed firm-specific knowledge into proprietary AI models becomes itself a firm-specific asset that generates competitive advantage.

More recently, scholars studying digital transformation and platform economics have grappled with questions of control and dependency. Researchers including Shoshana Zuboff have examined how digital platforms concentrate power and value by controlling the infrastructure through which information flows. Nadella’s argument about enterprise sovereignty can be read as a response to these concerns: organisations must develop the capability to control their own AI infrastructure rather than becoming dependent on platform providers.

The concept of “information asymmetry” from economics also illuminates Nadella’s argument. When firms outsource AI to external providers, they create information asymmetries: the model provider possesses detailed knowledge of how the firm’s data and knowledge are being processed, whilst the firm itself may lack transparency into the model’s decision-making processes. This asymmetry creates both security risks and strategic vulnerability.

Practical Implications and Organisational Change

Nadella’s argument carries significant implications for how organisations should approach AI strategy. Rather than viewing AI primarily as a technology to be purchased from external vendors, firms should conceptualise it as a capability to be developed internally. This requires investment in three areas: technical infrastructure for training and deploying models; talent acquisition and development in machine learning and data science; and organisational redesign to align workflows with how AI systems operate.

The last point proves particularly important. Nadella emphasised that “the mindset we as leaders should have is, we need to think about changing the work – the workflow – with the technology.” This represents a significant departure from how many organisations have approached technology adoption. Rather than fitting new technology into existing workflows, organisations must redesign workflows around how AI operates. This includes flattening information hierarchies, enabling distributed decision-making, and creating feedback loops through which AI systems continuously learn from organisational experience.

Nadella also introduced the concept of a “barbell adoption” strategy. Startups, he noted, adapt easily to AI because they lack legacy systems and established workflows. Large enterprises possess valuable assets and accumulated knowledge but face significant change management challenges. The barbell approach suggests that organisations should pursue both paths simultaneously: experimenting with new AI-native processes whilst carefully managing the transition of legacy systems.

The Measurement Challenge: Tokens per Dollar per Watt

Nadella introduced a novel metric for evaluating AI’s economic impact: “tokens per dollar per watt.” This metric captures the efficiency with which organisations can generate computational reasoning power relative to energy consumption and cost. The metric reflects Nadella’s argument that AI’s economic value depends not on the sophistication of models but on how efficiently organisations can deploy and utilise them.

This metric also connects to the sovereignty question. Organisations that control their own AI infrastructure can optimise this metric for their specific needs. Those dependent on external providers must accept the efficiency parameters those providers establish. Over time, this difference in optimisation capability compounds into significant competitive advantage.

The Broader Economic Transformation

Nadella situated his argument about enterprise sovereignty within a broader analysis of how AI transforms economic structure. He drew parallels to previous technological revolutions, particularly the personal computing era. Steve Jobs famously described the personal computer as a “bicycle for the mind” – a tool that amplified human capability. Bill Gates spoke of “information at your fingertips.” Nadella argues that AI makes these concepts “10x, 100x” more powerful.

However, this amplification of capability only benefits organisations that can control how it operates within their context. When firms outsource AI to external providers, they forfeit the ability to shape how this amplification occurs. They become consumers of capability rather than creators of competitive advantage.

Nadella’s vision of AI diffusion requires what he terms “ubiquitous grids of energy and tokens” – infrastructure that makes AI capability as universally available as electricity. However, this infrastructure alone proves insufficient. Organisations must also develop the internal capability to embed their knowledge into AI systems. Without this capability, even ubiquitous infrastructure benefits only those firms that control the models running on it.

Conclusion: Knowledge as the New Frontier

Nadella’s argument represents a significant reorientation in how organisations should think about AI strategy and competitive advantage. Rather than focusing on data location or infrastructure ownership, firms should prioritise their ability to embed proprietary knowledge into AI models they control. This shift reflects a deeper truth about how AI creates value: not through raw computational power or data volume, but through the ability to translate organisational knowledge into algorithmic form that amplifies human decision-making.

The sovereignty question Nadella articulated – whether firms can embed their tacit knowledge into models they control – will likely prove central to AI strategy for years to come. Organisations that develop this capability will preserve and enhance their competitive advantage. Those that outsource this capability to external providers risk gradually transferring their distinctive knowledge and the value it generates to those providers. In an era when AI increasingly mediates how organisations operate, the ability to control the models that encode organisational knowledge becomes itself a fundamental source of competitive advantage and strategic sovereignty.

References

1. https://www.teamday.ai/ai/satya-nadella-davos-ai-diffusion-larry-fink

2. https://dig.watch/event/world-economic-forum-2026-at-davos/conversation-with-satya-nadella-ceo-of-microsoft

3. https://www.youtube.com/watch?v=zyNWbPBkq6E

4. https://www.youtube.com/watch?v=1co3zt3-r7I

5. https://www.theregister.com/2026/01/21/nadella_ai_sovereignty_wef/

6. https://fortune.com/2026/01/20/is-ai-a-bubble-satya-nadella-microsoft-ceo-new-knowledge-worker-davos-fink/

"Just imagine if your firm is not able to embed the tacit knowledge of the firm in a set of weights in a model that you control... you're leaking enterprise value to some model company somewhere." - Quote: Satya Nadella - CEO, Microsoft

read more
Term: Jagged Edge of AI

Term: Jagged Edge of AI

“The “jagged edge of AI” refers to the inconsistent and uneven nature of current artificial intelligence, where models excel at some complex tasks (like writing code) but fail surprisingly at simpler ones, creating unpredictable performance gaps that require human oversight.” – Jagged Edge of AI

The “jagged edge” or “jagged frontier of AI” is the uneven boundary of current AI capability, where systems are superhuman at some tasks and surprisingly poor at others of seemingly similar difficulty, producing erratic performance that cannot yet replace human judgement and requires careful oversight.4,7

At this jagged edge, AI models can:

  • Excel at tasks like reading, coding, structured writing, or exam-style reasoning, often matching or exceeding expert-level performance.1,2,7
  • Fail unpredictably on tasks that appear simpler to humans, especially when they demand robust memory, context tracking, strict rule-following, or real-world common sense.1,2,4

This mismatch has several defining characteristics:

  • Jagged capability profile
    AI capability does not rise smoothly; instead, it forms a “wall with towers and recesses” – very strong in some directions (e.g. maths, classification, text generation), very weak in others (e.g. persistent memory, reliable adherence to constraints, nuanced social judgement).2,3,4
    Researchers label this pattern the “jagged technological frontier”: some tasks are easily done by AI, while others, though seemingly similar in difficulty, lie outside its capability.4,7

  • Sensitivity to small changes
    Performance can swing dramatically with minor changes in task phrasing, constraints, or context.4
    A model that handles one prompt flawlessly may fail when the instructions are reordered or slightly reworded, which makes behaviour hard to predict without systematic testing.

  • Bottlenecks and “reverse salients”
    The jagged shape creates bottlenecks: single weak spots (such as memory or long-horizon planning) that limit what AI can reliably automate, even when its raw intelligence looks impressive.2
    When labs solve one such bottleneck – a reverse salient – overall capability can suddenly lurch forward, reshaping the frontier while leaving new jagged edges elsewhere.2

  • Implications for work and organisation design
    Because capability is jagged, AI tends not to uniformly improve or replace jobs; instead it supercharges some tasks and underperforms on others, even within the same role.6,7
    Field experiments with consultants show large productivity and quality gains on tasks inside the frontier, but far less help – or even harm – on tasks outside it.7
    This means roles evolve towards managing and orchestrating AI across these edges: humans handle judgement, context, and exception cases, while AI accelerates pattern-heavy, structured work.2,4,6

  • Need for human oversight and “AI literacy”
    Because the frontier is jagged and shifting, users must continuously probe and map where AI is trustworthy and where it is brittle.4,8
    Effective use therefore requires AI literacy: knowing when to delegate, when to double-check, and how to structure workflows so that human review covers the weak edges while AI handles its “sweet spot” tasks.4,6,8

In strategic and governance terms, the jagged edge of AI is the moving boundary where:

  • AI is powerful enough to transform tasks and workflows,
  • but uneven and unpredictable enough that unqualified automation is risky,
  • creating a premium on hybrid human–AI systems, robust guardrails, and continuous testing.1,2,4

Strategy theorist: Ethan Mollick and the “Jagged Frontier”

The strategist most closely associated with the jagged edge/frontier of AI in practice and management thinking is Ethan Mollick, whose work has been pivotal in defining how organisations should navigate this uneven capability landscape.2,3,4,7

Relationship to the concept

  • The phrase “jagged technological frontier” originates in a field experiment by Dell’Acqua, Mollick, Ransbotham and colleagues, which analysed how generative AI affects the work of professional consultants.4,7
  • In that paper, they showed empirically that AI dramatically boosts performance on some realistic tasks while offering little benefit or even degrading performance on others, despite similar apparent difficulty – and they coined the term to capture that boundary.7
  • Mollick then popularised and extended the idea in widely read essays such as “Centaurs and Cyborgs on the Jagged Frontier” and later pieces on the shape of AI, jaggedness, bottlenecks, and salients, bringing the concept into mainstream management and strategy discourse.2,3,4

In his writing and teaching, Mollick uses the “jagged frontier” to:

  • Argue that jobs are not simply automated away; instead, they are recomposed into tasks that AI does, tasks that humans retain, and tasks where human–AI collaboration is superior.2,3
  • Introduce the metaphors of “centaurs” (humans and AI dividing tasks) and “cyborgs” (tightly integrated human–AI workflows) as strategies for operating on this frontier.3
  • Emphasise that the jagged shape creates both opportunities (rapid acceleration of some activities) and constraints (persistent need for human oversight and design), which leaders must explicitly map and manage.2,3,4

In this sense, Mollick functions as a strategy theorist of the jagged edge: he connects the underlying technical phenomenon (uneven capability) with organisational design, skills, and competitive advantage, offering a practical framework for firms deciding where and how to deploy AI.

Biography and relevance to AI strategy

  • Academic role
    Ethan Mollick is an Associate Professor of Management at the Wharton School of the University of Pennsylvania, specialising in entrepreneurship, innovation, and the impact of new technologies on work and organisations.7
    His early research focused on start-ups, crowdfunding and innovation processes, before shifting towards generative AI and its effects on knowledge work, where he now runs some of the most cited field experiments.

  • Research on AI and work
    Mollick has co-authored multiple studies examining how generative AI changes productivity, quality and inequality in real jobs.
    In the “Navigating the Jagged Technological Frontier” experiment, his team gave consultants realistic tasks to complete with and without AI and showed that:

  • For tasks inside AI’s frontier, consultants using AI were more productive (12.2% more tasks, 25.1% faster) and produced over 40% higher quality output.7

  • For tasks outside the frontier, the benefits were weaker or absent, highlighting the risk of over-reliance where AI is brittle.7
    This empirical demonstration is central to the modern understanding of the jagged edge as a strategic boundary rather than a purely technical curiosity.

  • Public intellectual and practitioner bridge
    Through his “One Useful Thing” publication and executive teaching, Mollick translates these findings into actionable guidance for leaders, including:

  • How to design workflows that align with AI’s jagged profile,

  • How to structure human–AI collaboration modes, and

  • How to build organisational capabilities (training, policies, experimentation) to keep pace as the frontier moves.2,3,4

  • Strategic perspective
    Mollick frames the jagged frontier as a continuously shifting strategic landscape:

  • Companies that map and exploit the protruding “towers” of AI strength can gain significant productivity and innovation advantages.

  • Those that ignore or misread the “recesses” – the weak edges – risk compliance failures, reputational harm, or operational fragility when they automate tasks that still require human judgement.2,4,7

For organisations grappling with the jagged edge of AI, Mollick’s work offers a coherent strategy lens: treat AI not as a monolithic capability but as a jagged, moving frontier; build hybrid systems that respect its limits; and invest in human skills and structures that can adapt as that edge advances and reshapes.

References

1. https://www.salesforce.com/blog/jagged-intelligence/

2. https://www.oneusefulthing.org/p/the-shape-of-ai-jaggedness-bottlenecks

3. https://www.oneusefulthing.org/p/centaurs-and-cyborgs-on-the-jagged

4. https://libguides.okanagan.bc.ca/c.php?g=743006&p=5383248

5. https://edrm.net/2024/10/navigating-the-ai-frontier-balancing-breakthroughs-and-blind-spots/

6. https://drphilippahardman.substack.com/p/defining-and-navigating-the-jagged

7. https://www.hbs.edu/faculty/Pages/item.aspx?num=64700

8. https://daedalusfutures.com/latest/f/life-at-the-jagged-edge-of-ai

"The "jagged edge of AI" refers to the inconsistent and uneven nature of current artificial intelligence, where models excel at some complex tasks (like writing code) but fail surprisingly at simpler ones, creating unpredictable performance gaps that require human oversight." - Term: Jagged Edge of AI

read more
Quote: Aesop – Greek fabulist

Quote: Aesop – Greek fabulist

“No act of kindness, no matter how small, is ever wasted.” – Aesop – Greek fabulist

The line is commonly attributed to Aesop, the semi-legendary Greek teller of fables whose brief animal stories have shaped moral thinking for over two millennia.1 The quotation crystallises a theme that runs through his work: that modest gestures, offered without calculation, can alter destinies – and that significance is rarely proportional to size.

The phrase is most often linked to one of his best-known fables, The Lion and the Mouse. In the story, a mighty lion captures a frightened mouse who has unwittingly disturbed his sleep. Amused by the tiny creature’s pleas for mercy, the lion chooses to spare her rather than eat her. Later, the lion himself is caught in a hunter’s net. Hearing his roars, the mouse remembers the earlier kindness, gnaws through the ropes, and frees him. The moral traditionally drawn has several layers: power should not despise weakness; help may come from unexpected quarters; and, above all, what looks like an insignificant kindness can return at a moment when everything depends upon it.1,3

Like many lines associated with Aesop, the wording we use today is a smooth, modern paraphrase rather than a verbatim translation from ancient Greek. The fables were transmitted orally and then written down, edited and re-edited over centuries, so exact phrasing shifts with language and era. What endures is the moral insight: that kindness carries a durable value of its own. Even when it is not repaid by the original recipient, it may ripple outward, change someone else’s course, or simply refine the character of the giver.

Aesop: life, legend and the making of a moralist

Almost everything about Aesop comes to us through a mixture of scattered references, later biographies and literary tradition. Ancient sources generally agree on a few core points. He is said to have lived in the 6th century BC, during the Archaic period of Greek history, and to have been a slave who became famous for his storytelling.3 Accounts place his origins variously in Phrygia, Thrace, Samos or Lydia. The historian Herodotus mentions Aesop in passing, and later authors, especially the semi-fictional Life of Aesop, embroider his biography with colourful episodes: his wit in outmanoeuvring masters, his travels to the courts of rulers, and his sharp, satirical use of fables to criticise hypocrisy and injustice.

The precise historical Aesop is hard to reconstruct; scholars widely believe that many of the fables now grouped under his name are the work of multiple anonymous fabulists, collected and attributed to him over time. Yet the persona of Aesop – a socially marginal figure whose insight cuts through pretension – is part of the power of the tradition. The idea that a man of low status, possibly foreign and enslaved, could offer enduring ethical guidance suited stories in which small animals correct great beasts and apparent weakness turns into moral authority.

Aesop’s fables are typically brief, often no more than a paragraph, and end with a concise moral: “slow and steady wins the race”, “look before you leap”, “better safe than sorry”. The dramatis personae are usually animals with human traits: proud lions, cunning foxes, diligent ants, foolish crows. The form allows hard truths about pride, greed, cruelty and folly to be voiced at a safe distance. A king may not welcome a direct rebuke, but he can chuckle at the misfortunes of a boastful crow and still absorb the point.

Within this tradition, the kindness of the lion in sparing the mouse is striking because it seems gratuitous. There is no expectation of return; indeed the lion laughs at the idea that such a puny creature could ever repay him. The reversal, when the mouse becomes the saviour, underlines a countercultural message in hierarchic societies: do not dismiss the small. Value may lie where power does not.

Kindness in the Aesopic imagination

The fable behind the quote is not unique in celebrating generosity, mercy and reciprocity. Across the Aesopic corpus, we find recurring patterns:

  • The reversal of expectations: small animals outwit or rescue large ones; the poor prove more hospitable than the rich; the apparently foolish reveal deeper wisdom. This elevates kindness from a sentimental theme to a quiet subversion of conventional rankings.
  • Pragmatic ethics: kindness is rarely abstract. It appears in concrete actions – sharing food, offering protection, warning of danger, forgiving offences – often framed as both morally right and, in the long run, prudent.
  • Moral memory: characters remember both kindnesses and wrongs. The mouse’s recollection of the lion’s mercy is central to the story’s impact. The fables assume that moral actions plant seeds in the social world, germinating later in unpredictable ways.

In this light, “No act of kindness, no matter how small, is ever wasted” becomes less a comforting phrase and more a concise reading of how a moral economy operates. Some acts of generosity will be repaid directly, others indirectly; some may shape the character of the giver rather than the fate of the receiver. But none is meaningless. Each contributes to a network of obligations, examples and stories that make cooperation and trust more thinkable.

From oral tale to ethical tradition

Aesop’s fables spread widely in the classical world, used by philosophers, rhetoricians and educators. By the time of the Roman Empire, authors such as Phaedrus and later Babrius were adapting and versifying the tales into Latin and Greek. In late antiquity and the Middle Ages, Christian writers folded them into sermons and exempla, appreciating their ability to cloak serious moral lessons in accessible narratives.

With the advent of print in Europe, Aesopic material was gathered into influential collections. Erasmus of Rotterdam recommended the fables for schooling, seeing in them a resource for both grammar and virtue. In the 17th century, the French poet Jean de La Fontaine reworked many Aesopic plots into elegant French verse, overlaying classical structures with the social observation and courtly wit of Louis XIV’s France. La Fontaine’s Fables became a key text in French culture, and their portrayals of vanity, power and injustice often retain the Aesopic device of seemingly small characters revealing truths ignored by the mighty.

In England, translators and moralists produced their own Aesop editions, frequently aimed at children. Here, the line between folklore and formal moral education blurred: nursery reading, religious instruction and civic virtues converged around stock morals like the one encapsulated in this quote on kindness. Over time, specific phrases, once simple glosses of a story’s lesson, took on an independent life as freestanding aphorisms.

Kindness, reciprocity and moral psychology

Aesop wrote long before the emergence of modern philosophy, social science or psychology, yet his intuition that small kind acts are not wasted finds echoes in later theoretical work on reciprocity, altruism and moral development. Several strands are particularly relevant.

Hobbes, Hume and the sentiment of benevolence

In the 17th century, Thomas Hobbes portrayed human beings as driven largely by self-interest and fear, needing strong authority to keep mutual aggression in check. On this view, kindness risks looking naive unless grounded in prudent calculation. However, even Hobbes conceded that humans seek reputation and that cooperative behaviour can be instrumentally rational; there is room here for the idea that acts of generosity, even small ones, help build the trust on which stable society depends.

By contrast, 18th-century moral sentimentalists, especially David Hume and Adam Smith, argued that we are naturally equipped with feelings of sympathy or fellow-feeling. Hume emphasised that we take pleasure in the happiness of others and discomfort in their suffering, while Smith’s notion of the “impartial spectator” highlights our capacity to imagine how our conduct appears to an objective observer. In such frameworks, a small kindness is far from wasted: it responds to and reinforces dispositions at the heart of our moral life. It also trains our own sensibilities, making us more attuned to the needs and perspectives of others.

Kant and the duty of beneficence

Immanuel Kant, writing in the late 18th century, approached morality through duty rather than sentiment. For him, there is a categorical imperative to treat others never merely as means but always also as ends. From this flows a duty of beneficence: to further the ends of others where one can. In Kantian terms, a small act of kindness honours the rational agency and dignity of the other person. Its worth does not depend on its consequences; the moral law is fulfilled even if the act appears to yield no tangible return. Here, too, “no act of kindness is wasted” because its ethical value lies in the alignment of the agent’s will with duty, not in the size of the outcome.

Utilitarianism and the calculus of small benefits

19th-century utilitarians such as Jeremy Bentham and John Stuart Mill evaluated actions in terms of their contributions to overall happiness. From a utilitarian angle, small acts of kindness matter precisely because happiness and suffering are often composed of many minor experiences. A kind word, a small favour or a moment of consideration can marginally improve someone’s well-being; aggregated across societies and over time, such increments are far from trivial.

Later utilitarians have explored how “low-cost, high-benefit” acts – such as sharing information, making introductions, or providing minor assistance – form the micro-foundations of cooperative systems. What looks, from the actor’s perspective, like an almost costless kindness can, in the right context, unlock disproportionately large positive effects.

Game theory, reciprocity and indirect returns

In the 20th century, game theory and the study of cooperation added formal structure to Aesop’s intuition. Work by theorists such as Robert Axelrod on repeated prisoner’s dilemma games showed that strategies embodying conditional cooperation – being kind or cooperative initially, and reciprocating others’ behaviour thereafter – can be highly effective in sustaining stable, mutually beneficial relationships.

Experiments and models of indirect reciprocity suggest that helping someone can improve one’s reputation with third parties, who may in turn be more inclined to help the original benefactor. In this sense, an apparently “wasted” act – say, assisting a stranger one will never meet again – can still generate returns via social perception and norms. The mouse’s rescue of the lion is a vivid narrative analogue of these abstract dynamics.

Evolutionary perspectives on altruism

Biologists and evolutionary theorists, including figures such as William Hamilton and later Robert Trivers, explored how cooperation and altruistic behaviour could evolve. Concepts like kin selection, reciprocal altruism and group selection provide mechanisms by which helping behaviour can be favoured by natural selection, especially when benefits to recipients (discounted by relatedness or likelihood of reciprocation) exceed costs to givers.

In this framework, small acts of kindness can be seen as low-cost signals of cooperative intent, fostering trust and potentially triggering reciprocal help. The lion and the mouse are, of course, anthropomorphic characters rather than biological models, but the story dramatises the pattern: generosity can turn seemingly insignificant parties into valuable allies.

Moral development and the education of kindness

In the 20th century, psychologists such as Jean Piaget and Lawrence Kohlberg studied how children’s moral reasoning matures, while later researchers in developmental psychology examined the roots of empathy and prosocial behaviour. Experiments with very young children show early forms of spontaneous helping and sharing; socialisation then shapes how these impulses are expressed and regulated.

Narratives like Aesop’s fables play an important role here. They provide simplified contexts in which consequences of actions are clear and moral stakes are stark. A child hearing the tale of the lion and the mouse is invited to see mercy not as weakness but as a risk that pays off, and to understand that size and status do not determine worth. The tag-line about no kindness being wasted condenses that lesson into a maxim that can be carried into everyday encounters.

Kindness in modern ethics and social thought

Recent moral philosophy has, in some strands, given renewed attention to the character of the moral agent rather than just rules or consequences. Virtue ethics, drawing on Aristotle and revived by thinkers such as Elizabeth Anscombe and Philippa Foot, considers traits like generosity, compassion and kindness as central excellences of personhood. On this view, individual kind acts are not isolated events but expressions of a stable disposition, cultivated through habit.

At the same time, care ethics, developed notably by Carol Gilligan and Nel Noddings, highlights the moral centrality of attending to particular others in their vulnerability and dependence. The spotlight falls on the often invisible labour of caring, listening and supporting – many of the very small acts that Aesop’s maxim invites us to see as meaningful.

Social theorists and economists examining social capital also pick up related themes. Trust, norms of reciprocity and informal networks of help underpin effective institutions and resilient communities. A culture in which people habitually extend small kindnesses – returning lost items, offering directions, making allowances for others’ mistakes – tends to enjoy higher levels of trust and lower transaction costs. From this macro perspective, each micro kindness again appears far from wasted; it marginally strengthens the fabric on which shared life depends.

A timeless lens on everyday conduct

Placed in its full context, Aesop’s line is more than a gentle encouragement. It is the distilled wisdom of a tradition that has observed, with unsentimental clarity, how societies actually work. Power fluctuates; fortunes reverse; the weak become strong and the strong, weak. Status blinds; pride isolates. In such a world, the small, uncalculated kindness – offered to those who cannot compel it and may never repay it – turns out to be a surprisingly robust investment.

The lion did not spare the mouse because a cost-benefit analysis predicted future rescue. He did so as an expression of what it means to be magnanimous. The mouse did not free the lion because she had signed a contract; she responded out of gratitude and loyalty. The story implies that such acts are never wasted because they participate in a deeper moral order, one in which character, memory and relationship weigh more than immediate gain.

Aesop’s genius lay in noticing that these truths can be taught most effectively not through abstract argument but through stories that lodge in the imagination. The aphorism “No act of kindness, no matter how small, is ever wasted” is a modern summation of that lesson – a reminder that, in a world often preoccupied with scale and spectacle, the quiet decision to be kind retains a significance that far exceeds its size.

"No act of kindness, no matter how small, is ever wasted." - Quote: Aesop

Quote: Kristalina Georgieva – Managing Director, IMF

“What is being eliminated [by AI] are often tasks done by new entries into the labor force – young people. Conversely, people with higher skills get better pay, spend more locally, and that ironically increases demand for low-skill jobs. This is bad news for recent … graduates.” – Kristalina Georgieva – Managing Director, IMF

Kristalina Georgieva, Managing Director of the International Monetary Fund (IMF), delivered this stark observation during a World Economic Forum Town Hall in Davos on 23 January 2026, amid discussions on ‘Dilemmas around Growth’. Speaking as AI’s rapid adoption accelerates, she highlighted a dual dynamic: the elimination of routine entry-level tasks traditionally filled by young graduates, coupled with productivity gains for higher-skilled workers that paradoxically boost demand for low-skill service roles.1,2,5

Context of the Quote

Georgieva’s remarks form part of the IMF’s latest research, which estimates that AI will impact 40% of global jobs and 60% in advanced economies through enhancement, elimination, or transformation.1,3 She described AI as a ‘tsunami hitting the labour market’, emphasising its immediate effects: one in ten jobs in advanced economies already demands new skills, often IT-related, creating wage pressures on the middle class while entry-level positions vanish.1,2,5 This ‘accordion of opportunities’ sees high-skill workers earning more, spending locally, and sustaining low-skill jobs like hospitality, but leaves recent graduates struggling to enter the workforce.5

Backstory on Kristalina Georgieva

Born in 1953 in Sofia, Bulgaria, Kristalina Georgieva rose from communist-era academia to global economic leadership. She earned a PhD in economic modelling and worked as an economist before Bulgaria’s democratic transition. Joining the World Bank in 1993, she rose through senior roles there before serving as Commissioner for International Cooperation, Humanitarian Aid, and Crisis Response at the European Commission (2010–2014). Appointed IMF Managing Director in 2019, she navigated the COVID-19 crisis, marshalling the Fund’s roughly USD 1 trillion lending capacity and advocating fiscal resilience. Georgieva’s tenure has focused on inequality, climate finance, and digital transformation, making her an authoritative voice on AI’s socioeconomic implications.3,5

Leading Theorists on AI and Labour Markets

The theoretical foundations of Georgieva’s analysis trace to pioneering economists dissecting technology’s job impacts.

  • David Autor: MIT economist whose ‘task-based framework’ (developed with Frank Levy and Richard Murnane) posits jobs as bundles of tasks, some automatable. Autor’s research shows AI targets routine cognitive tasks, polarising labour markets by hollowing out middle-skill roles while boosting high- and low-skill demand – a ‘polarisation’ mirroring Georgieva’s entry-level concerns.3
  • Erik Brynjolfsson and Andrew McAfee: MIT scholars and authors of The Second Machine Age, they argue AI enables ‘recombinant innovation’, automating cognitive work unlike prior mechanisation. Their work warns of ‘winner-takes-all’ dynamics exacerbating inequality without policy interventions like reskilling, aligning with IMF calls for adaptability training.3
  • Daron Acemoglu: MIT Nobel laureate (2024) who, with Pascual Restrepo, models automation’s ‘displacement vs productivity effects’ (see the schematic decomposition below). Their framework predicts AI displaces routine tasks but creates complementary roles; without incentives for human-AI collaboration, however, net job losses loom for low-skill youth.5
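
A schematic of the Acemoglu–Restrepo accounting, paraphrased from their task-based framework (the labels are descriptive shorthand, not a quoted equation):

\[ \Delta\,\text{labour demand} \;=\; \text{productivity effect} \;+\; \text{reinstatement effect} \;-\; \text{displacement effect} \]

Automation displaces labour from existing tasks, cheaper output raises demand (the productivity effect), and newly created tasks reinstate labour; the net sign for low-skill entrants depends on which effects policy and technology choices amplify.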

These theorists underpin IMF models, stressing that AI’s net employment effect hinges on policy: Northern Europe’s success in ‘learning how to learn’ exemplifies adaptive education over rigid skills training.5

Broader Implications

Georgieva urges proactive measures – reskilling youth, bolstering social safety nets, and regulating AI for inclusivity – to avert deepened inequality. Emerging markets face steeper skills gaps, risking divergence from advanced economies.1,3,5 Her personal embrace of tools like Microsoft Copilot underscores individual agency, yet systemic reform remains essential for equitable growth.

References

1. https://www.businesstoday.in/wef-2026/story/wef-summit-davos-2026-ai-jobs-workers-middle-class-labour-market-imf-kristalina-georgieva-512774-2026-01-24

2. https://fortune.com/2026/01/23/imf-chief-warns-ai-tsunami-entry-level-jobs-gen-z-middle-class/

3. https://globaladvisors.biz/2026/01/23/quote-kristalina-georgieva-managing-director-imf/

4. https://www.youtube.com/watch?v=4ANV7yuaTuA

5. https://www.weforum.org/podcasts/meet-the-leader/episodes/ai-skills-global-economy-imf-kristalina-georgieva/

"What is being eliminated [by AI] are often tasks done by new entries into the labor force - young people. Conversely, people with higher skills get better pay, spend more locally, and that ironically increases demand for low-skill jobs. This is bad news for recent ... graduates." - Quote: Kristalina Georgieva - Managing Director, IMF

Quote: Kristalina Georgieva – Managing Director, IMF

“Is the labour market ready [for AI]? The honest answer is no. Our study shows that already in advanced economies, one in ten jobs require new skills.” – Kristalina Georgieva – Managing Director, IMF

Kristalina Georgieva, Managing Director of the International Monetary Fund (IMF), delivered this stark assessment during a World Economic Forum town hall in Davos in January 2026, amid discussions on growth dilemmas in an AI-driven era1,3,4. Her words underscore the IMF’s latest research revealing that artificial intelligence is already reshaping labour markets, with immediate implications for employment and skills development worldwide5.

Who is Kristalina Georgieva?

Born in 1953 in Bulgaria, Kristalina Georgieva rose through the ranks of international finance with a career marked by economic expertise and crisis leadership. Holding a PhD in economic modelling, she began at the World Bank in 1993, rising through senior roles there. She served as European Commission Vice-President for Budget and Human Resources from 2014 to 2016, and as CEO of the World Bank Group from 2017. Appointed IMF Managing Director in 2019, she navigated the institution through the COVID-19 pandemic, the global inflation surge, and geopolitical shocks, advocating for fiscal resilience and inclusive growth3,5. Georgieva’s tenure has emphasised data-driven policy, particularly on technology’s societal impacts, making her a pivotal voice on AI’s economic ramifications1.

The Context of the Quote

Spoken at the WEF 2026 Town Hall on ‘Dilemmas around Growth’, the quote reflects IMF analysis showing AI affecting 40% of global jobs – enhanced, eliminated, or transformed – with 60% in advanced economies3,4. Georgieva highlighted that in advanced economies, one in ten jobs already requires new skills, often IT-related, creating supply shortages5. She likened AI’s impact on entry-level roles to a ‘tsunami’, warning of heightened risks for young workers and graduates as routine tasks vanish1,2. Despite productivity gains – potentially boosting global growth by 0.1% to 0.8% – uneven distribution exacerbates inequality, with low-income countries facing only 20–26% exposure yet lacking adaptation infrastructure4.

Leading Theorists on AI and Labour Markets

The IMF’s task-based framework draws from foundational work by economists like David Autor, who pioneered the ‘task approach’ in labour economics. Autor’s research, with co-authors like Frank Levy, posits that jobs consist of discrete tasks, some automatable (routine cognitive or manual) and others not (non-routine creative or interpersonal). AI, unlike prior automation targeting physical routines, encroaches on cognitive tasks, polarising labour markets by hollowing out middle-skill roles3.

Erik Brynjolfsson and Andrew McAfee, MIT scholars and authors of Race Against the Machine (2011) and The Second Machine Age (2014), argue AI heralds a ‘qualitative shift’, automating high-skill analytical work previously safe from machines. Their studies predict widened inequality without intervention, as gains accrue to capital owners and superstars while displacing median workers. Recent IMF-aligned research echoes this, noting AI’s dual potential for productivity surges and job reshaping3,5.

Other influential figures include Carl Benedikt Frey and Michael Osborne, whose 2013 Oxford study estimated 47% of US jobs at high automation risk, catalysing global discourse. Their work influenced IMF models, emphasising reskilling urgency3. Georgieva advocates policies inspired by these theorists: massive investment in adaptable skills – ‘learning how to learn’ – as seen in Nordic models such as Finland and Sweden, where flexibility buffers disruption5. Data show that a 1% rise in new skills correlates with 1.3% overall employment growth, countering fears of net job loss5.

Broader Implications

Georgieva’s warning arrives amid economic fragmentation – trade tensions, US–China rivalry, and sluggish productivity (global growth at 3.3% versus pre-pandemic 3.8%)5. AI could reverse this if harnessed equitably, but it demands proactive measures: reskilling for vulnerable youth, social protections, and regulatory frameworks to distribute gains. Advanced economies must lead, while supporting emerging markets, to avoid an ‘accordion of opportunities’ – expanding in the rich world, contracting elsewhere4. Her call to action is clear: policymakers and businesses must use IMF insights to prepare, not react.

References

1. https://fortune.com/2026/01/23/imf-chief-warns-ai-tsunami-entry-level-jobs-gen-z-middle-class/

2. https://timesofindia.indiatimes.com/education/careers/news/ai-is-hitting-entry-level-jobs-like-a-tsunami-imf-chief-kristalina-georgieva-urges-students-to-prepare-for-change/articleshow/127381917.cms

3. https://globaladvisors.biz/2026/01/23/quote-kristalina-georgieva-managing-director-imf/

4. https://www.weforum.org/stories/2026/01/live-from-davos-2026-what-to-know-on-day-2/

5. https://www.weforum.org/podcasts/meet-the-leader/episodes/ai-skills-global-economy-imf-kristalina-georgieva/

"Is the labour market ready [for AI] ? The honest answer is no. Our study shows that already in advanced economies, one in ten jobs require new skills." - Quote: Kristalina Georgieva - Managing Director, IMF

Term: Vibe coding

“Vibe coding is an AI-driven software development approach where users describe desired app features in natural language (the “vibe”), and a Large Language Model (LLM) generates the functional code.” – Vibe coding

Vibe coding is an AI-assisted software development technique where developers describe project goals or features in natural language prompts to a large language model (LLM), which generates the source code; the developer then evaluates functionality through testing and iteration without reviewing, editing, or fully understanding the code itself.1,2

This approach, distinct from traditional AI pair programming or code assistants, emphasises “giving in to the vibes” by focusing on outcomes, rapid prototyping, and conversational refinement rather than code structure or correctness.1,3 Developers act as prompters, guides, testers, and refiners, shifting from manual implementation to high-level direction – e.g., instructing an LLM to “create a user login form” for instant code generation.2 It operates at two levels: a tight iterative loop for refining specific code via feedback, and a broader lifecycle from concept to deployed app.2
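
A minimal sketch of that tight inner loop in Python. The generate_code stub, prompts, and behaviour check are illustrative assumptions standing in for a real LLM API call, not code from the cited tools:

```python
# Vibe-coding inner loop: prompt -> generate -> run/test -> refine via
# natural-language feedback. The developer inspects behaviour, not code.

def generate_code(prompt: str) -> str:
    """Placeholder for an LLM call; a real version would query a model API."""
    return "def add(a, b):\n    return a + b\n"

def behaves_correctly(source: str) -> bool:
    """Verify by execution results only, without reading the code."""
    namespace = {}
    try:
        exec(source, namespace)          # run the generated code
        return namespace["add"](2, 3) == 5
    except Exception:
        return False

def vibe_code(goal: str, max_iterations: int = 5) -> str:
    """Iterate conversationally until the generated code does what was asked."""
    prompt = goal
    for _ in range(max_iterations):
        source = generate_code(prompt)
        if behaves_correctly(source):
            return source
        # Refinement happens in natural language, not by editing the code.
        prompt = goal + " The previous attempt failed its behaviour check; fix it."
    raise RuntimeError("No working code within the iteration budget.")

print(vibe_code("Write a Python function add(a, b) that returns the sum."))
```

The broader lifecycle wraps this same loop around larger units: feature-level prompts, end-to-end tests, and deployment rather than a single function.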

Key characteristics include:

  • Natural language as input: Builds on the idea that “the hottest new programming language is English,” bypassing syntax knowledge.1
  • No code inspection: Accepting AI output blindly, verified only by execution results – programmer Simon Willison notes that reviewing code makes it mere “LLM as typing assistant,” not true vibe coding.1
  • Applications: Ideal for prototypes (e.g., Andrej Karpathy’s MenuGen), proofs-of-concept, experimentation, and automating repetitive tasks; less suited for production without added review.1,3
  • Comparisons to traditional coding:
Feature            | Traditional Programming           | Vibe Coding
Code creation      | Manual, line-by-line              | AI-generated from prompts2
Developer role     | Architect, implementer, debugger  | Prompter, tester, refiner2,3
Expertise required | High (languages, syntax)          | Lower (functional goals)2
Speed              | Slower, methodical                | Faster for prototypes2
Error handling     | Manual debugging                  | Conversational feedback2
Maintainability    | Relies on skill and practices     | Depends on AI quality and testing2,3

Tools supporting vibe coding include Google AI Studio for prompt-to-app prototyping, Firebase Studio for app blueprints, Gemini Code Assist for IDE integration, GitHub Copilot, and Microsoft offerings – lowering barriers for non-experts while boosting professional developers’ efficiency.2,3 Critics highlight risks such as unmaintainable code or security issues in production, stressing the need for human oversight.3,6

Best related strategy theorist: Andrej Karpathy. Karpathy coined “vibe coding” in February 2025 via a widely shared post, describing it as “fully giv[ing] in to the vibes, embrac[ing] exponentials, and forget[ting] that the code even exists” – exemplified by his MenuGen prototype, built entirely via LLM prompts with natural language feedback.1 This built on his 2023 claim that English supplants programming languages due to LLM prowess.1

Born in 1986 in Bratislava, Czechoslovakia (now Slovakia), Karpathy earned a BSc in Computer Science and Physics from the University of Toronto (2009), an MSc from the University of British Columbia (2011), and a PhD in Computer Science from Stanford University (2015) under computer-vision pioneer Fei-Fei Li. His doctoral work connected convolutional and recurrent neural networks for image captioning, and his char-RNN side project popularised RNN-based text generation.1 He was a founding research scientist at OpenAI (2015–2017), then Director of AI at Tesla (2017–2022), leading Autopilot vision and scaling ConvNets to massive video data for self-driving cars. In 2023 he returned to OpenAI to work on GPT training infrastructure before departing in 2024 to launch Eureka Labs (AI education) and advise AI firms.1,3 Karpathy’s career embodies scaling AI paradigms, making vibe coding a logical evolution: from low-level models to natural language commanding complex software, democratising development while embracing AI’s “exponentials.”1,2,3

References

1. https://en.wikipedia.org/wiki/Vibe_coding

2. https://cloud.google.com/discover/what-is-vibe-coding

3. https://news.microsoft.com/source/features/ai/vibe-coding-and-other-ways-ai-is-changing-who-can-build-apps-and-how/

4. https://www.ibm.com/think/topics/vibe-coding

5. https://aistudio.google.com/vibe-code

6. https://stackoverflow.blog/2026/01/02/a-new-worst-coder-has-entered-the-chat-vibe-coding-without-code-knowledge/

7. https://uxplanet.org/i-tested-5-ai-coding-tools-so-you-dont-have-to-b229d4b1a324

"Vibe coding is an AI-driven software development approach where users describe desired app features in natural language (the "vibe"), and a Large Language Model (LLM) generates the functional code." - Term: Vibe coding

Quote: Gen-Z disillusion – Fortune Magazine

“One-third of Gen Z says they believe they’ll never be able to pay off their debt, and more than half believe they’ll never own a home.” – Fortune Magazine – January 2026

The observation that “one-third of Gen Z says they believe they’ll never be able to pay off their debt, and more than half believe they’ll never own a home” captures a profound shift in how an entire generation understands risk, reward and the social contract. It is not only a comment on personal pessimism; it is a snapshot of structural change in advanced economies, where the pathways that once linked effort to security appear increasingly broken for those now entering adulthood.

Generation Z – typically defined as those born from the late 1990s to the early 2010s – came of age in the long shadow of the global financial crisis, the COVID-19 pandemic and a decade of asset inflation that dramatically enriched existing owners while raising the drawbridge on those outside. Many of them watched parents endure job losses, foreclosures or long periods of stagnant pay. They arrived in the labour market as housing costs, tuition, healthcare and everyday essentials outpaced wages, and as credit – rather than income growth – became the central tool for keeping households afloat.

That background matters because Gen Z’s sense that debt is unpayable and homeownership unreachable is not an abstract mood; it is grounded in observable economic patterns. Surveys in the mid-2020s repeatedly show that young adults are more indebted relative to their earnings than earlier cohorts at the same age, more reliant on high-interest credit and less likely to hold the one form of debt – a mortgage – that traditionally builds long-term wealth. Analyses of US data, for instance, note that Gen Z consumers are far more likely to hold revolving credit card balances and personal loans while having low rates of homeownership, reflecting the way credit is being used to manage short-term survival rather than long-term investment.1,2

Homeownership sits at the centre of this story. In the post-war era, policy, tax systems and urban planning in many advanced economies were implicitly designed around the assumption that each generation would become homeowners earlier and at higher rates than the last. Property was framed as both a consumption good and the primary asset for retirement security. For Gen Z, that script has inverted. Young adults face a combination of historically high house-price-to-income ratios, elevated mortgage rates and large required deposits in many cities. Surveys in the mid-2020s suggest that a majority of Gen Z respondents doubt they will ever own a home, even though most say they would like to.3,5

The result is a psychological stance some commentators have dubbed “disillusionomics”: a way of thinking about money shaped by the belief that traditional milestones – owning a house, clearing debts, building a pension – are not realistically attainable on normal wages within a normal working life. Instead, Gen Z is often reported to be experimenting with alternative strategies: multiple income streams, gig work, high-risk investing, side hustles and very short planning horizons. They are also more willing to challenge inherited financial norms, questioning whether homeownership is still a rational goal or whether the effort required is simply disproportionate to the reward in a world of fragile employment and volatile asset prices.3

Debt sits at the heart of this generational fracture. Earlier generations embraced borrowing as a bridge to a better future: a mortgage bought a home that would appreciate; student loans were justified as an investment in higher lifetime earnings; consumer credit smoothed consumption as incomes rose. In contrast, many Gen Z borrowers experience debt as a trap rather than a lever. Credit is often used to cover basic living costs, not discretionary luxuries, and is serviced at interest rates that erode the possibility of saving a deposit or building a cushion. Surveys show worrying levels of delinquency among younger borrowers, as well as a growing share who say they carry more in debt than they hold in savings or liquid assets.1,3,5

This collision of rising costs, precarious work and expensive credit shapes their expectations. If monthly obligations already absorb most of their paycheque, it is rational for a young adult to conclude that a future mortgage deposit – perhaps requiring many tens of thousands in savings – is beyond reach. If they also doubt that their real wages will grow significantly over time, the idea that they can ever fully clear their debts appears equally implausible. The quote, therefore, is less about personal fatalism and more about a generation doing the arithmetic and finding that the numbers do not add up.
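
A deliberately rough version of that arithmetic in Python; every figure is an illustrative assumption, not data from the cited surveys:

```python
# Illustrative deposit arithmetic for a young worker whose obligations
# absorb most of each paycheque. All numbers are assumptions.

monthly_take_home = 2800.0    # net monthly income
essentials = 1900.0           # rent, food, transport, utilities
debt_service = 450.0          # student loan plus credit card minimums

deposit_target = 40_000.0     # e.g. a 10% deposit on a 400k home

monthly_surplus = monthly_take_home - essentials - debt_service
years_to_deposit = deposit_target / (monthly_surplus * 12)

print(f"Monthly surplus: {monthly_surplus:.0f}")             # 450
print(f"Years to save the deposit: {years_to_deposit:.1f}")  # ~7.4
```

And this ignores interest costs, rent increases and house-price growth over those years; if prices outpace the surplus, the target itself recedes, which is precisely the inference many survey respondents appear to be drawing.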

The changing idea of the “American Dream” and homeownership

The anxiety around homeownership for Gen Z must be understood within the longer history of the so-called American Dream and its equivalents in other advanced economies. After the Second World War, policy makers in the United States, the United Kingdom and elsewhere promoted mass homeownership as the cornerstone of middle-class life. Subsidised mortgages, tax advantages and large-scale suburban building programmes all worked to make ownership more accessible to industrial-era workers. Over time, however, the financialisation of housing turned property itself into a speculative asset class.

From the 1980s onward, deregulated credit markets, falling interest rates and global capital flows drove house prices up faster than incomes in many urban centres. Those who already owned property enjoyed capital gains; those who did not saw the ladder pulled further away. This dynamic was magnified after the global financial crisis, when ultra-low interest rates and quantitative easing again raised asset prices, particularly in housing, while wage growth remained weak. By the time Gen Z reached adulthood, the entry cost into the housing market in many cities had become historically high relative to average earnings.

Young people, facing this landscape, must decide whether to accept decades of austerity to chase a property purchase that may still be vulnerable to shocks, or to reorient their aspirations away from ownership entirely. Some surveys highlight that younger homeowners place a stronger emphasis on achieving “debt freedom” than on expanding into larger or more prestigious homes, reflecting a reframing of success away from accumulation and towards autonomy from lenders.8

Why this generation feels different: work, wages and volatility

Beyond housing, Gen Z’s relationship with work and income is shaped by instability. Many entered the labour market during or just after the pandemic, facing hiring freezes, remote onboarding and an unstable demand for entry-level roles. The rise of gig platforms and freelance contracting has created new opportunities but also shifted more risk onto individuals, who often lack benefits, sick pay or predictable hours.

At the same time, inflation spikes in the early 2020s eroded real wages just as rents and mortgage costs jumped. Younger workers, who tend to have lower starting salaries and fewer buffers, were hit hardest. Statistical analyses show that workers under 35 often earn substantially less than older cohorts, yet face similar or higher living costs, leaving less margin to repay debts or accumulate savings.4

Cultural responses to this squeeze have been widely reported. Concepts such as “doom spending” – the choice to spend now because the future feels too uncertain to save for – and “quiet quitting” reflect broader scepticism about delayed gratification in a system perceived as unbalanced. When asset ownership feels unattainable, the moral weight once attached to thrift and long-term planning is diminished. The logic becomes: if the system will not reward sacrifice with security, why sacrifice at all?

Intellectual backstory: debt, generations and the social contract

The sentiment encapsulated in the quote sits at the intersection of several major strands of thought: the political economy of debt, the sociology of generations and the analysis of asset-based inequality. Over the past half-century, a number of theorists and researchers have helped explain why a generation could come to view debt as permanent and ownership as implausible.

Debt as power and promise: from Graeber to financialisation theorists

The late anthropologist David Graeber drew attention to the deep moral and political dimensions of debt. In his influential work on the history of obligations, he argued that debt has long functioned as a tool of social control as much as an economic instrument. Modern consumer and student debt, in this view, discipline individuals to accept certain forms of work and life choices in order to stay current on their obligations. For Gen Z, whose entry to adulthood is defined by outstanding balances rather than accruing assets, this disciplinary function is acute: the need to service debt can constrain job mobility, entrepreneurship and even decisions about family formation.

Financialisation scholars have added a structural dimension to this story. Writers on the shift from an industrial to a financialised economy emphasise how profits have increasingly flowed from financial activities – including household lending – rather than from wages and production. Households, especially younger ones, are encouraged to become both borrowers and investors, taking on leverage to access housing and education while being exposed to financial market volatility. For those who arrive late to this system, such as Gen Z, the upside of asset inflation is limited, while the downside of inflated entry prices and heavy leverage is very real.

Intergenerational inequality: Piketty, asset owners and the young

Economist Thomas Piketty and colleagues have reshaped contemporary debate about inequality by documenting the long-run tendency for returns to capital to exceed the growth rate of the economy. When this happens, those who already own capital – including housing – see their wealth grow faster than overall output, while those reliant on labour income fall behind. For a generation born after asset prices had already been inflated by decades of such dynamics, the chances of catching up through work alone are slim.
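
Piketty’s mechanism is compactly captured by his well-known inequality,

\[ r > g \]

where r is the average annual return on capital and g the growth rate of the economy: whenever it holds, wealth that is already invested compounds faster than the labour incomes from which newcomers must save.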

Subsequent research has shown that wealth gaps between younger and older cohorts have widened significantly. The median young adult today typically holds far less net wealth than their counterparts did several decades ago at the same age, after adjusting for inflation. Much of this gap reflects property ownership. Older cohorts often bought homes when price-to-income ratios were lower and subsequently enjoyed price appreciation; younger ones confront elevated prices and must borrow more heavily relative to their incomes or exit the market altogether.

Life-courses under strain: sociologists of youth and precarity

Sociologists of youth and work have long studied how the transition from education to stable employment has become more fractured. Concepts such as “precarity” capture the rise of insecure work, fragmented careers and uncertain futures. Instead of a linear progression from school to a permanent job, to homeownership and family, many young adults experience looping paths, temporary contracts, and frequent sector changes.

This has consequences for how they view long-term commitments like mortgages. If you cannot be confident about your income five years from now, committing to a 25- or 30-year debt contract looks very different than it did to earlier generations with stronger expectations of continuous employment. The growing sense that careers are unpredictable weakens the appeal of the traditional wealth-building strategy of buying and paying down a fixed home loan.

Behavioural economists and the psychology of “no way out”

Behavioural economics adds another layer by explaining how people respond to overwhelming burdens. Research on present bias and scarcity suggests that when individuals feel permanently behind, they focus on immediate needs and relief rather than distant goals. In the context of Gen Z, heavy debt loads and high living costs leave little mental or financial bandwidth for retirement saving or long-term home purchase planning.
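
Present bias is commonly formalised with the quasi-hyperbolic (beta–delta) discounting model, a standard behavioural-economics device rather than something drawn from the sources above:

\[ U_t = u_t + \beta \sum_{k=1}^{\infty} \delta^{k}\, u_{t+k}, \qquad 0 < \beta < 1 \]

Because β uniformly downweights every future period relative to today, distant goals such as a deposit or a pension are systematically deprioritised whenever present demands are pressing.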

Studies on financial behaviour among younger consumers highlight a mix of caution and risk-taking: caution in the form of distrust of institutions, and risk-taking in high-volatility investments or speculative trades seen as the only routes to rapid advancement. The belief that conventional paths will not deliver – reflected in the quote – encourages some to either disengage from traditional financial planning altogether or to seek extraordinary upside via risky strategies. Both responses reinforce volatility in outcomes.

Housing economists and the end of automatic homeownership

Housing economists have been documenting for years how structural shifts have eroded the assumption that each cohort will own at higher rates than the previous one. They note the interaction of land-use restrictions, sluggish building in high-demand areas, demographic pressures, foreign capital inflows and speculative investment in property as an asset class. These factors collectively push up prices relative to local wages, particularly in attractive urban centres where many skilled jobs for Gen Z are located.

Work in this field has also shown how credit interacts with housing supply. Easier access to mortgage credit does not simply make housing more affordable; when supply is constrained, it can bid up prices instead. Over several decades, expanded mortgage availability without commensurate increases in housing stock contributed to higher entry prices. Younger buyers respond by either taking on higher loan-to-income mortgages – increasing their vulnerability to shocks – or by staying renters indefinitely.

Debt, education and the reshaping of risk

Education finance forms another crucial piece of the backstory. For many Gen Z students, higher education came with substantial tuition fees funded by loans, premised on the belief that a degree would reliably yield higher earnings. However, the combination of crowded graduate labour markets, credential inflation and regional mismatches in job opportunities has undermined this assumption for some. Where graduate salaries do not rise enough to offset accumulated student loans and elevated living costs, the debt-to-income ratio for young workers remains stubbornly high.

At the same time, financial literacy and debt management skills have often lagged behind the proliferation of credit products. Commentators on personal finance education emphasise that many young borrowers are entering adulthood with a complex mix of obligations – student loans, credit cards, personal loans, occasionally buy-now-pay-later schemes – without systematic guidance on prioritising repayments, negotiating with creditors or avoiding high-fee products. As a result, even manageable debts can feel unmanageable, particularly when combined with opaque interest structures and penalty regimes.6

The perception that one-third of a generation expects never to clear their debts is therefore not only about absolute amounts; it is also about opacity and a lack of confidence in the rules of the game. If you cannot easily see a route from your current obligations to a debt-free future, and if you suspect that the system is stacked to prolong your indebtedness, the rational inference is that the debt may be permanent.

Cultural narratives: from aspirational to sceptical

Popular culture both reflects and reinforces these economic realities. Earlier eras were filled with images of young couples buying their first home, steadily trading up and arriving at retirement with a paid-off property and supplementary savings. In contrast, much of Gen Z’s media diet is saturated with stories of financial burnout, housing insecurity, and the impossibility of catching up. Social media amplifies both extremes: displays of ostentatious success, often driven by non-traditional careers, alongside viral testimonies of people unable to afford basic milestones despite working full-time.

This creates a powerful comparative lens. Seeing peers accumulate substantial wealth through entrepreneurship, speculation or influencer careers, while conventional earners struggle to pay rent, can further erode belief in the legitimacy of traditional employment-based advancement. The sense of being “duped” – urged to follow rules that no longer yield the promised results – feeds into the disillusioned stance that the quote expresses.

Rethinking security in a leveraged world

Ultimately, the belief among many Gen Z individuals that they will never pay off their debts or own a home is not merely a reflection of generational temperament; it is a rational assessment of the constraints imposed by an economic model heavily reliant on household leverage and inflated asset values. It highlights fault lines in the implicit bargain that underpinned late 20th-century prosperity: study hard, work hard, borrow prudently, and the system will deliver stability and ownership.

As that bargain has frayed, a generation has been forced to reassess what financial security looks like when ownership is delayed, partial or permanently out of reach. Whether the response takes the form of quiet resignation, radical experimentation with new income models, political mobilisation, or a reimagining of what constitutes a good life without property, the starting point remains the stark insight captured in the quote: when debt feels endless and homeownership implausible, the entire architecture of aspiration must be rebuilt from the ground up.

References

1. https://www.realtor.com/advice/finance/gen-z-homebuying-credit-card-debt/

2. https://www.experian.com/blogs/ask-experian/average-american-debt-by-age/

3. https://fortune.com/2025/12/12/gen-z-giving-up-on-owning-home-spending-more-saving-less-working-less-risky-investments/

4. https://carry.com/learn/average-debt-by-age

5. https://www.scotsmanguide.com/news/two-thirds-of-gen-z-think-they-will-never-own-a-home/

6. https://enrich.org/debt-isnt-the-problem-lack-of-debt-management-education-is/

7. https://www.housingwire.com/articles/the-debt-crisis-among-younger-americans-how-it-is-shaping-homeownership-and-what-lenders-can-do/

8. https://www.kin.com/blog/american-dream-and-homeownership-survey-2025/

9. https://nationalmortgageprofessional.com/news/financial-hurdles-dominate-millennial-homebuying-plans

10. https://www.mpamag.com/us/mortgage-industry/industry-trends/millennial-buyers-weigh-desperate-bids-against-deep-financial-strain-in-2026/561152

"One-third of Gen Z says they believe they'll never be able to pay off their debt, and more than half believe they’ll never own a home." - Quote: Fortune Magazine
