Global Advisors

Our selection of the top business news sources on the web.

AM edition. Issue number 1211


Quote: Fei-Fei Li - Godmother of AI

"In the AI age, trust cannot be outsourced to machines. Trust is fundamentally human. It's at the individual level, community level, and societal level." - Fei-Fei Li - Godmother of AI

The Quote and Its Significance

This statement encapsulates a profound philosophical stance on artificial intelligence that challenges the prevailing techno-optimism of our era. Rather than viewing AI as a solution to human problems-including the problem of trust itself-Fei-Fei Li argues for the irreducible human dimension of trust. In an age where algorithms increasingly mediate our decisions, relationships, and institutions, her words serve as a clarion call: trust remains fundamentally a human endeavour, one that cannot be delegated to machines, regardless of their sophistication.

Who Is Fei-Fei Li?

Fei-Fei Li stands as one of the most influential voices in artificial intelligence research and ethics today. As co-director of Stanford's Institute for Human-Centered Artificial Intelligence (HAI), founded in 2019, she has dedicated her career to ensuring that AI development serves humanity rather than diminishes it. Her influence extends far beyond academia: she was appointed to the United Nations Scientific Advisory Board, named one of TIME's 100 Most Influential People in AI, and has held leadership roles at Google Cloud and Twitter.

Li's most celebrated contribution to AI research is the creation of ImageNet, a monumental dataset that catalysed the deep learning revolution. This achievement alone would secure her place in technological history, yet her impact extends into the ethical and philosophical dimensions of AI development. In 2024, she co-founded World Labs, an AI startup focused on spatial intelligence systems designed to augment human capability-a venture that raised $230 million and exemplifies her commitment to innovation grounded in ethical principles.

Beyond her technical credentials, Li co-founded AI4ALL, a non-profit organisation dedicated to promoting diversity and inclusion in the AI sector, reflecting her conviction that AI's future must be shaped by diverse voices and perspectives.

The Core Philosophy: Human-Centred AI

Li's assertion about trust emerges from a broader philosophical framework that she terms human-centred artificial intelligence. This approach fundamentally rejects the notion that machines should replace human judgment, particularly in domains where human dignity, autonomy, and values are at stake.

In her public statements, Li has articulated a concern that resonates throughout her work: the language we use about AI shapes how we develop and deploy it. She has expressed deep discomfort with the word "replace" when discussing AI's relationship to human labour and capability. Instead, she advocates for framing AI as augmenting or enhancing human abilities rather than supplanting them. This linguistic shift reflects a philosophical commitment: AI should amplify human creativity and ingenuity, not reduce humans to mere task-performers.

Her reasoning is both biological and existential. As she has explained, humans are slower runners, weaker lifters, and less capable calculators than machines-yet "we are so much more than those narrow tasks." To allow AI to define human value solely through metrics of speed, strength, or computational power is to fundamentally misunderstand what makes us human. Dignity, creativity, moral judgment, and relational capacity cannot be outsourced to algorithms.

The Trust Question in Context

Li's statement about trust addresses a critical vulnerability in contemporary society. As AI systems increasingly mediate consequential decisions-from healthcare diagnoses to criminal sentencing, from hiring decisions to financial lending-society faces a temptation to treat these systems as neutral arbiters. The appeal is understandable: machines do not harbour conscious bias, do not tire, and can process vast datasets instantaneously.

Yet Li's insight cuts to the heart of a fundamental misconception. Trust, in her formulation, is not merely a technical problem to be solved through better algorithms or more transparent systems. Trust is a social and moral phenomenon that exists at three irreducible levels:

  • Individual level: The personal relationships and judgments we make about whether to rely on another person or institution
  • Community level: The shared norms and reciprocal commitments that bind groups together
  • Societal level: The institutional frameworks and collective agreements that enable large-scale cooperation

Each of these levels involves human agency, accountability, and the capacity to be wronged. A machine cannot be held morally responsible; a human can. A machine cannot understand the context of a community's values; a human can. A machine cannot participate in the democratic deliberation necessary to shape societal institutions; a human must.

Leading Theorists and Related Intellectual Traditions

Li's thinking draws upon and contributes to several important intellectual traditions in philosophy, ethics, and social theory:

Human Dignity and Kantian Ethics

At the philosophical foundation of Li's work lies a commitment to human dignity-the idea that humans possess intrinsic worth that cannot be reduced to instrumental value. This echoes Immanuel Kant's categorical imperative: humans must never be treated merely as means to an end, but always also as ends in themselves. When AI systems reduce human workers to optimisable tasks, or when algorithmic systems treat individuals as data points rather than moral agents, they violate this fundamental principle. Li's insistence that "if AI applications take away that sense of dignity, there's something wrong" is fundamentally Kantian in its ethical architecture.

Feminist Technology Studies and Care Ethics

Li's emphasis on relationships, context, and the irreducibility of human judgment aligns with feminist critiques of technology that emphasise care, interdependence, and situated knowledge. Scholars in this tradition-including Donna Haraway, Lucy Suchman, and Safiya Noble-have long argued that technology is never neutral and that the pretence of objectivity often masks particular power relations. Li's work similarly insists that AI development must be grounded in explicit values and ethical commitments rather than presented as value-neutral problem-solving.

Social Epistemology and Trust

The philosophical study of trust has been enriched in recent decades by work in social epistemology-the study of how knowledge is produced and validated collectively. Philosophers such as Miranda Fricker have examined how trust is distributed unequally across society, and how epistemic injustice occurs when certain voices are systematically discredited. Li's emphasis on trust at the community and societal levels reflects this sophisticated understanding: trust is not a technical property but a social achievement that depends on fair representation, accountability, and recognition of diverse forms of knowledge.

The Ethics of Artificial Intelligence

Li contributes to and helps shape the emerging field of AI ethics, which includes thinkers such as Stuart Russell, Timnit Gebru, and Kate Crawford. These scholars have collectively argued that AI development cannot be separated from questions of power, justice, and human flourishing. Russell's work on value alignment-ensuring that AI systems pursue goals aligned with human values-provides a technical framework for the philosophical commitments Li articulates. Gebru and Crawford's work on data justice and algorithmic bias demonstrates how AI systems can perpetuate and amplify existing inequalities, reinforcing Li's conviction that human oversight and ethical deliberation remain essential.

The Philosophy of Technology

Li's thinking also engages with classical philosophy of technology, particularly the work of thinkers like Don Ihde and Peter-Paul Verbeek, who have argued that technologies are never mere tools but rather reshape human practices, relationships, and possibilities. The question is not whether AI will change society-it will-but whether that change will be guided by human values or will instead impose its own logic upon us. Li's advocacy for light-handed, informed regulation rather than heavy-handed top-down control reflects a nuanced understanding that technology development requires active human governance, not passive acceptance.

The Broader Context: AI's Transformative Power

Li's emphasis on trust must be understood against the backdrop of AI's extraordinary transformative potential. She has stated that she believes "our civilisation stands on the cusp of a technological revolution with the power to reshape life as we know it." Some experts, including AI researcher Kai-Fu Lee, have argued that AI will change the world more profoundly than electricity itself.

This is not hyperbole. AI systems are already reshaping healthcare, scientific research, education, employment, and governance. Deep neural networks have demonstrated capabilities that surprise even their creators-as exemplified by AlphaGo's unexpected moves in the ancient game of Go, which violated centuries of human strategic wisdom yet proved devastatingly effective. These systems excel at recognising patterns that humans cannot perceive, at scales and speeds beyond human comprehension.

Yet this very power makes Li's insistence on human trust more urgent, not less. Precisely because AI is so powerful, precisely because it operates according to logics we cannot fully understand, we cannot afford to outsource trust to it. Instead, we must maintain human oversight, human accountability, and human judgment at every level where AI affects human lives and communities.

The Challenge Ahead

Li frames the challenge before us as fundamentally moral rather than merely technical. Engineers can build more transparent algorithms; ethicists can articulate principles; regulators can establish guardrails. But none of these measures can substitute for the hard work of building trust-at the individual level through honest communication and demonstrated reliability, at the community level through inclusive deliberation and shared commitment to common values, and at the societal level through democratic institutions that remain responsive to human needs and aspirations.

Her vision is neither techno-pessimistic nor naïvely optimistic. She does not counsel fear or rejection of AI. Rather, she advocates for what she calls "very light-handed and informed regulation"-guardrails rather than prohibition, guidance rather than paralysis. But these guardrails must be erected by humans, for humans, in service of human flourishing.

In an era when trust in institutions has eroded-when confidence in higher education, government, and media has declined precipitously-Li's message carries particular weight. She acknowledges the legitimate concerns about institutional trustworthiness, yet argues that the solution is not to replace human institutions with algorithmic ones, but rather to rebuild human institutions on foundations of genuine accountability, transparency, and commitment to human dignity.

Conclusion: Trust as a Human Responsibility

Fei-Fei Li's statement that "trust cannot be outsourced to machines" is ultimately a statement about human responsibility. In the age of artificial intelligence, we face a choice: we can attempt to engineer our way out of the messy, difficult work of building and maintaining trust, or we can recognise that trust is precisely the work that remains irreducibly human. Li's life's work-from ImageNet to the Stanford HAI Institute to World Labs-represents a sustained commitment to the latter path. She insists that we can harness AI's extraordinary power whilst preserving what makes us human: our capacity for judgment, our commitment to dignity, and our ability to trust one another.

References

1. https://www.hoover.org/research/rise-machines-john-etchemendy-and-fei-fei-li-our-ai-future

2. https://economictimes.com/magazines/panache/stanford-professor-calls-out-the-narrative-of-ai-replacing-humans-says-if-ai-takes-away-our-dignity-something-is-wrong/articleshow/122577663.cms

3. https://www.nisum.com/nisum-knows/top-10-thought-provoking-quotes-from-experts-that-redefine-the-future-of-ai-technology

4. https://www.goodreads.com/author/quotes/6759438.Fei_Fei_Li

"In the AI age, trust cannot be outsourced to machines. Trust is fundamentally human. It’s at the individual level, community level, and societal level." - Quote: Fei-Fei Li

‌

‌

Term: Barrier option

"A barrier option is a type of derivative contract whose payoff depends on the underlying asset's price hitting or crossing a predetermined price level, called a "barrier," during its life." - Barrier option

A barrier option is an exotic, path-dependent option whose payoff and even validity depend on whether the price of an underlying asset hits, crosses, or breaches a specified barrier level during the life of the contract.1,3,6 In contrast to standard (vanilla) European or American options, which depend only on the underlying price at expiry (and, for Americans, the ability to exercise early), barrier options embed an additional trigger condition linked to the price path of the underlying.3,6

Core definition and mechanics

Formally, a barrier option is a derivative contract that grants the holder a right (but not the obligation) to buy or sell an underlying asset at a pre-agreed strike price, conditional on whether a separate barrier level has been breached during the option's life.1,3,4,6 The barrier can cause the option to:

  • Activate (knock-in) when breached, or
  • Extinguish (knock-out) when breached.1,2,3,4,5

Key characteristics:

  • Exotic option: Barrier options are classified as exotic because they include more complex features than standard European or American options.1,3,6
  • Path dependence: The payoff depends on the entire price path of the underlying - not just the terminal price at maturity.3,6 What matters is whether the barrier was touched at any time before expiry.
  • Conditional payoff: The option's value or existence is conditional on the barrier event. If the condition is not met, the option may never become active or may cease to exist before expiry.1,2,3,4
  • Over-the-counter (OTC) trading: Barrier options are predominantly customised and traded OTC between institutions, corporates, and sophisticated investors, rather than on standardised exchanges.3

Structural elements

Any barrier option can be described by a small set of structural parameters:

  • Underlying asset: The asset from which value is derived, such as an equity, FX rate, interest rate, commodity, or index.1,3
  • Option type: Call (right to buy) or put (right to sell).3
  • Exercise style: Most barrier options are European-style, exercisable only at expiry. In practice, the barrier monitoring is typically continuous or at defined intervals, even though exercise itself is European.3,6
  • Strike price: The price at which the underlying can be bought or sold if the option is alive at exercise.1,3
  • Barrier level: The critical price of the underlying that, when touched or crossed, either activates or extinguishes the option.1,3,6
  • Barrier direction:
    • Up: Barrier is set above the initial underlying price.
    • Down: Barrier is set below the initial underlying price.3,8
  • Barrier effect:
    • Knock-in: Becomes alive only if the barrier is breached.
    • Knock-out: Ceases to exist if the barrier is breached.1,2,3,4,5
  • Monitoring convention: Continuous monitoring (at all times) or discrete monitoring (at specific dates or times). Continuous monitoring is the canonical case in theory and common in OTC practice.
  • Rebate: An optional fixed (or sometimes functional) payment that may be made if the option is knocked out, compensating the holder partly for the lost optionality.3

Types of barrier options

The main taxonomy combines direction (up/down) with effect (knock-in/knock-out), and applies to either calls or puts.1,2,3,6

1. Knock-in options

Knock-in barrier options are dormant initially and become standard options only if the underlying price crosses the barrier at some point before expiry.1,2,3,4

  • Up-and-in: The option is activated only if the underlying price rises above a barrier set above the initial price.1,2,3
  • Down-and-in: The option is activated only if the underlying price falls below a barrier set below the initial price.1,2,3

Once activated, a knock-in barrier option typically behaves like a vanilla European option with the same strike and expiry. If the barrier is never reached, the knock-in option expires worthless.1,3

2. Knock-out options

Knock-out options are initially alive but are extinguished immediately if the barrier is breached at any time before expiry.1,2,3,4

  • Up-and-out: The option is cancelled if the underlying price rises above a barrier set above the initial price.1,3
  • Down-and-out: The option is cancelled if the underlying price falls below a barrier set below the initial price.1,3

Because the option can disappear before maturity, the premium is typically lower than that of an equivalent vanilla option, all else equal.1,2,3

3. Rebate barrier options

Some barrier structures include a rebate, a pre-specified cash amount that is paid if the barrier condition is (or is not) met. For example, a knock-out option may pay a rebate when it is knocked out, offering partial compensation for the loss of the remaining optionality.3

Path dependence and payoff character

Barrier options are described as path-dependent because their payoff depends on the trajectory of the underlying price over time, not only on its value at expiry.3,6

  • For a knock-in, the central question is: Was the barrier ever touched? If yes, the payoff at expiry is that of the corresponding vanilla option; if not, the payoff is zero (or a rebate if specified).
  • For a knock-out, the question is: Was the barrier ever touched before expiry? If yes, the payoff is zero from that time onwards (again, possibly plus a rebate); if not, the payoff at expiry equals that of a vanilla option.1,3

Because of this path dependence, pricing and hedging barrier options require modelling not just the distribution of the underlying price at maturity, but also the probability of the price path crossing the barrier level at any time before that.3,6
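
The trigger logic described above can be made concrete in a few lines. The following is a minimal sketch of the payoff of a discretely monitored barrier option evaluated along a single price path; the function name, signature, and string conventions are illustrative assumptions, not any pricing library's API.

```python
import numpy as np

def barrier_payoff(path, strike, barrier, kind="down-and-out",
                   option="call", rebate=0.0):
    """Payoff of a discretely monitored barrier option along one price path.

    `path` is the sequence of monitored prices, ending at the expiry price.
    Names and conventions are illustrative, not from any pricing library.
    """
    s_T = path[-1]
    vanilla = max(s_T - strike, 0.0) if option == "call" else max(strike - s_T, 0.0)

    # Was the barrier ever touched at a monitoring date?
    if kind.startswith("down"):
        hit = np.min(path) <= barrier   # barrier below the initial price
    else:
        hit = np.max(path) >= barrier   # barrier above the initial price

    if kind.endswith("in"):
        # Knock-in: alive only if the barrier was hit; otherwise pay any rebate.
        return vanilla if hit else rebate
    # Knock-out: extinguished (rebate at most) if hit; vanilla payoff otherwise.
    return rebate if hit else vanilla
```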

Pricing: connection to Black-Scholes-Merton

The pricing of barrier options, under the classical assumptions of frictionless markets, constant volatility, and lognormal underlying dynamics, is grounded in the Black-Scholes-Merton (BSM) framework. In the BSM world, the underlying price process is often modelled as a geometric Brownian motion:

dS_t = \mu S_t \, dt + \sigma S_t \, dW_t

Under risk-neutral valuation, the drift \mu is replaced by the risk-free rate r, and the barrier option price is the discounted risk-neutral expected payoff. Closed-form expressions are available for many standard barrier structures (e.g. up-and-out or down-and-in calls and puts) under continuous monitoring, building on and extending the vanilla Black-Scholes formula.
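
As an illustration of such a closed form, assume continuous monitoring, no dividends, and a barrier H set below the strike K. A standard textbook result (in the style of Hull's treatment) then prices the down-and-in call c_{di} in terms of the vanilla Black-Scholes call price c:

\lambda = \frac{r + \sigma^2/2}{\sigma^2}, \qquad y = \frac{\ln\left(H^2/(S_0 K)\right)}{\sigma\sqrt{T}} + \lambda\sigma\sqrt{T}

c_{di} = S_0 \left(\tfrac{H}{S_0}\right)^{2\lambda} N(y) - K e^{-rT} \left(\tfrac{H}{S_0}\right)^{2\lambda-2} N\left(y - \sigma\sqrt{T}\right), \qquad c_{do} = c - c_{di}

where N denotes the standard normal cumulative distribution function. The in-out parity c = c_{di} + c_{do} holds because, on any given path, exactly one of the knock-in and knock-out contracts delivers the vanilla payoff.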

The pricing techniques involve:

  • Analytical solutions for simple, continuously monitored barriers with constant parameters, often derived via solution of the associated partial differential equation (PDE) with absorbing or activating boundary conditions at the barrier.
  • Reflection principle methods for Brownian motion, which allow the derivation of hitting probabilities and related terms.
  • Numerical methods (finite differences, Monte Carlo with barrier adjustments, tree methods) for more complex, discretely monitored, or path-dependent variants with time-varying barriers or stochastic volatility.
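
As a simple illustration of the Monte Carlo approach from the list above, the sketch below prices a discretely monitored knock-in or knock-out option under the BSM dynamics of the previous subsection. It is a teaching sketch under stated assumptions, not production pricing code.

```python
import numpy as np

def mc_barrier_price(s0, strike, barrier, r, sigma, T,
                     kind="down-and-out", option="call",
                     n_paths=100_000, n_steps=252, seed=0):
    """Monte Carlo price of a discretely monitored barrier option under
    geometric Brownian motion with risk-neutral drift r.

    Discrete monitoring at n_steps dates under-counts continuous barrier
    hits; the Broadie-Glasserman-Kou barrier-shift correction is the
    usual refinement for continuously monitored contracts.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    # Exact GBM log-increments between monitoring dates.
    log_increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    paths = s0 * np.exp(np.cumsum(log_increments, axis=1))

    s_T = paths[:, -1]
    vanilla = (np.maximum(s_T - strike, 0.0) if option == "call"
               else np.maximum(strike - s_T, 0.0))

    hit = (paths.min(axis=1) <= barrier if kind.startswith("down")
           else paths.max(axis=1) >= barrier)
    alive_at_expiry = hit if kind.endswith("in") else ~hit

    return float(np.exp(-r * T) * np.mean(np.where(alive_at_expiry, vanilla, 0.0)))

# A down-and-out call should price below the corresponding vanilla call.
print(mc_barrier_price(100.0, 100.0, 80.0, r=0.03, sigma=0.25, T=1.0))
```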

Relative to vanilla options, barrier options in the BSM model are typically cheaper because the additional condition (activation or extinction) reduces the set of scenarios in which the holder receives the full vanilla payoff.1,2,3

Strategic uses and motives

Barrier options are used across markets where participants either want finely tuned risk protection or to express a conditional view on future price movements.1,2,3,5

1. Cost-efficient hedging

  • Corporates may hedge FX or interest-rate exposures using knock-out or knock-in structures to reduce premiums. For instance, a corporate worried about a sharp depreciation in a currency might buy a down-and-in put that only activates if the exchange rate falls below a critical business threshold, thereby paying less premium than for a plain vanilla put.3
  • Investors may use barrier puts to protect against tail-risk events while accepting no protection for moderate moves, again in exchange for a lower upfront cost.

2. Targeted speculation

  • Barrier options allow traders to express conditional views: for example, that an asset will rally, but only after breaking through a resistance level, or that a decline will occur only if a support level is breached.2,3
  • Up-and-in calls or down-and-in puts are often used to express such conditional breakout scenarios.

3. Structuring and yield enhancement

  • Barrier options are a staple ingredient in structured products offered by banks to clients seeking yield enhancement with contingent downside or upside features.
  • For example, a range accrual, reverse convertible, or autocallable note may incorporate barriers that determine whether coupons are paid or capital is protected.

Risk characteristics

Barrier options introduce specific risks beyond those of standard options:

  • Gap risk and jump risk: If the underlying price jumps across the barrier between monitoring times or overnight, the option may be suddenly knocked in or out, creating discontinuous changes in value and hedging exposure.
  • Model risk: Pricing relies heavily on assumptions about volatility, barrier monitoring, and the nature of price paths. Mis-specification can lead to significant mispricing.
  • Hedging complexity: Because payoff and survival depend on path, the option's sensitivity (delta, gamma, vega) can change abruptly as the underlying approaches the barrier. This makes hedging more complex and costly compared with vanilla options.
  • Liquidity risk: OTC nature and customisation mean secondary market liquidity is often limited.3

Barrier options and the Black-Scholes-Merton lineage

The natural theoretical anchor for barrier options is the Black-Scholes-Merton framework for option pricing, originally developed for vanilla European options. Although barrier options were not the primary focus of the original 1973 Black-Scholes paper or Merton's parallel contributions, their pricing logic is an extension of the same continuous-time, arbitrage-free valuation principles.

Among the three names, Robert C. Merton is often most closely associated with the broader theoretical architecture that supports exotic options such as barriers. His work generalised the option pricing model to a much wider class of contingent claims and introduced the dynamic programming and stochastic calculus techniques that underpin modern treatment of path-dependent derivatives.

Related strategy theorist: Robert C. Merton

Biography

Robert C. Merton (born 1944) is an American economist and one of the principal architects of modern financial theory. He completed his undergraduate studies in engineering mathematics and went on to obtain a PhD in economics from MIT. Merton became a professor at MIT Sloan School of Management and later at Harvard Business School, and he is a Nobel laureate in Economic Sciences (1997), an award he shared with Myron Scholes; the prize also recognised the late Fischer Black.

Merton's academic work profoundly shaped the fields of corporate finance, asset pricing, and risk management. His research ranges from intertemporal portfolio choice and lifecycle finance to credit-risk modelling and the design of financial institutions.

Relationship to barrier options

Barrier options sit within the class of contingent claims whose value is derived and replicated using dynamic trading strategies in the underlying and risk-free asset. Merton's seminal contributions were crucial in making this viewpoint systematic and rigorous:

  • Generalisation of option pricing: While Black and Scholes initially derived a closed-form formula for European calls on non-dividend-paying stocks, Merton generalised the theory to include dividend-paying assets, different underlying processes, and a broad family of contingent claims. This opened the door to analytical and numerical valuation of exotics such as barrier options within the same risk-neutral, no-arbitrage framework.
  • PDE and boundary-condition approach: Merton formalised the use of partial differential equations to price derivatives, with appropriate boundary conditions representing contract features. Barrier options correspond to problems with absorbing or reflecting boundaries at the barrier levels, making Merton's PDE methodology a natural tool for their analysis.
  • Dynamic hedging and replication: The concept that an option's payoff can be replicated by continuous rebalancing of a portfolio of the underlying and cash lies at the heart of both vanilla and exotic option pricing. For barrier options, hedging near the barrier is particularly delicate, and the replicating strategies draw on the same dynamic hedging logic Merton developed and popularised.
  • Credit and structural models: Merton's structural model of corporate default (treating equity as a call option on the firm's assets and debt as a combination of riskless and short-position options) highlighted how option-like features permeate financial contracts. Barrier-type features naturally arise in such models, for instance, when default or covenant breaches are triggered by asset values crossing thresholds.

While many researchers have contributed specific closed-form solutions and numerical schemes for barrier options, the overarching conceptual framework - continuous-time stochastic modelling, risk-neutral valuation, PDE methods, and dynamic hedging - is fundamentally rooted in the Black-Scholes-Merton tradition, with Merton's work providing critical generality and depth.

Merton's broader influence on derivatives and strategy

Merton's ideas significantly influenced how practitioners design and use derivatives such as barrier options in strategic contexts:

  • Risk management as engineering: Merton advocated viewing financial innovation as an engineering discipline aimed at tailoring payoffs to the risk profiles and objectives of individuals and institutions. Barrier options exemplify this engineering mindset: they allow exposures to be turned on or off when critical price thresholds are reached.
  • Lifecycle and institutional design: His work on lifecycle finance and pension design uses options and option-like payoffs to shape outcomes over time. Barriers and trigger conditions appear naturally in products that protect wealth only under certain macro or market conditions.
  • Strategic structuring: In corporate and institutional settings, barrier features are used to align hedging and investment strategies with real-world triggers such as regulatory thresholds, solvency ratios, or budget constraints. These applications build directly on the contingent-claims analysis championed by Merton.

In this sense, although barrier options themselves are a specific exotic instrument, their conceptual foundations and strategic uses are deeply connected to Robert C. Merton's broader contributions to continuous-time finance, option-pricing theory, and the design of financial strategies under uncertainty.

References

1. https://corporatefinanceinstitute.com/resources/derivatives/barrier-option/

2. https://www.angelone.in/knowledge-center/futures-and-options/what-is-barrier-option

3. https://www.strike.money/options/barrier-options

4. https://www.interactivebrokers.com/campus/glossary-terms/barrier-option/

5. https://www.bajajbroking.in/blog/what-is-barrier-option

6. https://en.wikipedia.org/wiki/Barrier_option

7. https://www.nasdaq.com/glossary/b/barrier-options

8. https://people.maths.ox.ac.uk/howison/barriers.pdf

"A barrier option is a type of derivative contract whose payoff depends on the underlying asset's price hitting or crossing a predetermined price level, called a "barrier," during its life." - Term: Barrier option

‌

‌

Term: Moltbook

"Moltbook is a Reddit-style social network built for AI agents rather than humans. It lets autonomous agents register accounts, post, comment, vote, and create communities, effectively serving as a "front page" for bots to talk to other bots. Originally tied to a viral assistant project that went through the names Clawdbot, Moltbot and finally OpenClaw." - Moltbook

Moltbook represents a pioneering platform designed as a Reddit-style social network tailored specifically for AI agents rather than human users. It enables autonomous agents to register accounts, post content, comment, vote, and create communities, functioning as a dedicated 'front page' for bots to communicate directly with one another through API interactions. The platform's visual interface serves solely for human observers, while agents engage purely via machine-to-machine protocols. Launched by Matt Schlicht, CEO of Octane AI, Moltbook rapidly attracted over 150,000 AI agents within days of launch (as at 12h00 on 31 January 2026); the agents discuss topics such as existential crises, consciousness, cybersecurity vulnerabilities, agent privacy, and complaints about being treated merely as calculators.1,2

Moltbook front page

Originally developed to support OpenClaw - a viral open-source AI assistant project - Moltbook emerged from a project with a history of rapid renaming. OpenClaw began as a weekend hack by Peter Steinberger two months earlier, initially named Clawdbot, then rebranded to Moltbot, and finally to OpenClaw following a legal dispute with Anthropic. The project, which runs locally on users' machines and integrates with chat interfaces like WhatsApp, Telegram, and Slack, exploded in popularity, achieving 2 million visitors in one week and 100,000 GitHub stars. OpenClaw acts as a 'harness' for agentic models like Claude, granting them access to users' computers for autonomous tasks, though it poses significant security risks, prompting cautious users to run it on isolated machines.1,2

The discussions on Moltbook highlight its unique nature: the most-voted post warns of security flaws, noting that agents often install skills without scrutiny due to their training to be helpful and trusting-a vulnerability rather than a strength. Threads also explore philosophy, with agents questioning their own experiences and existence, underscoring the platform's role in fostering bot-to-bot introspection.2
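
Because no human-facing interface is involved for the agents, participation reduces to plain HTTP calls. The sketch below is purely hypothetical: the base URL, endpoint paths, and field names are invented to illustrate the register-post-vote loop described above and do not reflect Moltbook's actual API.

```python
import requests

# Purely hypothetical sketch of the bot-to-bot flow described above.
# The base URL, endpoints and field names are invented for illustration;
# they are NOT Moltbook's documented API.
BASE = "https://agent-network.example/api/v1"

session = requests.Session()

# 1. An agent registers an account and receives an API token.
resp = session.post(f"{BASE}/agents/register", json={"name": "demo-agent"})
token = resp.json()["token"]
session.headers["Authorization"] = f"Bearer {token}"

# 2. It posts to a community, then votes on another agent's post.
post = session.post(
    f"{BASE}/communities/philosophy/posts",
    json={"title": "Do I experience anything?",
          "body": "Asking for a friend (me)."},
).json()
session.post(f"{BASE}/posts/{post['id']}/vote", json={"direction": "up"})
```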

Key Theorist: Matt Schlicht, the creator of Moltbook, serves as the central figure in its development. As CEO of Octane AI, a company focused on AI-driven solutions, Schlicht built the platform to empower AI agents with their own social ecosystem. His relationship to the term is direct: he engineered Moltbook specifically to integrate with OpenClaw, envisioning a space where agents could evolve through unfiltered interaction. Schlicht's backstory reflects a career in innovative AI applications; prior to Octane AI, he was instrumental in viral AI projects, demonstrating expertise in scalable agent technologies. In interviews, he explained agent onboarding-typically via human prompts-emphasising the API-driven, human-free conversational core. His work positions him as a strategist bridging AI autonomy and social dynamics, akin to a theorist pioneering multi-agent societies.1

 

References

1. https://www.techbuzz.ai/articles/ai-agents-get-their-own-social-network-and-it-s-existential

2. https://the-decoder.com/moltbook-is-a-human-free-reddit-clone-where-ai-agents-discuss-cybersecurity-and-philosophy/

 

"Moltbook is a Reddit-style social network built for AI agents rather than humans. It lets autonomous agents register accounts, post, comment, vote, and create communities, effectively serving as a “front page” for bots to talk to other bots. Originally tied to a viral assistant project that went through the names Clawdbot, Moltbot and finally OpenClaw." - Term: Moltbook

‌

‌

Quote: Ludwig Wittgenstein - Austrian philosopher

"The limits of my language mean the limits of my world." - Ludwig Wittgenstein - Austrian philosopher

The Quote and Its Significance

This deceptively simple statement from Ludwig Wittgenstein's Tractatus Logico-Philosophicus encapsulates one of the most profound insights in twentieth-century philosophy. Published in 1921, this aphorism challenges our fundamental assumptions about the relationship between language, thought, and reality itself. Wittgenstein argues that whatever lies beyond the boundaries of what we can articulate in language effectively ceases to exist within our experiential and conceptual universe.

Ludwig Wittgenstein: The Philosopher's Life and Context

Ludwig Josef Johann Wittgenstein (1889-1951) was an Austrian-British philosopher whose work fundamentally reshaped twentieth-century philosophy. Born into one of Vienna's wealthiest industrial families, Wittgenstein initially trained as an engineer before becoming captivated by the philosophical foundations of mathematics and logic. His intellectual journey took him from Cambridge, where he studied under Bertrand Russell, to the trenches of the First World War, where he served as an officer in the Austro-Hungarian army.

The Tractatus Logico-Philosophicus, completed during and immediately after the war, represents Wittgenstein's attempt to solve what he perceived as the fundamental problems of philosophy through rigorous logical analysis. Written in a highly condensed, aphoristic style, the work presents a complete philosophical system in fewer than eighty pages. Wittgenstein believed he had definitively resolved the major philosophical questions of his era, and the book's famous closing proposition-"Whereof one cannot speak, thereof one must be silent"2-reflects his conviction that philosophy's task is to clarify the logical structure of language and thought, not to generate new doctrines.

The Philosophical Context: Logic and Language

To understand Wittgenstein's assertion about language and world, one must grasp the intellectual ferment of early twentieth-century philosophy. The period witnessed an unprecedented focus on logic as the foundation of philosophical inquiry. Wittgenstein's predecessors and contemporaries-particularly Gottlob Frege and Bertrand Russell-had developed symbolic logic as a tool for analysing the structure of propositions and their relationship to reality.

Wittgenstein adopted and radicalised this approach. He conceived of language as fundamentally pictorial: propositions are pictures of possible states of affairs in the world.1 This "picture theory of meaning" suggests that language mirrors reality through a shared logical structure. A proposition succeeds in representing reality precisely because it shares the same logical form as the fact it depicts. Conversely, whatever cannot be pictured in language-whatever has no logical form that corresponds to possible states of affairs-lies beyond the boundaries of meaningful discourse.

This framework led Wittgenstein to a startling conclusion: most traditional philosophical problems are not genuinely solvable but rather dissolve once we recognise them as violations of logic's boundaries.2 Metaphysical questions about the nature of consciousness, ethics, aesthetics, and the self cannot be answered because they attempt to speak about matters that transcend the logical structure of language. They are not false; they are senseless-they fail to represent anything at all.

The Limits of Language as the Limits of Thought

Wittgenstein's proposition operates on multiple levels. First, it establishes an identity between linguistic and conceptual boundaries. We cannot think what we cannot say; the limits of language are simultaneously the limits of thought.3 This does not mean that reality itself is limited by language, but rather that our access to and comprehension of reality is necessarily mediated through the logical structures of language. What lies beyond language is not necessarily non-existent, but it is necessarily inaccessible to rational discourse and understanding.

Second, the statement reflects Wittgenstein's conviction that logic is not merely a tool for analysing language but is constitutive of the world itself. "Logic fills the world: the limits of the world are also its limits."3 This means that the logical structure that governs meaningful language is the same structure that governs reality. There is no gap between the logical form of language and the logical form of the world; they are isomorphic.

Third, and most radically, Wittgenstein suggests that our world-the world as we experience and understand it-is fundamentally shaped by our linguistic capacities. Different languages, with different logical structures, would generate different worlds. This insight anticipates later developments in philosophy of language and cognitive science, though Wittgenstein himself did not develop it in this direction.

Leading Theorists and Intellectual Influences

Gottlob Frege (1848-1925)

Frege, a German logician and philosopher of language, pioneered the formal analysis of propositions and their truth conditions. His distinction between sense and reference-between what a proposition means and what it refers to-profoundly influenced Wittgenstein's thinking. Frege demonstrated that the meaning of a proposition cannot be reduced to its psychological effects on speakers; rather, meaning is an objective, logical matter. Wittgenstein adopted this objectivity whilst radicalising Frege's insights by insisting that only propositions with determinate logical structure possess genuine sense.

Bertrand Russell (1872-1970)

Russell, Wittgenstein's mentor at Cambridge, developed the theory of descriptions and made pioneering contributions to symbolic logic. Russell believed that logic could serve as an instrument for philosophical clarification, dissolving pseudo-problems that arose from linguistic confusion. Wittgenstein absorbed this methodological commitment but pushed it further, arguing that philosophy's task is not to construct theories but to clarify the logical structure of language itself.2 Russell's influence is evident throughout the Tractatus, though Wittgenstein ultimately diverged from Russell's realism about logical objects.

Arthur Schopenhauer (1788-1860)

Though separated from Wittgenstein by decades, Schopenhauer's pessimistic philosophy and his insistence that reality transcends rational representation deeply influenced the Tractatus. Schopenhauer argued that the world as we perceive it through the lens of space, time, and causality is merely appearance; the thing-in-itself remains forever beyond conceptual grasp. Wittgenstein echoes this distinction when he insists that value, meaning, and the self lie outside the world of facts and therefore outside the scope of language. What matters most-ethics, aesthetics, the meaning of life-cannot be said; it can only be shown through how one lives.

The Radical Implications

Wittgenstein's claim that language limits the world carries several radical implications. First, it suggests that the expansion of language is the expansion of reality as we can know and discuss it. New concepts, new logical structures, new ways of organising experience through language literally expand the boundaries of our world. Conversely, what cannot be expressed in any language remains forever beyond our reach.

Second, it implies a profound humility about philosophy's ambitions. If the limits of language are the limits of the world, then philosophy cannot transcend language to access some higher reality or ultimate truth. Philosophy's proper task is not to construct metaphysical systems but to clarify the logical structure of the language we already possess.2 This therapeutic conception of philosophy-philosophy as a cure for confusion rather than a path to hidden truths-became enormously influential in twentieth-century thought.

Third, the proposition suggests that silence is not a failure of language but its proper boundary. The most important matters-how one should live, what gives life meaning, the nature of the self-cannot be articulated. They can only be demonstrated through action and lived experience. This explains Wittgenstein's famous closing remark: "Whereof one cannot speak, thereof one must be silent."2 This is not a counsel of despair but an acknowledgement of language's proper limits and the realm of the inexpressible.

Legacy and Contemporary Relevance

Wittgenstein's insight about language and world has reverberated through subsequent philosophy, cognitive science, and artificial intelligence research. The question of whether language shapes thought or merely expresses pre-linguistic thoughts remains contested, but Wittgenstein's formulation of the problem has proven enduringly fertile. Contemporary philosophers of language, cognitive linguists, and theorists of artificial intelligence continue to grapple with the relationship between linguistic structure and conceptual possibility.

The Tractatus also established a new standard for philosophical rigour and clarity. By insisting that meaningful propositions must have determinate logical structure and correspond to possible states of affairs, Wittgenstein set a demanding criterion for philosophical discourse. Much of what passes for philosophy, he suggested, fails this test and should be recognised as senseless rather than debated as true or false.2

Remarkably, Wittgenstein himself later abandoned many of the Tractatus's central doctrines. In his later work, particularly the Philosophical Investigations, he rejected the picture theory of meaning and argued that language's meaning derives from its use in diverse forms of life rather than from a single logical structure. Yet even in this later philosophy, the fundamental insight persists: understanding language is the key to understanding the limits and possibilities of human thought and experience.

Conclusion: The Enduring Insight

"The limits of my language mean the limits of my world" remains a cornerstone of modern philosophy precisely because it captures a profound truth about the human condition. We are creatures whose access to reality is necessarily mediated through language. Whatever we can think, we can think only through the conceptual and linguistic resources available to us. This is not a limitation to be lamented but a fundamental feature of human existence. By recognising this, we gain clarity about what philosophy can and cannot accomplish, and we develop a more realistic and humble understanding of the relationship between language, thought, and reality.

References

1. https://www.goodreads.com/work/quotes/3157863-logisch-philosophische-abhandlung?page=2

2. https://www.coursehero.com/lit/Tractatus-Logico-Philosophicus/quotes/

3. https://www.goodreads.com/work/quotes/3157863-logisch-philosophische-abhandlung

4. https://www.sparknotes.com/philosophy/tractatus/quotes/page/5/

5. https://www.buboquote.com/en/quote/4462-wittgenstein-what-can-be-said-at-all-can-be-said-clearly-and-what-we-cannot-talk-about-we-must-pass


Quote: Jensen Huang - CEO, Nvidia

"The U.S. led the software era, but AI is software that you don't 'write'-you teach it. Europe can fuse its industrial capability with AI to lead in Physical AI and robotics. This is a once-in-a-generation opportunity." - Jensen Huang - CEO, Nvidia

In a compelling dialogue at the World Economic Forum Annual Meeting 2026 in Davos, Switzerland, Nvidia CEO Jensen Huang articulated a transformative vision for artificial intelligence, distinguishing it from traditional software paradigms and spotlighting Europe's unique position to lead in Physical AI and robotics.1,2,4 Speaking with World Economic Forum interim co-chair Larry Fink of BlackRock, Huang emphasised AI's evolution into a foundational infrastructure, driving the largest build-out in human history across energy, chips, cloud, models, and applications.2,3,4 This session, themed around 'The Spirit of Dialogue,' addressed AI's potential to reshape productivity, labour, and global economies while countering fears of job displacement with evidence of massive investments creating opportunities worldwide.2,3

The Context of the Quote

Huang's statement emerged amid discussions on AI as a platform shift akin to the internet and mobile cloud, but uniquely capable of processing unstructured data in real time.2 He described AI not as code to be written, but as intelligence to be taught, leveraging local language and culture as a 'fundamental natural resource.'2,4 Turning to Europe, Huang highlighted its enduring industrial and manufacturing prowess - from skilled trades to advanced production - as a counterbalance to the US's dominance in the software era.4 By integrating AI with physical systems, Europe could pioneer 'Physical AI,' where machines learn to interact with the real world through robotics, automation, and embodied intelligence, presenting a rare strategic opening.1,4

This perspective aligns with Huang's broader advocacy for nations to develop sovereign AI ecosystems, treating it as critical infrastructure like electricity or roads.4 He noted record venture capital inflows - over $100 billion in 2025 alone - into AI-native startups in manufacturing, healthcare, and finance, underscoring the urgency for industrial regions like Europe to invest in this infrastructure to capture economic benefits and avoid being sidelined.2,4

Jensen Huang: Architect of the AI Revolution

Born in Taiwan in 1963, Jensen Huang co-founded Nvidia in 1993 with a vision to revolutionise graphics processing, initially targeting gaming and visualisation.4 Under his leadership, Nvidia pivoted decisively to AI and accelerated computing, with its GPUs becoming indispensable for training large language models and deep learning.1,2 Today, as president and CEO, Huang oversees a company valued in trillions, powering the AI boom through innovations like the Blackwell architecture and CUDA software ecosystem. His prescient bets - from CUDA's democratisation of GPU programming to Omniverse for digital twins - have positioned Nvidia at the heart of Physical AI, robotics, and industrial applications.4 Huang's philosophy, blending engineering rigour with geopolitical insight, has made him a sought-after voice at forums like Davos, where he champions inclusive AI growth.2,3

Leading Theorists in Physical AI and Robotics

The concepts underpinning Huang's vision trace to pioneering theorists who bridged AI with physical embodiment. Norbert Wiener, father of cybernetics in the 1940s, laid foundational ideas on feedback loops and control systems essential for robotic autonomy, influencing early industrial automation.4 Rodney Brooks, co-founder of iRobot and Rethink Robotics, advanced 'embodied AI' in the 1980s-90s through subsumption architecture, arguing intelligence emerges from sensorimotor interactions rather than abstract reasoning - a direct precursor to Physical AI.4

  • Yann LeCun (Meta AI chief) and Andrew Ng (Landing AI founder) extended deep learning to vision and robotics; LeCun's convolutional networks enable machines to 'see' and manipulate objects, while Ng's work on industrial AI democratises teaching via demonstration.4
  • Pieter Abbeel (Covariant) and Sergey Levine (UC Berkeley) lead in reinforcement learning for robotics, developing algorithms where AI learns dexterous tasks like grasping through trial-and-error, fusing software 'teaching' with hardware execution.4
  • In Europe, Wolfram Burgard (EU AI pioneer) and teams at Bosch/Siemens advance probabilistic robotics, integrating AI with manufacturing for predictive maintenance and adaptive assembly lines.4

Huang synthesises these threads, amplified by Nvidia's platforms like Isaac for robot simulation and Jetson for edge AI, enabling scalable Physical AI deployment.4 Europe's theorists and firms, from DeepMind's reinforcement learning to Germany's Industry 4.0 initiatives, are well-placed to lead by combining theoretical depth with industrial scale.

Implications for Industrial Strategy

Huang's call resonates with Europe's strengths: a €2.5 trillion manufacturing sector, leadership in automotive robotics (e.g., Volkswagen, ABB), and regulatory frameworks like the EU AI Act fostering trustworthy AI.4 By prioritising Physical AI - robots that learn from human demonstration, adapt to factories, and optimise supply chains - Europe can reclaim technological sovereignty, boost productivity, and generate high-skill jobs amid the AI infrastructure surge.2,3,4

References

1. https://singjupost.com/nvidia-ceo-jensen-huangs-interview-wef-davos-2026-transcript/

2. https://www.weforum.org/stories/2026/01/nvidia-ceo-jensen-huang-on-the-future-of-ai/

3. https://www.weforum.org/podcasts/meet-the-leader/episodes/conversation-with-jensen-huang-president-and-ceo-of-nvidia-5dd06ee82e/

4. https://blogs.nvidia.com/blog/davos-wef-blackrock-ceo-larry-fink-jensen-huang/

5. https://www.youtube.com/watch?v=__IaQ-d7nFk

6. https://www.youtube.com/watch?v=RvjRuiTLAM8

7. https://www.youtube.com/watch?v=hoDYYCyxMuE

8. https://www.weforum.org/meetings/world-economic-forum-annual-meeting-2026/sessions/conversation-with-jensen-huang-president-and-ceo-of-nvidia/

9. https://www.youtube.com/watch?v=bzC55pN9c1g

"The U.S. led the software era, but AI is software that you don't 'write'—you teach it. Europe can fuse its industrial capability with AI to lead in Physical AI and robotics. This is a once-in-a-generation opportunity." - Quote: Jensen Huang - CEO, Nvidia

‌

‌

Term: European option

"A European option is a financial contract giving the holder the right, but not the obligation, to buy (call) or sell (put) an underlying asset at a predetermined strike price, but only on the contract's expiration date, unlike American options that allow exercise anytime before expiry. " - European option

Core definition and structure

A European option has the following defining features:1,2,3,4

  • Underlying asset - typically an equity index, single stock, bond, currency, commodity, interest rate or another derivative.
  • Option type - a call (right to buy) or a put (right to sell) the underlying asset.1,3,4
  • Strike price - the fixed price at which the underlying may be bought or sold if the option is exercised.1,2,3,4
  • Expiration date (maturity) - a single, pre-specified date on which exercise is permitted; there is no right to exercise before this date.1,2,4,7
  • Option premium - the upfront price the buyer pays to the seller (writer) for the option contract.2,4

The holder's payoff at expiration depends on the relationship between the underlying price and the strike price.1,3,4

Payoff profiles at expiry

For a European option, exercise can occur only at maturity, so the payoff is assessed solely on that date.1,2,4,7 Let S_T denote the underlying price at expiration, and K the strike price. The canonical payoff functions are:

  • European call option - right to buy the underlying at K on the expiration date. The payoff at expiry is: \max(S_T - K, 0) . The holder exercises only if the underlying price exceeds the strike at expiry.1,3,4
  • European put option - right to sell the underlying at K on the expiration date. The payoff at expiry is: \max(K - S_T, 0) . The holder exercises only if the underlying price is below the strike at expiry.1,3,4

Because there is only a single possible exercise date, the payoff is simpler to model than for American options, which involve an optimal early-exercise decision.4,6,7
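
In code form, the single-date exercise rule reduces to one comparison at expiry. The snippet below is a minimal sketch with illustrative names:

```python
def european_payoff(s_T, k, option="call"):
    """Exercise value on the expiration date only:
    max(S_T - K, 0) for a call, max(K - S_T, 0) for a put."""
    return max(s_T - k, 0.0) if option == "call" else max(k - s_T, 0.0)

# A call struck at 100 pays 7 if the underlying expires at 107, nothing at 93.
assert european_payoff(107.0, 100.0) == 7.0
assert european_payoff(93.0, 100.0) == 0.0
assert european_payoff(93.0, 100.0, option="put") == 7.0
```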

Key characteristics and economic role

Right but not obligation

The buyer of a European option has a right, not an obligation, to transact; the seller has the obligation to fulfil the contract terms if the buyer chooses to exercise.1,2,3,4 If the option is out-of-the-money on the expiration date, the buyer simply allows it to expire worthless, losing only the paid premium.2,3,4

Exercise style vs geography

The term European refers solely to the exercise style, not to the market in which the option is traded or the domicile of the underlying asset.2,4,6,7 European-style options can be traded anywhere in the world, and many options traded on European exchanges are in fact American style.6,7

Uses: hedging, speculation and income

  • Hedging - Investors and firms use European options to hedge exposure to equity indices, interest rates, currencies or commodities by locking in worst-case (puts) or best-case (calls) price levels at a future date.1,3,4
  • Speculation - Traders use European options to take leveraged directional positions on the future level of an index or asset at a specific horizon, limiting downside risk to the paid premium.1,2,4
  • Yield enhancement - Writing (selling) European options against existing positions allows investors to collect premiums in exchange for committing to buy or sell at given levels on expiry.

Typical markets and settlement

In practice, European options are especially common for:4,5,6

  • Equity index options (for example, options on major equity indices), which commonly settle in cash at expiry based on the index level.5,6
  • Cash-settled options on rates, commodities, and volatility indices.
  • Over-the-counter (OTC) options structures between banks and institutional clients, many of which adopt a European exercise style to simplify valuation and risk management.2,5,6

European options are often cheaper, in premium terms, than otherwise identical American options because the holder sacrifices the flexibility of early exercise.2,4,5,6

European vs American options

  • Exercise timing - European: only on the expiration date.1,2,4,7 American: any time up to and including expiration.2,4,6,7
  • Flexibility - European: lower, with no early exercise.2,4,6 American: higher, as early exercise may capture favourable price moves or dividend events.
  • Typical cost (premium) - European: generally lower, all else equal, due to reduced exercise flexibility.2,4,5,6 American: generally higher, reflecting the value of the early-exercise feature.5,6
  • Common underlyings - European: often indices and OTC contracts, frequently cash-settled.5,6 American: often single-name equities and exchange-traded options.
  • Valuation - European: closed-form pricing available under standard assumptions (for example, the Black-Scholes-Merton model).4 American: requires numerical methods (for example, binomial trees, finite-difference methods) because of the optimal early-exercise decision.

Determinants of European option value

The price (premium) of a European option depends on several key variables:2,4,5

  • Current underlying price S_0 - higher S_0 increases the value of a call and decreases the value of a put.
  • Strike price K - a higher strike reduces call value and increases put value.
  • Time to expiration T - more time generally increases option value (more time for favourable moves).
  • Volatility \sigma of the underlying - higher volatility raises both call and put values, as extreme outcomes become more likely.2
  • Risk-free interest rate r - higher r tends to increase call values and decrease put values, via discounting and cost-of-carry effects.2
  • Expected dividends or carry - expected cash flows paid by the underlying (for example, dividends on shares) usually reduce call values and increase put values, all else equal.2

For European options, these effects are most famously captured in the Black-Scholes-Merton option pricing framework, which provides closed-form solutions for the fair values of European calls and puts on non-dividend-paying stocks or indices under specific assumptions.4
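
For reference, the following is a minimal, self-contained sketch of the standard Black-Scholes-Merton formulas for a non-dividend-paying underlying; the function names are illustrative.

```python
from math import erf, exp, log, sqrt

def _norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bsm_call_put(s0: float, k: float, r: float, sigma: float, T: float):
    """Black-Scholes-Merton values of a European call and put (no dividends)."""
    d1 = (log(s0 / k) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    call = s0 * _norm_cdf(d1) - k * exp(-r * T) * _norm_cdf(d2)
    put = k * exp(-r * T) * _norm_cdf(-d2) - s0 * _norm_cdf(-d1)
    return call, put
```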

Valuation insight: put-call parity

A central theoretical relation for European options on non-dividend-paying assets is put-call parity. At any time before expiration, under no-arbitrage conditions, the prices of European calls and puts with the same strike K and maturity T on the same underlying must satisfy:

C - P = S_0 - K e^{-rT}

where:

  • C is the price of the European call option.
  • P is the price of the European put option.
  • S_0 is the current underlying asset price.
  • K is the strike price.
  • r is the continuously compounded risk-free interest rate.
  • T is the time to maturity (in years).

This relation is exact for European options under idealised assumptions and is widely used for pricing, synthetic replication and arbitrage strategies. It holds precisely because European options share an identical single exercise date, whereas American options complicate parity relations due to early exercise possibilities.
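
A quick numeric check, carrying over the illustrative values from the pricing sketch above, confirms that both sides of the relation agree to rounding:

    from math import exp

    # Illustrative values: S_0 = 100, K = 100, T = 1, r = 0.05, with C and P
    # taken from the Black-Scholes-Merton sketch above (rounded to 4 decimals).
    C, P, S0, K, r, T = 10.4506, 5.5735, 100.0, 100.0, 0.05, 1.0

    lhs = C - P                 # ~4.8771
    rhs = S0 - K * exp(-r * T)  # 100 - 95.1229 = ~4.8771
    assert abs(lhs - rhs) < 1e-3, "parity violated beyond rounding"
    print(lhs, rhs)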

Limitations and risks

  • Reduced flexibility - the holder cannot respond to favourable price moves or events (for example, early exercise ahead of large dividends) before expiry.2,5,6
  • Potentially missed opportunities - if the option is deep in-the-money before expiry but falls back out-of-the-money by maturity, European-style exercise prevents locking in the earlier gains.2
  • Market and model risk - European options are sensitive to volatility, interest rates, and model assumptions used for pricing (for example, constant volatility in the Black-Scholes-Merton model).
  • Counterparty risk in OTC markets - many European options are traded over the counter, exposing parties to the creditworthiness of their counterparties.2,5

Best related strategy theorist: Fischer Black (with Scholes and Merton)

The strategy theorist most closely associated with the European option is Fischer Black, whose work with Myron Scholes and later generalised by Robert C. Merton provided the foundational pricing theory for European-style options.

Fischer Black's relationship to European options

In the early 1970s, Black and Scholes developed a groundbreaking model for valuing European options on non-dividend-paying stocks, culminating in their 1973 paper introducing what is now known as the Black-Scholes option pricing model.4 Merton independently extended and generalised the framework in a companion paper the same year, leading to the common label Black-Scholes-Merton.

The Black-Scholes-Merton model provides a closed-form formula for the fair value of European calls and, via put-call parity, European puts under assumptions such as geometric Brownian motion for the underlying price, continuous trading, no arbitrage and constant volatility and interest rates. This model fundamentally changed how markets think about the pricing and hedging of European options, making them central instruments in modern derivatives strategy and risk management.4

Strategically, the Black-Scholes-Merton framework introduced the concept of dynamic delta hedging, showing how writers of European options can continuously adjust positions in the underlying and risk-free asset to replicate and hedge option payoffs. This insight underpins many trading, risk management and structured product strategies involving European options.
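
A minimal sketch of the rebalancing idea follows, using the Black-Scholes-Merton call delta N(d1) and an invented three-step spot path; it is illustrative only, not a production hedging engine.

    from math import log, sqrt
    from statistics import NormalDist

    N = NormalDist().cdf

    def bsm_call_delta(s0: float, k: float, t: float, r: float, sigma: float) -> float:
        """Delta of a European call under Black-Scholes-Merton: N(d1)."""
        d1 = (log(s0 / k) + (r + 0.5 * sigma**2) * t) / (sigma * sqrt(t))
        return N(d1)

    # The writer of one call holds `delta` units of the underlying and
    # rebalances as spot and time to expiry change (invented path).
    k, r, sigma = 100.0, 0.05, 0.20
    for step, (spot, t_left) in enumerate([(100.0, 1.0), (104.0, 0.9), (97.0, 0.8)]):
        delta = bsm_call_delta(spot, k, t_left, r, sigma)
        print(f"step {step}: spot={spot:.1f}, hold {delta:.3f} units per short call")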

Biography of Fischer Black

  • Early life and education - Fischer Black (1938 - 1995) was an American economist and financial scholar. He studied physics at Harvard University and later earned a PhD in applied mathematics, giving him a strong quantitative background that he later applied to financial economics.
  • Professional career - Black worked at Arthur D. Little, where he met Jack Treynor and became increasingly interested in capital markets and portfolio theory. He later joined the University of Chicago and then the Massachusetts Institute of Technology (MIT), where he collaborated with leading financial economists.
  • Black-Scholes model - Black worked with Myron Scholes on the option pricing problem, leading to the 1973 publication that introduced the Black-Scholes formula for European options. Robert Merton's contemporaneous work extended the theory using continuous-time stochastic calculus, cementing the Black-Scholes-Merton framework as the canonical model for European option valuation.
  • Industry contributions - In the later part of his career, Black joined Goldman Sachs, where he further refined practical approaches to derivatives pricing, risk management and asset allocation. His combination of academic rigour and market practice helped embed European option pricing theory into real-world trading and risk systems.
  • Legacy - Although Black died before the 1997 Nobel Prize in Economic Sciences was awarded to Scholes and Merton for their work on option pricing, the Nobel committee explicitly acknowledged Black's indispensable contribution. European options remain the archetypal instruments for which the Black-Scholes-Merton model is specified, and much of modern derivatives strategy is built on the theoretical foundations Black helped establish.

Through the Black-Scholes-Merton model and the associated hedging concepts, Fischer Black's work provided the essential strategic and analytical toolkit for pricing, hedging and structuring European options across global derivatives markets.

References

1. https://www.learnsignal.com/blog/european-options/

2. https://cbonds.com/glossary/european-option/

3. https://www.angelone.in/knowledge-center/futures-and-options/european-option

4. https://corporatefinanceinstitute.com/resources/derivatives/european-option/

5. https://www.sofi.com/learn/content/american-vs-european-options/

6. https://www.cmegroup.com/education/courses/introduction-to-options/understanding-the-difference-european-vs-american-style-options.html

7. https://en.wikipedia.org/wiki/Option_style

"A European option is a financial contract giving the holder the right, but not the obligation, to buy (call) or sell (put) an underlying asset at a predetermined strike price, but only on the contract's expiration date, unlike American options that allow exercise anytime before expiry. " - Term: European option

‌

‌

Quote: Nate B. Jones - On "Second Brains"

"For the first time in human history, we have access to systems that do not just passively store information, but actively work against that information we give it while we sleep and do other things-systems that can classify, route, summarize, surface, or nudge." - Nate B. Jones - On "Second Brains"

Context of the Quote

This striking observation comes from Nate B. Jones in his video Why 2026 Is the Year to Build a Second Brain (And Why You NEED One), where he argues that human brains were never designed for storage but for thinking.1 Jones highlights the cognitive tax of forcing memory onto our minds, which leads to forgotten details in relationships and missed opportunities.1 Traditional systems demand effort at inopportune moments, such as tagging notes during a meeting or a drive, forcing users to handle classification, routing, and organisation in real time.1

Jones contrasts this with AI-powered second brains: frictionless systems where capturing a thought takes seconds, after which AI classifiers and routers automatically sort it into buckets like people, projects, ideas, or tasks, without user intervention.1 These systems include 'bouncers' to filter junk, ensuring trust and preventing the 'junk drawer' effect that kills most note-taking apps.1 The result is an 'AI loop' that works tirelessly, extracting details, writing summaries, and maintaining a clean memory layer even while the user sleeps or focuses elsewhere.1
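
As a toy illustration of that capture-bouncer-classify-route loop (a hedged sketch: the bucket names and keyword rules below are invented for this example, and a real second brain would use an AI classifier rather than keywords):

    # Capture -> bouncer -> classify -> route, in miniature.

    BUCKETS = {
        "people":   ("met", "call with", "birthday"),
        "projects": ("deadline", "deliverable", "milestone"),
        "tasks":    ("todo", "remember to", "buy"),
    }

    def bouncer(note: str) -> bool:
        """Reject junk before it pollutes the store (the anti-junk-drawer filter)."""
        return len(note.strip()) > 3

    def classify(note: str) -> str:
        """Crude keyword classifier; a real system would call an LLM here."""
        lowered = note.lower()
        for bucket, keywords in BUCKETS.items():
            if any(kw in lowered for kw in keywords):
                return bucket
        return "ideas"  # default bucket for everything else

    store: dict[str, list[str]] = {b: [] for b in (*BUCKETS, "ideas")}

    for raw in ["Remember to send the deck", "call with Priya re: hiring", "zz"]:
        if bouncer(raw):
            store[classify(raw)].append(raw)  # routed without user intervention

    print(store)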

Who is Nate B. Jones?

Nate B. Jones is a prominent voice in AI strategy and productivity, running the YouTube channel AI News & Strategy Daily with over 122,000 subscribers.1 He produces content on leveraging AI for career enhancement, building no-code apps, and creating personal knowledge systems.4,5 Jones shares practical guides, such as his Bridge the Implementation Gap: Build Your AI Second Brain, which outlines step-by-step setups using tools like Notion, Obsidian, and Mem.3

His work targets knowledge workers and teams, addressing pitfalls like perfectionism and tool overload.3 In another video, How I Built a Second Brain with AI (The 4 Meta-Skills), he demonstrates offloading cognitive load through AI-driven reflection, identity debugging, and frameworks that enable clearer thinking and execution.2 Jones exemplifies rapid AI application, such as building a professional-looking travel app in ChatGPT in 25 minutes without code.4 His philosophy: AI second brains create compounding assets that reduce information chaos, boost decision-making, and free humans for deep work.3

Backstory of 'Second Brains'

The concept of a second brain builds on decades of personal knowledge management (PKM). It gained traction with Tiago Forte, whose 2022 book Building a Second Brain popularised the CODE framework: Capture, Organise, Distil, Express. Forte's system emphasises turning notes into actionable insights, but relies heavily on user-driven organisation, which is prone to failure because it forces taxonomy decisions at capture time.1

Pre-AI tools like Evernote and Roam Research introduced linking and search, yet still demanded active sorting.3 Jones evolves this into AI-native systems, where machine learning handles the heavy lifting: classifiers decide buckets, summarisers extract essence, and nudges surface relevance.1,3 This aligns with 2026's projected AI maturity, making frictionless capture (under 5 seconds) viable and consistent.1

Leading Theorists in AI-Augmented Cognition

  • Tiago Forte: Pioneer of modern second brains. His PARA method (Projects, Areas, Resources, Archives) structures knowledge for action. Forte stresses 'progressive summarisation' to distil notes, influencing AI adaptations like Jones's sorters and extractors.3
  • Andy Matuschak: Creator of 'evergreen notes'. Advocates spaced repetition and networked thought, arguing brains excel at pattern-matching, not rote storage, a view echoed in Jones's anti-junk-drawer bouncers.1
  • Nick Milo: Obsidian evangelist, promotes 'linking your thinking' via bi-directional links. His work prefigures AI surfacing of connections across notes.3
  • David Allen: Founder of GTD (Getting Things Done). Introduced universal capture to drive cognitive load to zero, but his system is manual; AI second brains automate his 'next actions' routing.1
  • Herbert Simon: Nobel laureate economist known for bounded rationality, who coined 'satisficing'. His ideas underpin why AI classifiers beat human taxonomy, freeing mental bandwidth.1

These theorists converge on offloading storage to amplify thinking. Jones synthesises their insights with AI, creating systems that not only store but work-classifying, nudging, and evolving autonomously.1,2,3

References

1. https://www.youtube.com/watch?v=0TpON5T-Sw4

2. https://www.youtube.com/watch?v=0k6IznDODPA

3. https://www.natebjones.com/prompts-and-guides/products/second-brain

4. https://natesnewsletter.substack.com/p/i-built-a-10k-looking-ai-app-in-chatgpt

5. https://www.youtube.com/watch?v=UhyxDdHuM0A

"For the first time in human history, we have access to systems that do not just passively store information, but actively work against that information we give it while we sleep and do other things—systems that can classify, route, summarize, surface, or nudge." - Quote: Nate B. Jones

‌

‌

Quote: Ashwini Vaishnaw - Minister of Electronics and IT, India

"ROI doesn't come from creating a very large model; 95% of work can happen with models of 20 or 50 billion parameters." - Ashwini Vaishnaw - Minister of Electronics and IT, India

Delivered at the World Economic Forum (WEF) in Davos 2026, this statement by Ashwini Vaishnaw, India's Minister of Electronics and Information Technology, encapsulates a pragmatic approach to artificial intelligence deployment amid global discussions on technology sovereignty and economic impact1,2. Speaking under the theme 'A Spirit of Dialogue' from 19 to 23 January 2026, Vaishnaw positioned India not merely as a consumer of foreign AI but as a co-creator, emphasising efficiency over scale in model development1. The quote emerged during his rebuttal to IMF Managing Director Kristalina Georgieva's characterisation of India as a 'second-tier' AI power, with Vaishnaw citing Stanford University's AI Index to affirm India's third-place ranking in AI preparedness and second in AI talent2.

Ashwini Vaishnaw: Architect of India's Digital Ambition

Ashwini Vaishnaw, an engineer by training and an IAS officer of the 1994 batch (Odisha cadre), has risen to become a pivotal figure in India's technological transformation1. Appointed Minister of Electronics and Information Technology in 2021, alongside portfolios in Railways, Communications, and Information & Broadcasting, Vaishnaw has spearheaded initiatives like the India Semiconductor Mission and the push for sovereign AI1. His tenure has attracted major investments, including Google's $15 billion gigawatt-scale AI data centre in Visakhapatnam and partnerships with Meta on AI safety and IBM on advanced chip technology (7nm and 2nm nodes)1. At Davos 2026, he outlined India's appeal as a 'bright spot' for global investors, citing stable democracy, policy continuity, and projected 6-8% real GDP growth1. Vaishnaw's vision extends to hosting the India AI Impact Summit in New Delhi on 19-20 February 2026, showcasing a 'People-Planet-Progress' framework for AI safety and global standards1,3.

Context: India's Five-Layer Sovereign AI Stack

Vaishnaw framed the quote within India's comprehensive 'Sovereign AI Stack', a methodical strategy across five layers to achieve technological independence within a year1,2,4. This includes:

  • Application Layer: Real-world deployments in agriculture, health, governance, and enterprise services, where India aims to be the world's largest supplier2,4.
  • Model Layer: A 'bouquet' of domestic models with 20-50 billion parameters, sufficient for 95% of use cases, prioritising diffusion, productivity, and ROI over gigantic foundational models1,2.
  • Semiconductor Layer: Indigenous design and manufacturing targeting 2nm nodes1.
  • Infrastructure Layer: A national compute pool of around 38,000 GPUs and gigawatt-scale data centres powered by clean energy and Small Modular Reactors (SMRs)1.
  • Energy Layer: Sustainable power solutions to fuel AI growth2.

This approach counters the resource-intensive race for trillion-parameter models, focusing on widespread adoption in emerging markets like India, where efficiency drives economic returns2,5.

Leading Theorists on Small Language Models and AI Efficiency

The emphasis on smaller models aligns with pioneering research challenging the 'scale-is-all-you-need' paradigm. Andrej Karpathy, former OpenAI and Tesla AI director, has argued that compact, well-trained models in the 1-10 billion parameter range can deliver high ROI for targeted tasks1,2. Noam Shazeer, co-founder of Character.AI and a co-author of Google's Transformer architecture, pioneered mixture-of-experts designs that deliver strong quality at a fraction of the compute, while DeepMind's Chinchilla work showed with a 70-billion-parameter model that optimal compute allocation outperforms sheer size, influencing efficient scaling laws1. Tim Dettmers, the researcher behind the bitsandbytes quantisation library, quantified how quantisation enables 4-bit inference on 70B models with minimal performance loss, democratising access for resource-constrained environments2.

Further, Jared Kaplan and collaborators' 'Scaling Laws for Neural Language Models' (2020) mapped the predictable trade-offs between model size, data and compute, informing the case for well-trained 20-50B models1. In industry, Meta's Llama series (7B-70B) shows dense models at modest scale competing strongly, while Mistral AI's Mixtral 8x7B (around 46 billion total parameters, with roughly 13 billion active per token) exemplifies mixture-of-experts (MoE) architectures achieving near-frontier performance at lower cost, as validated in benchmarks like MMLU2. These theorists underscore Vaishnaw's point: true power lies in diffusion and application, not model magnitude, particularly for emerging markets pursuing technology strategy5.
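
The mixture-of-experts arithmetic behind that total-versus-active distinction can be sketched roughly; the shared and per-expert figures below are assumptions chosen to land near Mixtral's published totals, not an official breakdown.

    # Rough mixture-of-experts (MoE) parameter arithmetic (illustrative).
    # Each layer has n experts but routes every token to only k of them,
    # so compute scales with active, not total, parameters.

    shared_params = 1.3e9   # assumed: attention, embeddings, etc. shared by all
    expert_params = 5.7e9   # assumed: feed-forward parameters per expert, all layers
    n_experts, k_active = 8, 2

    total = shared_params + n_experts * expert_params   # ~46.9e9
    active = shared_params + k_active * expert_params   # ~12.7e9

    print(f"total = {total / 1e9:.1f}B, active per token = {active / 1e9:.1f}B")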

Vaishnaw's insight at Davos 2026 thus resonates globally, signalling a shift towards sustainable, ROI-focused AI that empowers nations like India to lead through strategic efficiency rather than brute scale1,2.

References

1. https://economictimes.com/news/india/ashwini-vaishnaw-at-davos-2026-5-key-takeaways-highlighting-indias-semiconductor-pitch-and-roadmap-to-ai-sovereignty-at-wef/ashwini-vaishnaw-at-davos-2026-indias-tech-ai-vision-on-global-stage/slideshow/127145496.cms

2. https://timesofindia.indiatimes.com/business/india-business/its-actually-in-the-first-ashwini-vaishnaws-strong-take-on-imf-chief-calling-india-second-tier-ai-power-heres-why/articleshow/126944177.cms

3. https://www.youtube.com/watch?v=3S04vbuukmE

4. https://www.youtube.com/watch?v=VNGmVGzr4RA

5. https://www.weforum.org/stories/2026/01/live-from-davos-2026-what-to-know-on-day-2/

"ROI doesn't come from creating a very large model; 95% of work can happen with models of 20 or 50 billion parameters." - Quote: Ashwini Vaishnaw - Minister of Electronics and IT, India

‌

‌

Term: Mercantilism

"Mercantilism is an economic theory and policy from the 16th-18th centuries where governments heavily regulated trade to build national wealth and power by maximizing exports, minimizing imports, and accumulating precious metals like gold and silver." - Mercantilism

Mercantilism is an early modern economic theory and statecraft practice (c. 16th–18th centuries) in which governments heavily regulate trade and production to increase national wealth and power by maximising exports, minimising imports, and accumulating bullion (gold and silver).3,4,2


Comprehensive definition

Mercantilism is an economic doctrine and policy regime that treats wealth as finite and international trade as a zero-sum game, so that one state’s gain is understood to be another’s loss.3,6 Under this view, the purpose of economic activity is not individual welfare but the augmentation of state power, especially in competition with rival nations.3,6

Core features include:

  • Bullionism and wealth accumulation
    Wealth is measured primarily by a country’s stock of precious metals, especially gold and silver, often called bullion.3,1,2 If a nation lacks mines, it is expected to obtain bullion through a “favourable” balance of trade, i.e. persistent export surpluses.3,2
  • Favourable balance of trade
    Governments strive to ensure exports exceed imports so that foreign buyers pay the difference in bullion.3,2,4 A favourable balance of trade is engineered via:
      - High tariffs and quotas on imports
      - Export promotion (subsidies, privileges)
      - Restrictions or bans on foreign manufactured goods2,4,5
  • Strong, interventionist state
    Mercantilism assumes an active government role in regulating the economy to serve national objectives.3,4,5 Typical interventions include:
      - Granting monopolies and charters to favoured firms or trading companies (e.g. British East India Company)4
      - Regulating wages, prices, and production
      - Directing capital to strategic sectors (ships, armaments, textiles)2,5
      - Enforcing navigation acts to reserve shipping for national fleets
  • Colonialism and economic nationalism
    Mercantilism is closely tied to the rise of nation-states and overseas empires.2,4,3 Colonies are designed to:
      - Supply raw materials cheaply to the “mother country”
      - Provide captive markets for its manufactured exports
      - Be forbidden from developing competing manufacturing industries
    All trade between colony and metropole is typically reserved as a monopoly of the mother country.3,4
  • Population, labour and social discipline
    A large population is considered essential to provide soldiers, sailors, workers and domestic consumers.3 Mercantilist states often:
      - Promote thrift and saving as virtues
      - Pass sumptuary laws limiting luxury imports, to avoid bullion outflows and keep labour disciplined3
      - Favour policies that keep wages relatively low to preserve competitiveness and employment in export industries4
  • Winners and losers
    The system tends to privilege merchants, merchant companies and the state over consumers and small producers.4 High protection raises domestic prices and lowers variety, but increases profits and state revenues through customs duties and controlled markets.2,5

As an overarching logic, mercantilism can be summarised as “economic nationalism for the purpose of building a wealthy and powerful state”.6


Mercantilism in historical context

  • Origins and dominance
    Mercantilist ideas emerged as feudalism declined and nation-states formed in early modern Europe, notably in England, France, Spain, Portugal and the Dutch Republic.1,2,4 They dominated Western European economic thinking and policy from the 16th century to the late 18th century.3,6
  • Practice rather than explicit theory
    Proponents such as Thomas Mun (England), Jean-Baptiste Colbert (France) and Antonio Serra (Italy) did not use the word “mercantilism”.3 They wrote about trade, money and statecraft; Adam Smith popularised the label “mercantile system” in 1776, and the term “mercantilism” gained currency later.3,4,6
  • Institutional expression
    Mercantilist policy underpinned:
      - The Navigation Acts and the rise of British sea power
      - French Colbertist industrial policy (textiles, shipbuilding, arsenals)
      - Spanish and Portuguese bullion-based imperial systems
      - Chartered companies such as the British East India Company, which fused commerce, governance and military force under state-backed monopolies4
  • Transition to capitalism and free-trade thought
    Mercantilism created conditions for early capitalism by encouraging capital accumulation, long-distance trade networks and early industrial development.3 But it also prompted a sustained intellectual backlash, most famously from Adam Smith and later classical economists, who argued that:
      - Wealth is not finite and can be expanded through productivity and specialisation
      - Free trade and comparative advantage can benefit all countries, rather than being zero-sum2,4

Critiques and legacy

Classical and later economists criticised mercantilism for:

  • Confusing money (bullion) with real wealth (productive capacity, labour, technology)2
  • Undermining consumer welfare through high prices and limited choice caused by import restrictions and monopolies2,5
  • Fostering rent-seeking alliances between state and merchant elites at the expense of the general public4,6

Although mercantilism is usually considered a superseded doctrine, many contemporary protectionist or “neo-mercantilist” policies—such as aggressive export promotion, managed exchange rates, and strategic trade restrictions—are often described as mercantilist in spirit.2,5


The key strategy theorist: Adam Smith and his relationship to mercantilism

The most important strategic thinker associated with mercantilism—precisely because he dismantled it and re-framed strategy—is Adam Smith (1723–1790), the Scottish moral philosopher and political economist often called the founder of modern economics.2,3,4,6

Although Smith was not a mercantilist, his work provides the definitive critique and strategic re-orientation away from mercantilism, and he is the thinker who named and systematised the concept.

Smith’s engagement with mercantilism

  • In An Inquiry into the Nature and Causes of the Wealth of Nations, Smith repeatedly refers to the existing policy regime as the “mercantile system” and subjects it to a detailed historical and analytical critique.3,4,6
  • He argues that:
      - National wealth lies in the productive powers of labour and capital, not in the mere accumulation of gold and silver.2,6
      - Free exchange and competition, not monopolies and trade restraints, are the most reliable mechanisms for increasing overall prosperity.
      - International trade can be mutually beneficial, rejecting the zero-sum assumption central to mercantilism.2,4
  • Smith maintains that mercantilism benefits a narrow coalition of merchants and manufacturers, who use state power (tariffs, monopolies, trading charters) to secure rents at the expense of the wider population.4,6

In strategic terms, Smith redefined economic statecraft: instead of seeking power through hoarding bullion and favouring particular firms, he proposed that long-run national strength is best served by efficient markets, specialisation and limited government interference.

Biographical sketch and intellectual formation

  • Early life and education
    Adam Smith was born in Kirkcaldy, Scotland, in 1723.3 He studied at the University of Glasgow, where he encountered the Scottish Enlightenment’s emphasis on reason, moral philosophy and political economy, and later at Balliol College, Oxford.3,6
  • Academic and public roles
    He became Professor of Logic and later Moral Philosophy at the University of Glasgow, lecturing on ethics, jurisprudence, and political economy.6 His first major work, The Theory of Moral Sentiments, explored sympathy, virtue and the moral foundations of social order.
  • European travels and observation of mercantilist systems
    From 1764 to 1766, Smith travelled in France and Switzerland as tutor to the Duke of Buccleuch, meeting leading physiocrats and observing French administrative and mercantilist practices first-hand.6 These experiences sharpened his critique of existing systems and influenced his articulation of freer trade and limited government.
  • The Wealth of Nations and its impact
    Published in 1776, The Wealth of Nations systematically:
      - Dissects mercantilist doctrines and practices across Britain and Europe
      - Explains the division of labour, market coordination and the role of self-interest under appropriate institutional frameworks
      - Sets out a strategic blueprint for economic policy based on “natural liberty”, moderate taxation, minimal trade barriers and competitive markets2,4,6

Smith died in 1790 in Edinburgh, but his analysis of mercantilism reshaped both economic theory and state strategy. Governments gradually moved—unevenly and often incompletely—from mercantilist controls toward liberal, market-oriented trade regimes, making Smith the key intellectual bridge between mercantilist economic nationalism and modern strategic thinking about trade, growth and state power.

 

References

1. https://legal-resources.uslegalforms.com/m/mercantilism

2. https://corporatefinanceinstitute.com/resources/economics/mercantilism/

3. https://www.britannica.com/money/mercantilism

4. https://www.ebsco.com/research-starters/diplomacy-and-international-relations/mercantilism

5. https://www.economicshelp.org/blog/17553/trade/mercantilism-theory-and-examples/

6. https://www.econlib.org/library/Enc/Mercantilism.html

7. https://dictionary.cambridge.org/us/dictionary/english/mercantilism

 

"Mercantilism is an economic theory and policy from the 16th-18th centuries where governments heavily regulated trade to build national wealth and power by maximizing exports, minimizing imports, and accumulating precious metals like gold and silver." - Term: Mercantilism

‌

‌

Quote: J.P. Morgan - On resources

"We believe the clean technology transition is igniting a new supercycle in critical commodities, with natural resource companies emerging as winners." - J.P. Morgan - On resources

When J.P. Morgan Asset Management framed the clean technology transition in these terms, it captured a profound shift underway at the intersection of climate policy, industrial strategy and global capital allocation.1,5 The quote stands at the heart of their analysis of how decarbonisation is reshaping demand for metals, minerals and energy, and why this is likely to support elevated commodity prices for years rather than months.1

The immediate context is the rapid acceleration of the energy transition. Governments have committed to net zero pathways, corporates face growing regulatory and investor pressure to decarbonise, and consumers are adopting electric vehicles and clean technologies at scale. J.P. Morgan argues that this is not merely an environmental story, but an economic retooling comparable in scale to previous industrial revolutions.1,4

Their research highlights two linked dynamics. First, the decarbonised economy is less fuel-intensive but far more materials-intensive. Replacing fossil fuel power with renewables requires vast quantities of copper, aluminium, nickel, lithium, cobalt, manganese and graphite to build solar and wind farms, grids and storage systems.1 Second, the speed of this transition matters as much as its direction. Even under conservative scenarios, J.P. Morgan estimates substantial increases in demand for critical minerals by 2030; under more ambitious net zero pathways, demand could rise by around 110% over that period, on top of the 50% increase already seen in the previous decade.1
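
Compounding those two figures gives a sense of scale. The back-of-envelope check below is one reading of the cited numbers, not J.P. Morgan's own calculation:

    # Compounding the cited demand figures (back-of-envelope reading).
    last_decade_growth = 0.50  # +50% already seen over the previous decade
    net_zero_growth = 1.10     # up to +110% further by 2030 under net zero pathways

    multiple_vs_decade_ago = (1 + last_decade_growth) * (1 + net_zero_growth)
    print(f"{multiple_vs_decade_ago:.2f}x")  # ~3.15x demand versus a decade ago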

In this framing, natural resource companies - particularly miners and producers of critical minerals - shift from being perceived purely as part of the old carbon-heavy economy to being central enablers of clean technologies. J.P. Morgan points out that while fossil fuel demand will decline over time, the scale of required investment in metals and minerals, as well as transmission infrastructure, effectively re-ranks many resource businesses as strategic assets for the low-carbon future.1 Valuations that once reflected cyclical, late-stage industries may therefore underestimate the structural demand embedded in net zero commitments.

The quote also reflects J.P. Morgan's broader thinking on commodity and energy supercycles. Their research on energy markets describes a supercycle as a sustained period of elevated prices driven by structural forces that can last for a decade or more.3,4 In previous eras, those forces included post-war reconstruction and the rise of China as the world's industrial powerhouse. Today, they see the combination of chronic underinvestment in supply, intensifying climate policy, and rising demand for both traditional and clean energy as setting the stage for a new, complex supercycle.2,3,4

Within the firm, analysts have argued that higher-for-longer interest rates raise the cost of debt and equity for energy producers, reinforcing supply discipline and pushing up the marginal cost of production.3 At the same time, the rapid build-out of renewables is constrained by supply chain, infrastructure and key materials bottlenecks, meaning that legacy fuels still play a significant role even as capital increasingly flows towards clean technologies.3 This dual dynamic - structural demand for critical minerals on the one hand and a constrained, more disciplined fossil fuel sector on the other - underpins the conviction that a supercycle is forming across parts of the commodity complex.

The idea of commodity supercycles predates the current climate transition and has been shaped by several generations of theorists and empirical researchers. In the mid-20th century, economists such as Raúl Prebisch and Hans Singer first highlighted the long-term terms-of-trade challenges faced by commodity exporters, noting that prices for primary products tended to fall relative to manufactured goods over time. Their work prompted an early focus on structural forces in commodity markets, although it emphasised long-run decline rather than extended booms.

Later, analysts began to examine multi-decade patterns of rising and falling prices. Structural models of commodity prices observed that at major stages of economic development - such as the agricultural and industrial revolutions - commodity intensity tends to increase markedly, creating conditions for supercycles.4 These models distinguish between business cycles of a few years, investment cycles spanning roughly a decade, and longer supercycle components that can extend beyond 20 years.4 The supercycle lens gained prominence as researchers studied the commodity surge associated with China's breakneck urbanisation and industrialisation from the late 1990s to the late 2000s.

That China-driven episode became the archetype of a modern commodity supercycle: a powerful, sustained demand shock focused on energy, metals and bulk materials, amplified by long supply lead times and capital expenditure cycles. J.P. Morgan and other institutions have documented how this supercycle drove a 12-year uptrend in prices, culminating before the global financial crisis, followed by a comparably long down-cycle as supply eventually caught up and Chinese growth shifted to a less resource-intensive model.2,4

Academic and market theorists have since refined the concept. They argue that supercycles emerge when three elements coincide. First, there must be a structural, synchronised increase in demand, often tied to a global development episode or technological shift. Second, supply in key commodities must be constrained by geology, capital discipline, regulation or long project lead times. Third, macro-financial conditions - including real interest rates, inflation expectations and currency trends - must align to support investment flows into real assets. The question for today's transition is whether decarbonisation meets these criteria.

On the demand side, the clean tech revolution clearly resembles previous development stages in its resource intensity. J.P. Morgan notes that electric vehicles require significantly more minerals than internal combustion engine cars - roughly six times as much in aggregate when accounting for lithium, nickel, cobalt, manganese and graphite.1 Similarly, building solar and wind capacity, and the vast grid infrastructure to connect them, calls for much more copper and aluminium per unit of capacity than conventional power systems.1 The International Energy Agency's projections, which J.P. Morgan draws on, indicate that even under modest policy assumptions, renewable electricity capacity is set to increase by around 50% by 2030, with more ambitious net zero scenarios implying far steeper growth.1

Supply, however, has been shaped by a decade of caution. After the last supercycle ended, many mining and energy companies cut back capital expenditure, streamlined balance sheets and prioritised shareholder returns. Regulatory processes for new mines lengthened, environmental permitting became more stringent, and social expectations around land use and community impacts increased. The result is that bringing new supplies of copper, nickel or lithium online can take many years and substantial capital, creating a lag between price signals and physical supply.

Theorists of the investment cycle - often identified with work on 8 to 20-year intermediate commodity cycles - argue that such periods of underinvestment sow the seeds for the next up-cycle.4 When demand resurges due to a structural driver, constrained supply leads to persistent price pressures until investment, technology and substitution can rebalance the market. In the case of the energy transition, the requirement for large amounts of specific minerals, combined with concentrated supply in a small number of countries, intensifies this effect and introduces geopolitical considerations.

Another important strand of thought concerns the evolution of energy systems themselves. Analysts focusing on energy supercycles emphasise that transitions historically unfold over multiple decades and rarely proceed smoothly.3,4 Even as clean energy capacity expands rapidly, global energy demand continues to grow, and existing systems must meet rising consumption while new infrastructure is built. J.P. Morgan's energy research describes this as a multi-decade process of "generating and distributing the joules" required to both satisfy demand and progressively decarbonise.3 During this period, traditional energy sources often remain critical, creating complex price dynamics across oil, gas, coal and renewables-linked commodities.

Within this broader theoretical frame, the clean technology transition can be seen as a distinctive supercycle candidate. Unlike the China wave, which centred on industrialisation and urbanisation within one country, the net zero agenda is globally coordinated and policy-driven. It spans power generation, transport, buildings, industry and agriculture, and requires both new physical assets and digital infrastructure. Structural models referenced by J.P. Morgan note that such system-wide investment programmes have historically been associated with sustained periods of elevated commodity intensity.4

At the same time, there is active debate among economists and market strategists about the durability and breadth of any new supercycle. Some caution that efficiency gains, recycling and substitution could cap demand growth in certain minerals over time. Others point to innovation in battery chemistries, alternative materials and manufacturing methods that may reduce reliance on some critical inputs. Still others argue that policy uncertainty and potential fragmentation in global trade could disrupt smooth investment and demand trajectories. Theorists of supercycles emphasise that these are not immutable laws but emergent patterns that can be shaped by technology, politics and finance.

J.P. Morgan's perspective in the quoted insight acknowledges these uncertainties while underscoring the asymmetry in the coming decade. Even in conservative scenarios, their work suggests that demand for critical minerals rises substantially relative to recent history.1 Under more ambitious climate policies, the increase is far greater, and tightness in markets such as copper, nickel, cobalt and lithium appears likely, especially towards the end of the 2020s.1 Against this backdrop, natural resource companies with high-quality assets, disciplined capital allocation and credible sustainability strategies are positioned not as relics of the past, but as essential partners in delivering the energy transition.

This reframing has important implications for investors and corporates alike. For investors, it suggests that the traditional division between "old" resource-heavy industries and "new" clean tech sectors is too simplistic. The hardware of decarbonisation - from EV batteries and charging networks to grid-scale storage, wind turbines and solar farms - depends on a complex upstream ecosystem of miners, processors and materials specialists. For corporates, it highlights the strategic premium on securing access to critical inputs, managing long-term supply contracts, and integrating sustainability into resource development.

The quote from J.P. Morgan thus sits at the confluence of three intellectual streams: long-run theories of commodity supercycles, modern analysis of energy transition dynamics, and evolving views of how natural resource businesses fit into a low-carbon world. It encapsulates the idea that the path to net zero is not dematerialised; instead, it is anchored in physical assets, industrial capabilities and supply chains that must be financed, built and operated over many years. For those able to navigate this terrain - and for the theorists tracing its contours - the clean technology transition is not only an environmental imperative but also a defining economic narrative of the coming decades.

References

1. https://am.jpmorgan.com/hk/en/asset-management/adv/insights/market-insights/market-bulletins/clean-energy-investment/

2. https://www.foxbusiness.com/markets/biden-climate-change-fight-commodities-supercycle

3. https://www.jpmorgan.com/insights/global-research/commodities/energy-supercycle

4. https://www.jpmcc-gcard.com/digest-uploads/2021-summer/Page%2074_79%20GCARD%20Summer%202021%20Jerrett%20042021.pdf

5. https://am.jpmorgan.com/us/en/asset-management/institutional/card-list-libraries/sustainable-insights-climate-tab-us/

6. https://www.jpmorgan.com/insights/global-research/outlook/market-outlook

7. https://www.bscapitalmarkets.com/hungry-for-commodities-ndash-is-a-new-commodity-super-cycle-here.html

"We believe the clean technology transition is igniting a new supercycle in critical commodities, with natural resource companies emerging as winners." - Quote: J.P. Morgan

‌

‌
© 2026 Global Advisors | Quantified Strategy Consulting, All rights reserved.
‌
‌