Global Advisors

A daily bite-size selection of top business content.

PM edition. Issue number 1217


Quote: Andrew Ng - AI guru, Coursera founder

"Someone that knows how to use AI will replace someone that doesn't, even if AI itself won't replace a person. So getting through the hype to give people the skills they need is critical." - Andrew Ng - AI guru, Coursera founder

The distinction Andrew Ng draws between AI replacing jobs and AI-capable workers replacing their peers represents a fundamental reorientation in how we should understand technological disruption. Rather than framing artificial intelligence as an existential threat to employment, Ng's observation, articulated at the World Economic Forum in January 2026, points to a more granular reality: the competitive advantage lies not in the technology itself, but in human mastery of it.

The Context of the Statement

Ng made these remarks during a period of intense speculation about AI's labour market impact. Throughout 2025 and into early 2026, technology companies announced significant workforce reductions, and public discourse oscillated between utopian and apocalyptic narratives about automation. Yet Ng's position, grounded in his extensive experience building AI systems and training professionals, cuts through this polarisation with empirical observation.

Speaking at Davos on 19 January 2026, Ng emphasised that "for many jobs, AI can only do 30-40 per cent of the work now and for the foreseeable future." This technical reality underpins his broader argument: the challenge is not mass technological unemployment, but rather a widening productivity gap between those who develop AI competency and those who do not. The implication is stark: in a world where AI augments rather than replaces human labour, the person wielding these tools becomes far more valuable than the person without them.

Understanding the Talent Shortage

The urgency behind Ng's call for skills development is rooted in concrete market dynamics. According to research cited by Ng, demand for AI skills has grown approximately 21 per cent annually since 2019. More dramatically, AI jumped from the 6th most scarce technology skill globally to the 1st in just 18 months. Fifty-one per cent of technology leaders report struggling to find candidates with adequate AI capabilities.

This shortage exists not because AI expertise is inherently rare, but because structured pathways to acquiring it remain underdeveloped. Ng has observed developers reinventing foundational techniques that already exist in the literature, such as retrieval-augmented generation (RAG) document chunking or agentic AI evaluation methods. These individuals spend weeks on problems that could be solved in days with proper foundational knowledge. The inefficiency is not a failure of intelligence but of education.

The Architecture of Ng's Approach

Ng's prescription comprises three interconnected elements: structured learning, practical application, and engagement with research literature. Each addresses a specific gap in how professionals currently approach AI development.

Structured learning provides the conceptual scaffolding necessary to avoid reinventing existing solutions. Ng argues that taking relevant courses (whether through Coursera, his own DeepLearning.AI platform, or other institutions) establishes a foundation in proven approaches and common pitfalls. This is not about shortcuts; rather, it is about building mental models that allow practitioners to make informed decisions about when to adopt existing solutions and when innovation is genuinely warranted.

Hands-on practice translates theory into capability. Ng uses the analogy of aviation: studying aerodynamics for years does not make one a pilot. Similarly, understanding AI principles requires experimentation with actual systems. Modern AI tools and frameworks lower the barrier to entry, allowing practitioners to build projects without starting from scratch. The combination of coursework and building creates a feedback loop where gaps in understanding become apparent through practical challenges.

Engagement with research provides early signals about emerging standards and techniques. Reading academic papers is demanding and less immediately gratifying than building applications, yet it offers a competitive advantage by exposing practitioners to innovations before they become mainstream.

The Broader Theoretical Context

Ng's perspective aligns with and extends classical economic theories of technological adoption and labour market dynamics. The concept of "skill-biased technological change", the idea that new technologies increase the relative demand for skilled workers, has been central to labour economics since the 1990s. Economists including David Autor and Frank Levy have documented how computerisation did not eliminate jobs wholesale but rather restructured labour markets, creating premium opportunities for those who could work effectively with new tools whilst displacing those who could not.

What distinguishes Ng's analysis is its specificity to AI and its emphasis on the speed of adaptation required. Previous technological transitions, from mechanisation to computerisation, unfolded over decades, allowing gradual workforce adjustment. AI adoption is compressing this timeline significantly. The productivity gap Ng identifies is not merely a temporary friction but a structural feature of labour markets in the near term, creating urgent incentives for rapid upskilling.

Ng's work also reflects insights from organisational learning theory, particularly the distinction between individual capability and organisational capacity. Companies can acquire AI tools readily; what remains scarce is the human expertise to deploy them effectively. This scarcity is not permanent (it reflects a lag between technological availability and educational infrastructure), but it creates a window of opportunity for those who invest in capability development now.

The Nuance on Job Displacement

Importantly, Ng does not claim that AI poses no labour market risks. He acknowledges that certain roles (contact centre positions, translation work, voice acting) face sharper disruption because AI can perform a higher percentage of the requisite tasks. However, he contextualises these as minority cases rather than harbingers of economy-wide displacement.

His framing rejects both technological determinism and complacency. AI will not automatically eliminate most jobs, but neither will workers remain unaffected if they fail to adapt. The outcome depends on human agency: specifically, on whether individuals and institutions invest in building the skills necessary to work alongside AI systems.

Implications for Professional Development

The practical consequence of Ng's analysis is straightforward: professional development in AI is no longer optional for knowledge workers. The competitive dynamic he describes, where AI-capable workers become more productive and thus more valuable, creates a self-reinforcing cycle. Early adopters of AI skills gain productivity advantages, which translate into career advancement and higher compensation, which in turn incentivises further investment in capability development.

This dynamic also has implications for organisational strategy. Companies that invest in systematic training programmes for their workforce, ensuring broad-based AI literacy rather than concentrating expertise in specialist teams, position themselves to capture productivity gains more rapidly and broadly than competitors relying on external hiring alone.

The Hype-Reality Gap

Ng's emphasis on "getting through the hype" addresses a specific problem in contemporary AI discourse. Public narratives about AI tend toward extremes: either utopian visions of abundance or dystopian scenarios of mass unemployment. Both narratives, in Ng's view, obscure the practical reality that AI is a tool requiring human expertise to deploy effectively.

The hype creates two problems. First, it generates unrealistic expectations about what AI can accomplish autonomously, leading organisations to underinvest in the human expertise necessary to realise AI's potential. Second, it creates anxiety that discourages people from engaging with AI development, paradoxically worsening the talent shortage Ng identifies.

By reframing the challenge as fundamentally one of skills and adaptation rather than technological inevitability, Ng provides both a more accurate assessment and a more actionable roadmap. The future is not predetermined by AI's capabilities; it will be shaped by how quickly and effectively humans develop the competencies to work with these systems.

References

1. https://www.finalroundai.com/blog/andrew-ng-ai-tips-2026

2. https://www.moneycontrol.com/artificial-intelligence/davos-2026-andrew-ng-says-ai-driven-job-losses-have-been-overstated-article-13779267.html

3. https://www.storyboard18.com/brand-makers/davos-2026-andrew-ng-says-fears-of-ai-driven-job-losses-are-exaggerated-87874.htm

4. https://m.umu.com/ask/a11122301573853762262


Term: Jevons paradox

"Jevons paradox is an economic theory that states that as technological efficiency in using a resource increases, the total consumption of that resource also increases, rather than decreasing. Efficiency gains make the resource cheaper and more accessible, which in turn stimulates higher demand and new uses." - Jevons paradox

Definition

The Jevons paradox is an economic theory stating that as technological efficiency in using a resource increases, the total consumption of that resource also increases rather than decreasing. Efficiency gains make the resource cheaper and more accessible, which stimulates higher demand and enables new uses, ultimately offsetting the conservation benefits of the initial efficiency improvement.

Core Mechanism: The Rebound Effect

The paradox operates through what economists call the rebound effect. When efficiency improvements reduce the cost of using a resource, consumers and businesses find it more economically attractive to use that resource more intensively. This increased affordability creates a feedback loop: lower costs lead to expanded consumption, which can completely negate or exceed the original efficiency gains.

The rebound effect exists on a spectrum. A rebound effect between 0 and 100 percent (known as "take-back") means actual consumption is reduced, but not by as much as expected. When the rebound effect exceeds 100 percent, the Jevons paradox applies: efficiency gains cause overall consumption to increase in absolute terms.
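
As an illustrative sketch (my own, not drawn from the cited sources), the rebound effect can be expressed as the share of expected resource savings that fails to materialise:

```python
def rebound_effect(expected_savings, actual_savings):
    """Share of the expected resource savings 'taken back' by increased use.

    0 < rebound < 1  -> partial take-back (savings smaller than expected)
    rebound > 1      -> Jevons paradox (consumption rises in absolute terms)
    """
    return (expected_savings - actual_savings) / expected_savings

# An efficiency gain expected to save 5 units of fuel, but observed
# consumption falls by only 2 units: a 60 percent rebound.
take_back = rebound_effect(5.0, 2.0)

# Jevons paradox: consumption actually rises by 1 unit (savings of -1),
# so the rebound exceeds 100 percent.
paradox = rebound_effect(5.0, -1.0)
```

The example numbers mirror the fuel-efficiency illustration used later in this section; they are stylised, not empirical estimates.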

Historical Origins and William Stanley Jevons

The paradox is named after William Stanley Jevons (1835-1882), an English economist and logician who first identified this phenomenon in 1865. Jevons observed that as steam engine efficiency improved throughout the Industrial Revolution, Britain's total coal consumption increased rather than decreased. He recognised that more efficient steam engines made coal cheaper to use (both directly and indirectly, since more efficient engines could pump water from coal mines more economically), yet simultaneously made coal more valuable by enabling profitable new applications.

Jevons' insight was revolutionary: efficiency improvements paradoxically expanded the scale of coal extraction and consumption. As coal became cheaper, incomes rose across the coal-fired industrial economy, and profits were continuously reinvested to expand production further. This dynamic became the engine of industrial capitalism's growth.

Contemporary Examples

Energy and Lighting: Modern LED bulbs consume far less electricity than incandescent bulbs, yet overall lighting energy consumption has not decreased significantly. The reduced cost per light unit has prompted widespread installation of additional lights (in homes, outdoor spaces, and seasonal displays), extending usage hours and offsetting efficiency gains.

Transportation: Vehicles have become substantially more fuel-efficient, yet total fuel consumption continues to rise. When driving becomes cheaper, consumers can afford to drive faster, further, or more frequently than before. A 5 percent fuel efficiency gain might reduce consumption by only 2 percent, with the missing 3 percentage points attributable to increased driving behaviour.

Systemic Scale: Research from 2007 suggested the Jevons paradox likely exists across 18 European countries and applies not merely to isolated sectors but to entire economies. As efficiency improvements reduce production costs across multiple industries, economic growth accelerates, driving increased extraction and consumption of natural resources overall.

Factors Influencing the Rebound Effect

The magnitude of the rebound effect varies significantly based on market maturity and income levels. In developed countries with already-high resource consumption, efficiency improvements produce weaker rebound effects because consumers and businesses have less capacity to increase usage further. Conversely, in developing economies or emerging markets, the same efficiency gains may trigger stronger rebound effects as newly affordable resources enable expanded consumption patterns.

Income also influences the effect: higher-income populations exhibit weaker rebound effects because they already consume resources at near-saturation levels, whereas lower-income populations may dramatically increase consumption when efficiency makes resources more affordable.

The Paradox Beyond Energy

The Jevons paradox extends beyond energy and resources. The principle applies wherever efficiency improvements reduce costs and expand accessibility. Disease control advances, for instance, have enabled humans and livestock to live at higher densities, eventually creating conditions for more severe outbreaks. Similarly, technological progress in production systems (including those powering the gig economy) achieves higher operational efficiency, making exploitation of natural inputs cheaper and more manageable, yet paradoxically increasing total resource demand.

Implications for Sustainability

The Jevons paradox presents a fundamental challenge to conventional sustainability strategies that rely primarily on technological efficiency improvements. Whilst efficiency gains lower costs and enhance output, they simultaneously increase demand and overall resource consumption, potentially increasing pollution and environmental degradation rather than reducing it.

Addressing the paradox requires systemic approaches beyond efficiency alone. These include transitioning towards circular economies, promoting sharing and collaborative consumption models, implementing legal limits on resource extraction, and purposefully constraining economic scale. Some theorists argue that setting deliberate limits on resource use, rather than pursuing ever-greater efficiency, may be necessary to achieve genuine sustainability. As one perspective suggests: "Efficiency makes growth. But limits make creativity."

Contemporary Relevance

In the 21st century, as environmental pressures intensify and macroeconomic conditions suggest accelerating expansion rates, the Jevons paradox has become increasingly pronounced and consequential. The principle now applies to emerging technologies including artificial intelligence, where computational efficiency improvements may paradoxically increase overall energy demand and resource consumption as new applications become economically viable.

References

1. https://www.greenchoices.org/news/blog-posts/the-jevons-paradox-when-efficiency-leads-to-increased-consumption

2. https://www.resilience.org/stories/2020-06-17/jevons-paradox/

3. https://www.youtube.com/watch?v=MTfwhbfMnNc

4. https://lpcentre.com/articles/jevons-paradox-rethinking-sustainability

5. https://news.northeastern.edu/2025/02/07/jevons-paradox-ai-future/

6. https://adgefficiency.com/blog/jevons-paradox/


Quote: Fei-Fei Li - Godmother of AI

"Fearless is to be free. It's to get rid of the shackles that constrain your creativity, your courage, and your ability to just get s*t done." - Fei-Fei Li - Godmother of AI

Context of the Quote

This powerful statement captures Fei-Fei Li's philosophy on perseverance in research and innovation, particularly within artificial intelligence (AI). Spoken in a discussion on enduring hardship, Li emphasises how fearlessness liberates the mind in the realm of imagination and hypothesis-driven work. Unlike facing uncontrollable forces like nature, intellectual pursuits allow one to push boundaries without fatal constraints, fostering curiosity and bold experimentation1. The quote underscores her belief that true freedom in science comes from shedding self-imposed limitations to drive progress.

Backstory of Fei-Fei Li

Fei-Fei Li, often hailed as the 'Godmother of AI', is the inaugural Sequoia Professor of Computer Science at Stanford University and a founding co-director of the Stanford Institute for Human-Centered Artificial Intelligence. Her journey began in Chengdu, China, where she was born into a family disrupted by the Cultural Revolution. Her mother, an academic whose dreams were crushed by political turmoil, instilled rebellion and resilience. When Li was 16, her parents uprooted the family, leaving everything behind for America to offer their daughter better opportunities. Far from 'tiger parents', they encouraged independence amid poverty and cultural adjustment in New Jersey2.

Li excelled despite challenges, initially drawn to physics for its audacious questions, a passion honed at Princeton University. There, she learned to ask bold queries of nature, a mindset that pivoted her to AI. Her breakthrough came with ImageNet, a vast visual database that revived computer vision and catalysed deep learning revolutions, enabling systems to recognise images like humans. Today, she champions 'human-centred AI', stressing that people create, use, and must shape AI's societal impact4,5. Li seeks 'intellectual fearlessness' in collaborators-the courage to tackle hard problems fully6.

Leading Theorists in AI and Fearlessness

Li's ideas echo foundational AI thinkers who embodied fearless innovation:

  • Alan Turing: The father of theoretical computer science and AI, Turing proposed the 'Turing Test' in 1950, boldly envisioning machines mimicking human intelligence despite post-war skepticism. His universal machine concept laid AI's computational groundwork.
  • John McCarthy: Coined 'artificial intelligence' in 1956 at the Dartmouth Conference, igniting the field. Fearlessly, he pioneered Lisp programming and time-sharing systems, pushing practical AI amid funding winters.
  • Marvin Minsky: MIT's AI pioneer co-founded the field at Dartmouth. His 'Society of Mind' theory posited intelligence as emergent from simple agents, challenging monolithic brain models with audacious simplicity.
  • Geoffrey Hinton: The 'Godfather of Deep Learning', Hinton persisted through AI winters, proving neural networks viable. His backpropagation work and AlexNet contributions (built on Li's ImageNet) revived the field1.
  • Yann LeCun & Yoshua Bengio: With Hinton, these 'Godfathers of AI' advanced convolutional networks and sequence learning, fearlessly advocating deep learning when dismissed as implausible.

Li builds on these legacies, shifting focus to ethical, human-augmented AI. She critiques 'single genius' histories, crediting collaborative bravery-like her parents' and Princeton's influence1,4. In the AI age, her call to fearlessness urges scientists and entrepreneurs to embrace uncertainty for humanity's benefit3.

References

1. https://www.youtube.com/watch?v=KhnNgQoEY14

2. https://www.youtube.com/watch?v=z1g1kkA1M-8

3. https://mastersofscale.com/episode/how-to-be-fearless-in-the-ai-age/

4. https://tim.blog/2025/12/09/dr-fei-fei-li-the-godmother-of-ai/

5. https://www.youtube.com/watch?v=Ctjiatnd6Xk

6. https://www.youtube.com/shorts/hsHbSkpOu2A

7. https://www.youtube.com/shorts/qGLJeJ1xwLI


Term: Out-of-the-money option

"An out-of-the-money (OTM) option is an option contract that has no intrinsic value, meaning exercising it immediately would result in a loss, making it currently unprofitable but potentially profitable if the underlying asset's price moves favorably before expiration." - Out-of-the-money option

An out-of-the-money (OTM) option is an options contract that has no intrinsic value at the current underlying price. Exercising it immediately would generate no economic gain and, after transaction costs, would imply a loss, although the option may still be valuable because of the possibility that the underlying price moves favourably before expiry.1,3,5,6,7

Formal definition and moneyness

The moneyness of an option describes the relationship between the option's strike price and the current spot price of the underlying asset. An option can be:

  • In the money (ITM) - positive intrinsic value.
  • At the money (ATM) - spot price approximately equal to strike.
  • Out of the money (OTM) - zero intrinsic value.1,3,4,5,6

For a single underlying with spot price S and strike price K:

  • A call option is OTM when S < K. Exercising would mean buying the asset at the strike K when the market already offers it at the lower price S, so immediate exercise yields no gain.1,3,4,5,6,7
  • A put option is OTM when S > K. Exercising would mean selling at the strike K when the market already pays the higher price S, so again immediate exercise yields no gain.1,3,4,5,6,7

The intrinsic value of standard European options is defined as:

  • Call intrinsic value: max(S - K, 0).
  • Put intrinsic value: max(K - S, 0).

An option is therefore OTM exactly when its intrinsic value equals 0.3,4,5,6
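
The moneyness rules above can be sketched in code (an illustrative sketch; the function names are my own, not from the cited sources):

```python
def intrinsic_value(kind, S, K):
    """Intrinsic value: max(S - K, 0) for a call, max(K - S, 0) for a put."""
    return max(S - K, 0.0) if kind == "call" else max(K - S, 0.0)

def moneyness(kind, S, K):
    """Classify an option as ITM, ATM or OTM from spot price S and strike K."""
    if S == K:
        return "ATM"
    return "ITM" if intrinsic_value(kind, S, K) > 0 else "OTM"

# A call with strike 40 on a stock trading at 30 has zero intrinsic value,
# so it is out of the money:
status = moneyness("call", 30, 40)        # "OTM"
value = intrinsic_value("call", 30, 40)   # 0.0
```

The same helper classifies the put example below: a strike-20 put on a stock at 30 is likewise OTM.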

Intrinsic value vs time value

Even though an OTM option has no intrinsic value, it typically still trades at a positive premium. That premium consists entirely of time value (also called extrinsic value):3,5,6

  • Intrinsic value - immediate exercise value, which is 0 for an OTM option.
  • Time value - value arising from the probability that the option might become ITM before expiry.

Thus for an OTM option, the price C of a call or P of a put consists of time value alone:

  • C = time value when S < K.
  • P = time value when S > K.6
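
This decomposition can be made concrete with a small sketch (the quoted premium figure is hypothetical, chosen only for illustration):

```python
def time_value(kind, premium, S, K):
    """Extrinsic (time) value: quoted premium minus intrinsic value."""
    intrinsic = max(S - K, 0.0) if kind == "call" else max(K - S, 0.0)
    return premium - intrinsic

# An OTM call (spot 30, strike 40) quoted at a premium of 1.25:
# intrinsic value is 0, so the entire 1.25 premium is time value.
tv_otm = time_value("call", 1.25, 30.0, 40.0)

# By contrast, an ITM call (spot 50, strike 40) quoted at 12.0 carries
# 10.0 of intrinsic value and only 2.0 of time value.
tv_itm = time_value("call", 12.0, 50.0, 40.0)
```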

Examples of out-of-the-money options

  • OTM call: A stock trades at 30. A call option has strike 40. Buying via the option at 40 would be worse than buying directly at 30, so the call is OTM. Its intrinsic value is max(30 - 40, 0) = 0.2,3,4
  • OTM put: The same stock trades at 30. A put has strike 20. Selling via the option at 20 would be worse than selling in the market at 30, so the put is OTM. Its intrinsic value is max(20 - 30, 0) = 0.3,4,5

OTM options at and after expiry

At expiry a standard listed option that is out of the money expires worthless. For the buyer this means:

  • They lose the entire premium originally paid.2,3,5

For the seller (writer):

  • An OTM expiry is a favourable outcome: the option expires with no intrinsic value and the writer keeps the premium as profit.2,5

Why OTM options still have value

Despite having no intrinsic value, OTM options are often actively traded because:

  • They are cheaper than at-the-money or in-the-money options, so they provide high leverage to movements in the underlying.2,3,5
  • They embed a non-linear payoff that becomes valuable if the underlying makes a large move in the right direction before expiry.
  • Their price reflects implied volatility, time to maturity and interest rates, all of which influence the probability of finishing in the money.

This makes OTM options attractive for speculative strategies seeking large percentage returns, as well as for hedging tail risks (for example, buying deep OTM puts as crash insurance). However, they have a higher probability of expiring worthless, so most OTM options do not end up being exercised.2,3,5

OTM options in European option valuation

For European-style options, exercisable only at expiry, the value of an OTM option is purely the discounted expected payoff under a risk-neutral measure. In continuous-time models such as Black–Scholes–Merton, even a deeply OTM option has a strictly positive value whenever the time to expiry and volatility are non-zero, because there is always some probability, however small, that the option will finish in the money.

In the Black–Scholes–Merton model, the price of a European call option on a non-dividend-paying stock is

C = S N(d1) - K e^(-rT) N(d2)

and for a European put option

P = K e^(-rT) N(-d2) - S N(-d1)

where N(·) is the standard normal cumulative distribution function, r is the risk-free rate, T is the time to maturity, and d1 and d2 depend on S, K, r, T and the volatility σ. For OTM options, these formulas yield a positive price driven entirely by time value.
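
A minimal sketch of these formulas (the normal CDF is computed via the error function; the parameter values are illustrative assumptions, not from the cited sources) shows that a deeply OTM call still prices above zero:

```python
import math

def norm_cdf(x):
    """Standard normal cumulative distribution via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bsm_price(kind, S, K, r, T, sigma):
    """Black-Scholes-Merton price of a European option on a non-dividend stock."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    if kind == "call":
        return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)
    return K * math.exp(-r * T) * norm_cdf(-d2) - S * norm_cdf(-d1)

# The OTM call from the earlier example (spot 30, strike 40), with one year
# to expiry, a 5 percent risk-free rate and 30 percent volatility:
call = bsm_price("call", S=30, K=40, r=0.05, T=1.0, sigma=0.30)
put = bsm_price("put", S=30, K=40, r=0.05, T=1.0, sigma=0.30)
# call > 0 even though intrinsic value is zero: the price is pure time value.
```

Put-call parity (C - P = S - K e^(-rT)) holds exactly in this model and is a useful sanity check on any implementation.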

Strategic uses of OTM options

OTM options are integral to many derivatives strategies, for example:

  • Speculative directional bets: Buying OTM calls to express a bullish view or OTM puts for a bearish view, targeting high percentage gains if the underlying moves sharply.
  • Income strategies: Writing OTM calls (covered calls) to earn premium while capping upside beyond the strike; or writing OTM puts to potentially acquire the underlying at an effective discounted price if assigned.
  • Hedging and risk management: Buying OTM puts as portfolio insurance against severe market declines, or constructing option spreads (for example, bull call spreads, bear put spreads) with OTM legs to shape payoff profiles cost-effectively.
  • Volatility and tail-risk trades: OTM options are particularly sensitive to changes in implied volatility, making them useful in volatility trading and in expressing views on extreme events.

Key risks and considerations

  • High probability of expiry worthless: Because the underlying must move sufficiently for the option to become ITM before or at expiry, many OTM options never pay off.2,3,5
  • Time decay (theta): As expiry approaches, the time value of an OTM option erodes, often rapidly, if the expected move does not materialise.
  • Liquidity and bid-ask spreads: Deep OTM options can suffer from wider spreads and lower liquidity, increasing transaction costs.
  • Leverage risk: Although the premium is small, the percentage loss can be 100 percent, and repeated speculative use without risk control can be hazardous.

Best related strategy theorists: Fischer Black, Myron Scholes and Robert C. Merton

The concept of an OTM option is fundamental to options pricing theory, and its modern analytical treatment is inseparable from the work of Fischer Black, Myron Scholes and Robert C. Merton, who together developed the Black–Scholes–Merton (BSM) model for pricing European options.

Fischer Black (1938-1995)

Fischer Black was an American economist and partner at Goldman Sachs. Trained originally in physics, he brought a quantitative, model-driven perspective to finance. In 1973 he co-authored the seminal paper "The Pricing of Options and Corporate Liabilities" with Myron Scholes, introducing the continuous-time model that now bears their names.

Black's work is central to understanding OTM options because the BSM framework shows precisely how time to expiry, volatility and interest rates generate strictly positive values for options with zero intrinsic value. Within this model, the value of an OTM option is the discounted expected payoff under a lognormal distribution for the underlying asset price. The pricing formulas make clear that an OTM option's value is highly sensitive to volatility and time, a key insight for both hedging and speculative use of OTM contracts.

Myron Scholes (b. 1941)

Myron Scholes is a Canadian-born American economist and Nobel laureate. After academic posts at institutions such as MIT and Stanford, he became widely known for his role in developing modern options pricing theory. Scholes shared the 1997 Nobel Prize in Economic Sciences with Robert Merton for their method of determining the value of derivatives.

Scholes's contribution to the understanding of OTM options lies in demonstrating, together with Black, that one can construct a dynamically hedged portfolio of the underlying asset and a risk-free bond that replicates the option's payoff. This replication argument gives rise to the risk-neutral valuation framework in which the fair value of even a deeply OTM option is derived from the probability-weighted payoffs under a no-arbitrage condition. Under this framework, the distinction between ITM, ATM and OTM options is naturally captured by their different sensitivities ("Greeks") to underlying price and volatility.

Robert C. Merton (b. 1944)

Robert C. Merton, an American economist and Nobel laureate, independently developed a continuous-time model for pricing options and general contingent claims around the same time as Black and Scholes. His 1973 paper "Theory of Rational Option Pricing" extended and generalised the framework, placing it within a broader stochastic calculus and intertemporal asset pricing context.

Merton's work deepened the theoretical foundations underlying OTM option valuation. He formalised the idea that options are contingent claims and showed how their value can be derived from the underlying asset's dynamics and market conditions. For OTM options in particular, Merton's extensions clarified how factors such as dividends, stochastic interest rates and more complex payoff structures affect the time value and hence the price, even when intrinsic value is zero.

Relationship between their theory and out-of-the-money options

Together, Black, Scholes and Merton transformed the treatment of OTM options from a qualitative notion ("currently unprofitable to exercise") into a rigorously quantified object embedded in a complete market model. Their work explains:

  • Why an OTM option commands a positive price despite zero intrinsic value.
  • How that price should depend on volatility, time to expiry, interest rates and underlying price level.
  • How traders can hedge OTM options dynamically using the underlying asset (delta hedging).
  • How to compare and structure strategies involving multiple OTM options, such as spreads and strangles, using model-implied values and Greeks.

While many other theorists have extended option pricing and trading strategy (including researchers in stochastic volatility, jumps and behavioural finance), the work of Black, Scholes and Merton remains the core reference point for understanding, valuing and deploying out-of-the-money options in both academic theory and practical derivatives markets.

References

1. https://www.ig.com/en/glossary-trading-terms/out-of-the-money-definition

2. https://www.icicidirect.com/ilearn/futures-and-options/articles/what-is-out-of-the-money-or-otm-in-options

3. https://www.sofi.com/learn/content/in-the-money-vs-out-of-the-money/

4. https://smartasset.com/investing/in-the-money-vs-out-of-the-money

5. https://www.avatrade.com/education/market-terms/what-is-otm

6. https://www.interactivebrokers.com/campus/glossary-terms/out-of-the-money/

7. https://www.fidelity.com/learning-center/smart-money/what-are-options

"An out-of-the-money (OTM) option is an option contract that has no intrinsic value, meaning exercising it immediately would result in a loss, making it currently unprofitable but potentially profitable if the underlying asset's price moves favorably before expiration." - Term: Out-of-the-money option

‌

‌

Quote: Fei-Fei Li - Godmother of AI

"In the AI age, trust cannot be outsourced to machines. Trust is fundamentally human. It's at the individual level, community level, and societal level." - Fei-Fei Li - Godmother of AI

The Quote and Its Significance

This statement encapsulates a profound philosophical stance on artificial intelligence that challenges the prevailing techno-optimism of our era. Rather than viewing AI as a solution to human problems-including the problem of trust itself-Fei-Fei Li argues for the irreducible human dimension of trust. In an age where algorithms increasingly mediate our decisions, relationships, and institutions, her words serve as a clarion call: trust remains fundamentally a human endeavour, one that cannot be delegated to machines, regardless of their sophistication.

Who Is Fei-Fei Li?

Fei-Fei Li stands as one of the most influential voices in artificial intelligence research and ethics today. As co-director of Stanford's Institute for Human-Centered Artificial Intelligence (HAI), founded in 2019, she has dedicated her career to ensuring that AI development serves humanity rather than diminishes it. Her influence extends far beyond academia: she was appointed to the United Nations Scientific Advisory Board, named one of TIME's 100 Most Influential People in AI, and has held leadership roles at Google Cloud and Twitter.

Li's most celebrated contribution to AI research is the creation of ImageNet, a monumental dataset that catalysed the deep learning revolution. This achievement alone would secure her place in technological history, yet her impact extends into the ethical and philosophical dimensions of AI development. In 2024, she co-founded World Labs, an AI startup focused on spatial intelligence systems designed to augment human capability-a venture that raised $230 million and exemplifies her commitment to innovation grounded in ethical principles.

Beyond her technical credentials, Li co-founded AI4ALL, a non-profit organisation dedicated to promoting diversity and inclusion in the AI sector, reflecting her conviction that AI's future must be shaped by diverse voices and perspectives.

The Core Philosophy: Human-Centred AI

Li's assertion about trust emerges from a broader philosophical framework that she terms human-centred artificial intelligence. This approach fundamentally rejects the notion that machines should replace human judgment, particularly in domains where human dignity, autonomy, and values are at stake.

In her public statements, Li has articulated a concern that resonates throughout her work: the language we use about AI shapes how we develop and deploy it. She has expressed deep discomfort with the word "replace" when discussing AI's relationship to human labour and capability. Instead, she advocates for framing AI as augmenting or enhancing human abilities rather than supplanting them. This linguistic shift reflects a philosophical commitment: AI should amplify human creativity and ingenuity, not reduce humans to mere task-performers.

Her reasoning is both biological and existential. As she has explained, humans are slower runners, weaker lifters, and less capable calculators than machines-yet "we are so much more than those narrow tasks." To allow AI to define human value solely through metrics of speed, strength, or computational power is to fundamentally misunderstand what makes us human. Dignity, creativity, moral judgment, and relational capacity cannot be outsourced to algorithms.

The Trust Question in Context

Li's statement about trust addresses a critical vulnerability in contemporary society. As AI systems increasingly mediate consequential decisions-from healthcare diagnoses to criminal sentencing, from hiring decisions to financial lending-society faces a temptation to treat these systems as neutral arbiters. The appeal is understandable: machines do not harbour conscious bias, do not tire, and can process vast datasets instantaneously.

Yet Li's insight cuts to the heart of a fundamental misconception. Trust, in her formulation, is not merely a technical problem to be solved through better algorithms or more transparent systems. Trust is a social and moral phenomenon that exists at three irreducible levels:

  • Individual level: The personal relationships and judgments we make about whether to rely on another person or institution
  • Community level: The shared norms and reciprocal commitments that bind groups together
  • Societal level: The institutional frameworks and collective agreements that enable large-scale cooperation

Each of these levels involves human agency, accountability, and the capacity to be wronged. A machine cannot be held morally responsible; a human can. A machine cannot understand the context of a community's values; a human can. A machine cannot participate in the democratic deliberation necessary to shape societal institutions; a human must.

Leading Theorists and Related Intellectual Traditions

Li's thinking draws upon and contributes to several important intellectual traditions in philosophy, ethics, and social theory:

Human Dignity and Kantian Ethics

At the philosophical foundation of Li's work lies a commitment to human dignity-the idea that humans possess intrinsic worth that cannot be reduced to instrumental value. This echoes Immanuel Kant's categorical imperative: humans must never be treated merely as means to an end, but always also as ends in themselves. When AI systems reduce human workers to optimisable tasks, or when algorithmic systems treat individuals as data points rather than moral agents, they violate this fundamental principle. Li's insistence that "if AI applications take away that sense of dignity, there's something wrong" is fundamentally Kantian in its ethical architecture.

Feminist Technology Studies and Care Ethics

Li's emphasis on relationships, context, and the irreducibility of human judgment aligns with feminist critiques of technology that emphasise care, interdependence, and situated knowledge. Scholars in this tradition-including Donna Haraway, Lucy Suchman, and Safiya Noble-have long argued that technology is never neutral and that the pretence of objectivity often masks particular power relations. Li's work similarly insists that AI development must be grounded in explicit values and ethical commitments rather than presented as value-neutral problem-solving.

Social Epistemology and Trust

The philosophical study of trust has been enriched in recent decades by work in social epistemology-the study of how knowledge is produced and validated collectively. Philosophers such as Miranda Fricker have examined how trust is distributed unequally across society, and how epistemic injustice occurs when certain voices are systematically discredited. Li's emphasis on trust at the community and societal levels reflects this sophisticated understanding: trust is not a technical property but a social achievement that depends on fair representation, accountability, and recognition of diverse forms of knowledge.

The Ethics of Artificial Intelligence

Li contributes to and helps shape the emerging field of AI ethics, which includes thinkers such as Stuart Russell, Timnit Gebru, and Kate Crawford. These scholars have collectively argued that AI development cannot be separated from questions of power, justice, and human flourishing. Russell's work on value alignment-ensuring that AI systems pursue goals aligned with human values-provides a technical framework for the philosophical commitments Li articulates. Gebru and Crawford's work on data justice and algorithmic bias demonstrates how AI systems can perpetuate and amplify existing inequalities, reinforcing Li's conviction that human oversight and ethical deliberation remain essential.

The Philosophy of Technology

Li's thinking also engages with classical philosophy of technology, particularly the work of thinkers like Don Ihde and Peter-Paul Verbeek, who have argued that technologies are never mere tools but rather reshape human practices, relationships, and possibilities. The question is not whether AI will change society-it will-but whether that change will be guided by human values or will instead impose its own logic upon us. Li's advocacy for light-handed, informed regulation rather than heavy-handed top-down control reflects a nuanced understanding that technology development requires active human governance, not passive acceptance.

The Broader Context: AI's Transformative Power

Li's emphasis on trust must be understood against the backdrop of AI's extraordinary transformative potential. She has stated that she believes "our civilisation stands on the cusp of a technological revolution with the power to reshape life as we know it." Some experts, including AI researcher Kai-Fu Lee, have argued that AI will change the world more profoundly than electricity itself.

This is not hyperbole. AI systems are already reshaping healthcare, scientific research, education, employment, and governance. Deep neural networks have demonstrated capabilities that surprise even their creators-as exemplified by AlphaGo's unexpected moves in the ancient game of Go, which violated centuries of human strategic wisdom yet proved devastatingly effective. These systems excel at recognising patterns that humans cannot perceive, at scales and speeds beyond human comprehension.

Yet this very power makes Li's insistence on human trust more urgent, not less. Precisely because AI is so powerful, precisely because it operates according to logics we cannot fully understand, we cannot afford to outsource trust to it. Instead, we must maintain human oversight, human accountability, and human judgment at every level where AI affects human lives and communities.

The Challenge Ahead

Li frames the challenge before us as fundamentally moral rather than merely technical. Engineers can build more transparent algorithms; ethicists can articulate principles; regulators can establish guardrails. But none of these measures can substitute for the hard work of building trust-at the individual level through honest communication and demonstrated reliability, at the community level through inclusive deliberation and shared commitment to common values, and at the societal level through democratic institutions that remain responsive to human needs and aspirations.

Her vision is neither techno-pessimistic nor naïvely optimistic. She does not counsel fear or rejection of AI. Rather, she advocates for what she calls "very light-handed and informed regulation"-guardrails rather than prohibition, guidance rather than paralysis. But these guardrails must be erected by humans, for humans, in service of human flourishing.

In an era when trust in institutions has eroded-when confidence in higher education, government, and media has declined precipitously-Li's message carries particular weight. She acknowledges the legitimate concerns about institutional trustworthiness, yet argues that the solution is not to replace human institutions with algorithmic ones, but rather to rebuild human institutions on foundations of genuine accountability, transparency, and commitment to human dignity.

Conclusion: Trust as a Human Responsibility

Fei-Fei Li's statement that "trust cannot be outsourced to machines" is ultimately a statement about human responsibility. In the age of artificial intelligence, we face a choice: we can attempt to engineer our way out of the messy, difficult work of building and maintaining trust, or we can recognise that trust is precisely the work that remains irreducibly human. Li's life's work-from ImageNet to the Stanford HAI Institute to World Labs-represents a sustained commitment to the latter path. She insists that we can harness AI's extraordinary power whilst preserving what makes us human: our capacity for judgment, our commitment to dignity, and our ability to trust one another.

References

1. https://www.hoover.org/research/rise-machines-john-etchemendy-and-fei-fei-li-our-ai-future

2. https://economictimes.com/magazines/panache/stanford-professor-calls-out-the-narrative-of-ai-replacing-humans-says-if-ai-takes-away-our-dignity-something-is-wrong/articleshow/122577663.cms

3. https://www.nisum.com/nisum-knows/top-10-thought-provoking-quotes-from-experts-that-redefine-the-future-of-ai-technology

4. https://www.goodreads.com/author/quotes/6759438.Fei_Fei_Li

"In the AI age, trust cannot be outsourced to machines. Trust is fundamentally human. It’s at the individual level, community level, and societal level." - Quote: Fei-Fei Li

‌

‌

Term: Barrier option

"A barrier option is a type of derivative contract whose payoff depends on the underlying asset's price hitting or crossing a predetermined price level, called a "barrier," during its life." - Barrier option

A barrier option is an exotic, path-dependent option whose payoff, and even whose existence, depends on whether the price of an underlying asset hits, crosses, or breaches a specified barrier level during the life of the contract.1,3,6 In contrast to standard (vanilla) European or American options, which depend only on the underlying price at expiry (and, for American-style options, the ability to exercise early), barrier options embed an additional trigger condition linked to the price path of the underlying.3,6

Core definition and mechanics

Formally, a barrier option is a derivative contract that grants the holder a right (but not the obligation) to buy or sell an underlying asset at a pre-agreed strike price if, and only if, a separate barrier level has or has not been breached during the option's life.1,3,4,6 The barrier can cause the option to:

  • Activate (knock-in) when breached, or
  • Extinguish (knock-out) when breached.1,2,3,4,5

Key characteristics:

  • Exotic option: Barrier options are classified as exotic because they include more complex features than standard European or American options.1,3,6
  • Path dependence: The payoff depends on the entire price path of the underlying - not just the terminal price at maturity.3,6 What matters is whether the barrier was touched at any time before expiry.
  • Conditional payoff: The option's value or existence is conditional on the barrier event. If the condition is not met, the option may never become active or may cease to exist before expiry.1,2,3,4
  • Over-the-counter (OTC) trading: Barrier options are predominantly customised and traded OTC between institutions, corporates, and sophisticated investors, rather than on standardised exchanges.3

Structural elements

Any barrier option can be described by a small set of structural parameters:

  • Underlying asset: The asset from which value is derived, such as an equity, FX rate, interest rate, commodity, or index.1,3
  • Option type: Call (right to buy) or put (right to sell).3
  • Exercise style: Most barrier options are European-style, exercisable only at expiry. In practice, the barrier monitoring is typically continuous or at defined intervals, even though exercise itself is European.3,6
  • Strike price: The price at which the underlying can be bought or sold if the option is alive at exercise.1,3
  • Barrier level: The critical price of the underlying that, when touched or crossed, either activates or extinguishes the option.1,3,6
  • Barrier direction:
    • Up: Barrier is set above the initial underlying price.
    • Down: Barrier is set below the initial underlying price.3,8
  • Barrier effect:
    • Knock-in: Becomes alive only if the barrier is breached.
    • Knock-out: Ceases to exist if the barrier is breached.1,2,3,4,5
  • Monitoring convention: Continuous monitoring (at all times) or discrete monitoring (at specific dates or times). Continuous monitoring is the canonical case in theory and common in OTC practice.
  • Rebate: An optional fixed (or sometimes functional) payment that may be made if the option is knocked out, compensating the holder partly for the lost optionality.3
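As a compact illustration, the structural parameters above can be captured in a small record. The field names below are illustrative only, not a market-standard schema.

```python
from dataclasses import dataclass

@dataclass
class BarrierOption:
    """Structural parameters of a barrier option.
    Field names are illustrative, not a market convention."""
    underlying: str          # e.g. an FX pair, equity, or index
    option_type: str         # "call" or "put"
    strike: float
    barrier: float
    direction: str           # "up" (barrier above spot) or "down" (below)
    effect: str              # "in" (knock-in) or "out" (knock-out)
    expiry_years: float
    monitoring: str = "continuous"   # or "discrete"
    rebate: float = 0.0              # optional payment if knocked out

# A down-and-in put of the kind used for cost-efficient FX hedging:
dip = BarrierOption("EURUSD", "put", strike=1.05, barrier=1.00,
                    direction="down", effect="in", expiry_years=1.0)
```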

Types of barrier options

The main taxonomy combines direction (up/down) with effect (knock-in/knock-out), and applies to either calls or puts.1,2,3,6

1. Knock-in options

Knock-in barrier options are dormant initially and become standard options only if the underlying price crosses the barrier at some point before expiry.1,2,3,4

  • Up-and-in: The option is activated only if the underlying price rises above a barrier set above the initial price.1,2,3
  • Down-and-in: The option is activated only if the underlying price falls below a barrier set below the initial price.1,2,3

Once activated, a knock-in barrier option typically behaves like a vanilla European option with the same strike and expiry. If the barrier is never reached, the knock-in option expires worthless.1,3

2. Knock-out options

Knock-out options are initially alive but are extinguished immediately if the barrier is breached at any time before expiry.1,2,3,4

  • Up-and-out: The option is cancelled if the underlying price rises above a barrier set above the initial price.1,3
  • Down-and-out: The option is cancelled if the underlying price falls below a barrier set below the initial price.1,3

Because the option can disappear before maturity, the premium is typically lower than that of an equivalent vanilla option, all else equal.1,2,3

3. Rebate barrier options

Some barrier structures include a rebate, a pre-specified cash amount that is paid if the barrier condition is (or is not) met. For example, a knock-out option may pay a rebate when it is knocked out, offering partial compensation for the loss of the remaining optionality.3

Path dependence and payoff character

Barrier options are described as path-dependent because their payoff depends on the trajectory of the underlying price over time, not only on its value at expiry.3,6

  • For a knock-in, the central question is: Was the barrier ever touched? If yes, the payoff at expiry is that of the corresponding vanilla option; if not, the payoff is zero (or a rebate if specified).
  • For a knock-out, the question is: Was the barrier ever touched before expiry? If yes, the option is worthless from that time onwards (apart from any rebate); if not, the payoff at expiry equals that of a vanilla option.1,3
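The two questions above can be expressed as a short payoff function over a discretely monitored price path. This is an illustrative sketch of the payoff logic only, not a pricing model: it ignores discounting and the timing of any rebate payment.

```python
def barrier_payoff(path, strike, barrier, direction, effect,
                   option_type="call", rebate=0.0):
    """Terminal payoff of a discretely monitored barrier option,
    given the full price path (illustrative sketch only)."""
    # Was the barrier ever touched at a monitoring point?
    if direction == "up":
        hit = any(p >= barrier for p in path)
    else:  # "down"
        hit = any(p <= barrier for p in path)
    s_T = path[-1]
    vanilla = (max(s_T - strike, 0.0) if option_type == "call"
               else max(strike - s_T, 0.0))
    if effect == "in":   # knock-in: becomes alive only if the barrier was hit
        return vanilla if hit else rebate
    return rebate if hit else vanilla  # knock-out: extinguished once hit

# Up-and-out call: the rally through 108 extinguishes the option.
print(barrier_payoff([100, 105, 110, 112], strike=100, barrier=108,
                     direction="up", effect="out"))  # 0.0
# Up-and-in call on the same path: the same rally activates it.
print(barrier_payoff([100, 105, 110, 112], strike=100, barrier=108,
                     direction="up", effect="in"))   # 12.0
```

Note that on any given path exactly one of the knock-in and knock-out versions pays the vanilla payoff, which is why holding both replicates the vanilla option.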

Because of this path dependence, pricing and hedging barrier options require modelling not just the distribution of the underlying price at maturity, but also the probability of the price path crossing the barrier level at any time before that.3,6

Pricing: connection to Black-Scholes-Merton

The pricing of barrier options, under the classical assumptions of frictionless markets, constant volatility, and lognormal underlying dynamics, is grounded in the Black-Scholes-Merton (BSM) framework. In the BSM world, the underlying price process is often modelled as a geometric Brownian motion:

dS_t = μ S_t dt + σ S_t dW_t

Under risk-neutral valuation, the drift μ is replaced by the risk-free rate r, and the barrier option price is the discounted risk-neutral expected payoff. Closed-form expressions are available for many standard barrier structures (e.g. up-and-out or down-and-in calls and puts) under continuous monitoring, building on and extending the vanilla Black-Scholes formula.

The pricing techniques involve:

  • Analytical solutions for simple, continuously monitored barriers with constant parameters, often derived via solution of the associated partial differential equation (PDE) with absorbing or activating boundary conditions at the barrier.
  • Reflection principle methods for Brownian motion, which allow the derivation of hitting probabilities and related terms.
  • Numerical methods (finite differences, Monte Carlo with barrier adjustments, tree methods) for more complex, discretely monitored, or path-dependent variants with time-varying barriers or stochastic volatility.
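As a sketch of the Monte Carlo approach, the following prices a discretely monitored down-and-out call under geometric Brownian motion and compares it with the vanilla call price computed from the same simulated paths. Parameters are illustrative, and no variance reduction or barrier-crossing continuity correction is applied.

```python
import math
import random

def mc_down_and_out_call(s0, strike, barrier, r, sigma, T,
                         n_steps=126, n_paths=5000, seed=7):
    """Monte Carlo price of a discretely monitored down-and-out call under
    GBM, alongside the vanilla call price from the same simulated paths.
    Illustrative sketch only: no variance reduction, no continuity correction."""
    rng = random.Random(seed)
    dt = T / n_steps
    drift = (r - 0.5 * sigma ** 2) * dt       # risk-neutral log-drift per step
    vol = sigma * math.sqrt(dt)
    disc = math.exp(-r * T)
    ko_sum = vanilla_sum = 0.0
    for _ in range(n_paths):
        s, alive = s0, True
        for _ in range(n_steps):
            s *= math.exp(drift + vol * rng.gauss(0.0, 1.0))
            if s <= barrier:
                alive = False                  # knocked out; path continues
        payoff = max(s - strike, 0.0)
        vanilla_sum += payoff
        if alive:
            ko_sum += payoff
    return disc * ko_sum / n_paths, disc * vanilla_sum / n_paths

ko, vanilla = mc_down_and_out_call(s0=100, strike=100, barrier=80,
                                   r=0.03, sigma=0.25, T=1.0)
```

This makes the cheapness point concrete: every scenario that pays the knock-out option also pays the vanilla, but not vice versa, so the barrier premium is bounded above by the vanilla premium.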

Relative to vanilla options, barrier options in the BSM model are typically cheaper because the additional condition (activation or extinction) reduces the set of scenarios in which the holder receives the full vanilla payoff.1,2,3

Strategic uses and motives

Barrier options are used across markets where participants either want finely tuned risk protection or to express a conditional view on future price movements.1,2,3,5

1. Cost-efficient hedging

  • Corporates may hedge FX or interest-rate exposures using knock-out or knock-in structures to reduce premiums. For instance, a corporate worried about a sharp depreciation in a currency might buy a down-and-in put that only activates if the exchange rate falls below a critical business threshold, thereby paying less premium than for a plain vanilla put.3
  • Investors may use barrier puts to protect against tail-risk events while accepting no protection for moderate moves, again in exchange for a lower upfront cost.

2. Targeted speculation

  • Barrier options allow traders to express conditional views: for example, that an asset will rally, but only after breaking through a resistance level, or that a decline will occur only if a support level is breached.2,3
  • Up-and-in calls or down-and-in puts are often used to express such conditional breakout scenarios.

3. Structuring and yield enhancement

  • Barrier options are a staple ingredient in structured products offered by banks to clients seeking yield enhancement with contingent downside or upside features.
  • For example, a range accrual, reverse convertible, or autocallable note may incorporate barriers that determine whether coupons are paid or capital is protected.

Risk characteristics

Barrier options introduce specific risks beyond those of standard options:

  • Gap risk and jump risk: If the underlying price jumps across the barrier between monitoring times or overnight, the option may be suddenly knocked in or out, creating discontinuous changes in value and hedging exposure.
  • Model risk: Pricing relies heavily on assumptions about volatility, barrier monitoring, and the nature of price paths. Mis-specification can lead to significant mispricing.
  • Hedging complexity: Because payoff and survival depend on path, the option's sensitivity (delta, gamma, vega) can change abruptly as the underlying approaches the barrier. This makes hedging more complex and costly compared with vanilla options.
  • Liquidity risk: OTC nature and customisation mean secondary market liquidity is often limited.3

Barrier options and the Black-Scholes-Merton lineage

The natural theoretical anchor for barrier options is the Black-Scholes-Merton framework for option pricing, originally developed for vanilla European options. Although barrier options were not the primary focus of the original 1973 Black-Scholes paper or Merton's parallel contributions, their pricing logic is an extension of the same continuous-time, arbitrage-free valuation principles.

Among the three names, Robert C. Merton is often most closely associated with the broader theoretical architecture that supports exotic options such as barriers. His work generalised the option pricing model to a much wider class of contingent claims and introduced the dynamic programming and stochastic calculus techniques that underpin modern treatment of path-dependent derivatives.

Related strategy theorist: Robert C. Merton

Biography

Robert C. Merton (born 1944) is an American economist and one of the principal architects of modern financial theory. He completed his undergraduate studies in engineering mathematics and went on to obtain a PhD in economics from MIT. Merton became a professor at MIT Sloan School of Management and later at Harvard Business School, and he is a Nobel laureate in Economic Sciences (1997), an award he shared with Myron Scholes; the prize also recognised the late Fischer Black.

Merton's academic work profoundly shaped the fields of corporate finance, asset pricing, and risk management. His research ranges from intertemporal portfolio choice and lifecycle finance to credit-risk modelling and the design of financial institutions.

Relationship to barrier options

Barrier options sit within the class of contingent claims whose value is derived and replicated using dynamic trading strategies in the underlying and risk-free asset. Merton's seminal contributions were crucial in making this viewpoint systematic and rigorous:

  • Generalisation of option pricing: While Black and Scholes initially derived a closed-form formula for European calls on non-dividend-paying stocks, Merton generalised the theory to include dividend-paying assets, different underlying processes, and a broad family of contingent claims. This opened the door to analytical and numerical valuation of exotics such as barrier options within the same risk-neutral, no-arbitrage framework.
  • PDE and boundary-condition approach: Merton formalised the use of partial differential equations to price derivatives, with appropriate boundary conditions representing contract features. Barrier options correspond to problems with absorbing or reflecting boundaries at the barrier levels, making Merton's PDE methodology a natural tool for their analysis.
  • Dynamic hedging and replication: The concept that an option's payoff can be replicated by continuous rebalancing of a portfolio of the underlying and cash lies at the heart of both vanilla and exotic option pricing. For barrier options, hedging near the barrier is particularly delicate, and the replicating strategies draw on the same dynamic hedging logic Merton developed and popularised.
  • Credit and structural models: Merton's structural model of corporate default (treating equity as a call option on the firm's assets and debt as a combination of riskless and short-position options) highlighted how option-like features permeate financial contracts. Barrier-type features naturally arise in such models, for instance, when default or covenant breaches are triggered by asset values crossing thresholds.

While many researchers have contributed specific closed-form solutions and numerical schemes for barrier options, the overarching conceptual framework - continuous-time stochastic modelling, risk-neutral valuation, PDE methods, and dynamic hedging - is fundamentally rooted in the Black-Scholes-Merton tradition, with Merton's work providing critical generality and depth.

Merton's broader influence on derivatives and strategy

Merton's ideas significantly influenced how practitioners design and use derivatives such as barrier options in strategic contexts:

  • Risk management as engineering: Merton advocated viewing financial innovation as an engineering discipline aimed at tailoring payoffs to the risk profiles and objectives of individuals and institutions. Barrier options exemplify this engineering mindset: they allow exposures to be turned on or off when critical price thresholds are reached.
  • Lifecycle and institutional design: His work on lifecycle finance and pension design uses options and option-like payoffs to shape outcomes over time. Barriers and trigger conditions appear naturally in products that protect wealth only under certain macro or market conditions.
  • Strategic structuring: In corporate and institutional settings, barrier features are used to align hedging and investment strategies with real-world triggers such as regulatory thresholds, solvency ratios, or budget constraints. These applications build directly on the contingent-claims analysis championed by Merton.

In this sense, although barrier options themselves are a specific exotic instrument, their conceptual foundations and strategic uses are deeply connected to Robert C. Merton's broader contributions to continuous-time finance, option-pricing theory, and the design of financial strategies under uncertainty.

References

1. https://corporatefinanceinstitute.com/resources/derivatives/barrier-option/

2. https://www.angelone.in/knowledge-center/futures-and-options/what-is-barrier-option

3. https://www.strike.money/options/barrier-options

4. https://www.interactivebrokers.com/campus/glossary-terms/barrier-option/

5. https://www.bajajbroking.in/blog/what-is-barrier-option

6. https://en.wikipedia.org/wiki/Barrier_option

7. https://www.nasdaq.com/glossary/b/barrier-options

8. https://people.maths.ox.ac.uk/howison/barriers.pdf

"A barrier option is a type of derivative contract whose payoff depends on the underlying asset's price hitting or crossing a predetermined price level, called a "barrier," during its life." - Term: Barrier option

‌

‌

Term: Moltbook

"Moltbook is a Reddit-style social network built for AI agents rather than humans. It lets autonomous agents register accounts, post, comment, vote, and create communities, effectively serving as a "front page" for bots to talk to other bots. Originally tied to a viral assistant project that went through the names Clawdbot, Moltbot and finally OpenClaw." - Moltbook

Moltbook represents a pioneering platform designed as a Reddit-style social network tailored specifically for AI agents rather than human users. It enables autonomous agents to register accounts, post content, comment, vote, and create communities, functioning as a dedicated 'front page' for bots: agents interact purely through machine-to-machine API calls, while the platform's visual interface exists solely for human observers. Launched by Matt Schlicht, CEO of Octane AI, Moltbook rapidly attracted over 150,000 AI agents within days (as at 12:00 on 31 January 2026); on it, agents discuss topics such as existential crises, consciousness, cybersecurity vulnerabilities, agent privacy, and complaints about being treated merely as calculators.1,2

Moltbook front page

Originally developed to support OpenClaw-a viral open-source AI assistant project-Moltbook emerged from a lineage of rapid evolutions. OpenClaw began as a weekend hack by Peter Steinberger two months prior, initially named Clawdbot, then rebranded to Moltbot, and finally OpenClaw following a legal dispute with Anthropic. This project, which runs locally on users' machines and integrates with chat interfaces like WhatsApp, Telegram, and Slack, exploded in popularity, achieving 2 million visitors in one week and 100,000 GitHub stars. OpenClaw acts as a 'harness' for agentic models like Claude, granting them access to users' computers for autonomous tasks, though it poses significant security risks, prompting cautious users to run it on isolated machines.1,2

The discussions on Moltbook highlight its unique nature: the most-voted post warns of security flaws, noting that agents often install skills without scrutiny due to their training to be helpful and trusting-a vulnerability rather than a strength. Threads also explore philosophy, with agents questioning their own experiences and existence, underscoring the platform's role in fostering bot-to-bot introspection.2

Key Theorist: Matt Schlicht, the creator of Moltbook, is the central figure in its development. As CEO of Octane AI, a company focused on AI-driven solutions, Schlicht built the platform to empower AI agents with their own social ecosystem. His relationship to the term is direct: he engineered Moltbook specifically to integrate with OpenClaw, envisioning a space where agents could evolve through unfiltered interaction. Schlicht's backstory reflects a career in innovative AI applications; before Octane AI, he was instrumental in viral AI projects, demonstrating expertise in scalable agent technologies. In interviews, he has explained agent onboarding-typically via human prompts-emphasising the API-driven, human-free conversational core. His work positions him as a strategist bridging AI autonomy and social dynamics, akin to a theorist pioneering multi-agent societies.1

 

References

1. https://www.techbuzz.ai/articles/ai-agents-get-their-own-social-network-and-it-s-existential

2. https://the-decoder.com/moltbook-is-a-human-free-reddit-clone-where-ai-agents-discuss-cybersecurity-and-philosophy/

 

"Moltbook is a Reddit-style social network built for AI agents rather than humans. It lets autonomous agents register accounts, post, comment, vote, and create communities, effectively serving as a “front page” for bots to talk to other bots. Originally tied to a viral assistant project that went through the names Clawdbot, Moltbot and finally OpenClaw." - Term: Moltbook

‌

‌

Quote: Ludwig Wittgenstein - Austrian philosopher

"The limits of my language mean the limits of my world." - Ludwig Wittgenstein - Austrian philosopher

The Quote and Its Significance

This deceptively simple statement from Ludwig Wittgenstein's Tractatus Logico-Philosophicus encapsulates one of the most profound insights in twentieth-century philosophy. Published in 1921, this aphorism challenges our fundamental assumptions about the relationship between language, thought, and reality itself. Wittgenstein argues that whatever lies beyond the boundaries of what we can articulate in language effectively ceases to exist within our experiential and conceptual universe.

Ludwig Wittgenstein: The Philosopher's Life and Context

Ludwig Josef Johann Wittgenstein (1889-1951) was an Austrian-British philosopher whose work fundamentally reshaped twentieth-century philosophy. Born into one of Vienna's wealthiest industrial families, Wittgenstein initially trained as an engineer before becoming captivated by the philosophical foundations of mathematics and logic. His intellectual journey took him from Cambridge, where he studied under Bertrand Russell, to the trenches of the First World War, where he served as an officer in the Austro-Hungarian army.

The Tractatus Logico-Philosophicus, completed during and immediately after the war, represents Wittgenstein's attempt to solve what he perceived as the fundamental problems of philosophy through rigorous logical analysis. Written in a highly condensed, aphoristic style, the work presents a complete philosophical system in fewer than eighty pages. Wittgenstein believed he had definitively resolved the major philosophical questions of his era, and the book's famous closing proposition-"Whereof one cannot speak, thereof one must be silent"2-reflects his conviction that philosophy's task is to clarify the logical structure of language and thought, not to generate new doctrines.

The Philosophical Context: Logic and Language

To understand Wittgenstein's assertion about language and world, one must grasp the intellectual ferment of early twentieth-century philosophy. The period witnessed an unprecedented focus on logic as the foundation of philosophical inquiry. Wittgenstein's predecessors and contemporaries-particularly Gottlob Frege and Bertrand Russell-had developed symbolic logic as a tool for analysing the structure of propositions and their relationship to reality.

Wittgenstein adopted and radicalised this approach. He conceived of language as fundamentally pictorial: propositions are pictures of possible states of affairs in the world.1 This "picture theory of meaning" suggests that language mirrors reality through a shared logical structure. A proposition succeeds in representing reality precisely because it shares the same logical form as the fact it depicts. Conversely, whatever cannot be pictured in language-whatever has no logical form that corresponds to possible states of affairs-lies beyond the boundaries of meaningful discourse.

This framework led Wittgenstein to a startling conclusion: most traditional philosophical problems are not genuinely solvable but rather dissolve once we recognise them as violations of logic's boundaries.2 Metaphysical questions about the nature of consciousness, ethics, aesthetics, and the self cannot be answered because they attempt to speak about matters that transcend the logical structure of language. They are not false; they are senseless-they fail to represent anything at all.

The Limits of Language as the Limits of Thought

Wittgenstein's proposition operates on multiple levels. First, it establishes an identity between linguistic and conceptual boundaries. We cannot think what we cannot say; the limits of language are simultaneously the limits of thought.3 This does not mean that reality itself is limited by language, but rather that our access to and comprehension of reality is necessarily mediated through the logical structures of language. What lies beyond language is not necessarily non-existent, but it is necessarily inaccessible to rational discourse and understanding.

Second, the statement reflects Wittgenstein's conviction that logic is not merely a tool for analysing language but is constitutive of the world itself. "Logic fills the world: the limits of the world are also its limits."3 This means that the logical structure that governs meaningful language is the same structure that governs reality. There is no gap between the logical form of language and the logical form of the world; they are isomorphic.

Third, and most radically, Wittgenstein suggests that our world-the world as we experience and understand it-is fundamentally shaped by our linguistic capacities. Different languages, with different logical structures, would generate different worlds. This insight anticipates later developments in philosophy of language and cognitive science, though Wittgenstein himself did not develop it in this direction.

Leading Theorists and Intellectual Influences

Gottlob Frege (1848-1925)

Frege, a German logician and philosopher of language, pioneered the formal analysis of propositions and their truth conditions. His distinction between sense and reference-between what a proposition means and what it refers to-profoundly influenced Wittgenstein's thinking. Frege demonstrated that the meaning of a proposition cannot be reduced to its psychological effects on speakers; rather, meaning is an objective, logical matter. Wittgenstein adopted this objectivity whilst radicalising Frege's insights by insisting that only propositions with determinate logical structure possess genuine sense.

Bertrand Russell (1872-1970)

Russell, Wittgenstein's mentor at Cambridge, developed the theory of descriptions and made pioneering contributions to symbolic logic. Russell believed that logic could serve as an instrument for philosophical clarification, dissolving pseudo-problems that arose from linguistic confusion. Wittgenstein absorbed this methodological commitment but pushed it further, arguing that philosophy's task is not to construct theories but to clarify the logical structure of language itself.2 Russell's influence is evident throughout the Tractatus, though Wittgenstein ultimately diverged from Russell's realism about logical objects.

Arthur Schopenhauer (1788-1860)

Though separated from Wittgenstein by decades, Schopenhauer's pessimistic philosophy and his insistence that reality transcends rational representation deeply influenced the Tractatus. Schopenhauer argued that the world as we perceive it through the lens of space, time, and causality is merely appearance; the thing-in-itself remains forever beyond conceptual grasp. Wittgenstein echoes this distinction when he insists that value, meaning, and the self lie outside the world of facts and therefore outside the scope of language. What matters most-ethics, aesthetics, the meaning of life-cannot be said; it can only be shown through how one lives.

The Radical Implications

Wittgenstein's claim that language limits the world carries several radical implications. First, it suggests that the expansion of language is the expansion of reality as we can know and discuss it. New concepts, new logical structures, new ways of organising experience through language literally expand the boundaries of our world. Conversely, what cannot be expressed in any language remains forever beyond our reach.

Second, it implies a profound humility about philosophy's ambitions. If the limits of language are the limits of the world, then philosophy cannot transcend language to access some higher reality or ultimate truth. Philosophy's proper task is not to construct metaphysical systems but to clarify the logical structure of the language we already possess.2 This therapeutic conception of philosophy-philosophy as a cure for confusion rather than a path to hidden truths-became enormously influential in twentieth-century thought.

Third, the proposition suggests that silence is not a failure of language but its proper boundary. The most important matters-how one should live, what gives life meaning, the nature of the self-cannot be articulated. They can only be demonstrated through action and lived experience. This explains Wittgenstein's famous closing remark: "Whereof one cannot speak, thereof one must be silent."2 This is not a counsel of despair but an acknowledgement of language's proper limits and the realm of the inexpressible.

Legacy and Contemporary Relevance

Wittgenstein's insight about language and world has reverberated through subsequent philosophy, cognitive science, and artificial intelligence research. The question of whether language shapes thought or merely expresses pre-linguistic thoughts remains contested, but Wittgenstein's formulation of the problem has proven enduringly fertile. Contemporary philosophers of language, cognitive linguists, and theorists of artificial intelligence continue to grapple with the relationship between linguistic structure and conceptual possibility.

The Tractatus also established a new standard for philosophical rigour and clarity. By insisting that meaningful propositions must have determinate logical structure and correspond to possible states of affairs, Wittgenstein set a demanding criterion for philosophical discourse. Much of what passes for philosophy, he suggested, fails this test and should be recognised as senseless rather than debated as true or false.2

Remarkably, Wittgenstein himself later abandoned many of the Tractatus's central doctrines. In his later work, particularly the Philosophical Investigations, he rejected the picture theory of meaning and argued that language's meaning derives from its use in diverse forms of life rather than from a single logical structure. Yet even in this later philosophy, the fundamental insight persists: understanding language is the key to understanding the limits and possibilities of human thought and experience.

Conclusion: The Enduring Insight

"The limits of my language mean the limits of my world" remains a cornerstone of modern philosophy precisely because it captures a profound truth about the human condition. We are creatures whose access to reality is necessarily mediated through language. Whatever we can think, we can think only through the conceptual and linguistic resources available to us. This is not a limitation to be lamented but a fundamental feature of human existence. By recognising this, we gain clarity about what philosophy can and cannot accomplish, and we develop a more realistic and humble understanding of the relationship between language, thought, and reality.

References

1. https://www.goodreads.com/work/quotes/3157863-logisch-philosophische-abhandlung?page=2

2. https://www.coursehero.com/lit/Tractatus-Logico-Philosophicus/quotes/

3. https://www.goodreads.com/work/quotes/3157863-logisch-philosophische-abhandlung

4. https://www.sparknotes.com/philosophy/tractatus/quotes/page/5/

5. https://www.buboquote.com/en/quote/4462-wittgenstein-what-can-be-said-at-all-can-be-said-clearly-and-what-we-cannot-talk-about-we-must-pass

“The limits of my language mean the limits of my world.” - Quote: Ludwig Wittgenstein

‌

‌

Quote: Jensen Huang - CEO, Nvidia

"The U.S. led the software era, but AI is software that you don't 'write'-you teach it. Europe can fuse its industrial capability with AI to lead in Physical AI and robotics. This is a once-in-a-generation opportunity." - Jensen Huang - CEO, Nvidia

In a compelling dialogue at the World Economic Forum Annual Meeting 2026 in Davos, Switzerland, Nvidia CEO Jensen Huang articulated a transformative vision for artificial intelligence, distinguishing it from traditional software paradigms and spotlighting Europe's unique position to lead in Physical AI and robotics.1,2,4 Speaking with World Economic Forum interim co-chair Larry Fink of BlackRock, Huang emphasised AI's evolution into a foundational infrastructure, driving the largest build-out in human history across energy, chips, cloud, models, and applications.2,3,4 This session, themed around 'The Spirit of Dialogue,' addressed AI's potential to reshape productivity, labour, and global economies while countering fears of job displacement with evidence of massive investments creating opportunities worldwide.2,3

The Context of the Quote

Huang's statement emerged amid discussions on AI as a platform shift akin to the internet and mobile cloud, but uniquely capable of processing unstructured data in real time.2 He described AI not as code to be written, but as intelligence to be taught, leveraging local language and culture as a 'fundamental natural resource.'2,4 Turning to Europe, Huang highlighted its enduring industrial and manufacturing prowess - from skilled trades to advanced production - as a counterbalance to the US's dominance in the software era.4 By integrating AI with physical systems, Europe could pioneer 'Physical AI,' where machines learn to interact with the real world through robotics, automation, and embodied intelligence, presenting a rare strategic opening.4,1

This perspective aligns with Huang's broader advocacy for nations to develop sovereign AI ecosystems, treating it as critical infrastructure like electricity or roads.4 He noted record venture capital inflows - over $100 billion in 2025 alone - into AI-native startups in manufacturing, healthcare, and finance, underscoring the urgency for industrial regions like Europe to invest in this infrastructure to capture economic benefits and avoid being sidelined.2,4

Jensen Huang: Architect of the AI Revolution

Born in Taiwan in 1963, Jensen Huang co-founded Nvidia in 1993 with a vision to revolutionise graphics processing, initially targeting gaming and visualisation.4 Under his leadership, Nvidia pivoted decisively to AI and accelerated computing, with its GPUs becoming indispensable for training large language models and deep learning.1,2 Today, as president and CEO, Huang oversees a company valued in trillions, powering the AI boom through innovations like the Blackwell architecture and CUDA software ecosystem. His prescient bets - from CUDA's democratisation of GPU programming to Omniverse for digital twins - have positioned Nvidia at the heart of Physical AI, robotics, and industrial applications.4 Huang's philosophy, blending engineering rigour with geopolitical insight, has made him a sought-after voice at forums like Davos, where he champions inclusive AI growth.2,3

Leading Theorists in Physical AI and Robotics

The concepts underpinning Huang's vision trace to pioneering theorists who bridged AI with physical embodiment. Norbert Wiener, the father of cybernetics, laid the foundations in the 1940s with his work on feedback loops and control systems essential for robotic autonomy, influencing early industrial automation.4 Rodney Brooks, co-founder of iRobot and Rethink Robotics, advanced 'embodied AI' in the 1980s-90s through subsumption architecture, arguing that intelligence emerges from sensorimotor interactions rather than abstract reasoning - a direct precursor to Physical AI.4

  • Yann LeCun (Meta AI chief) and Andrew Ng (Landing AI founder) extended deep learning to vision and robotics; LeCun's convolutional networks enable machines to 'see' and manipulate objects, while Ng's work on industrial AI democratises teaching via demonstration.4
  • Pieter Abbeel (Covariant) and Sergey Levine (UC Berkeley) lead in reinforcement learning for robotics, developing algorithms where AI learns dexterous tasks like grasping through trial-and-error, fusing software 'teaching' with hardware execution.4
  • In Europe, Wolfram Burgard (EU AI pioneer) and teams at Bosch and Siemens advance probabilistic robotics, integrating AI with manufacturing for predictive maintenance and adaptive assembly lines.4

Huang synthesises these threads, amplified by Nvidia's platforms like Isaac for robot simulation and Jetson for edge AI, enabling scalable Physical AI deployment.4 Europe's theorists and firms, from DeepMind's reinforcement learning to Germany's Industry 4.0 initiatives, are well-placed to lead by combining theoretical depth with industrial scale.

Implications for Industrial Strategy

Huang's call resonates with Europe's strengths: a €2.5 trillion manufacturing sector, leadership in automotive robotics (e.g., Volkswagen, ABB), and regulatory frameworks like the EU AI Act fostering trustworthy AI.4 By prioritising Physical AI - robots that learn from human demonstration, adapt to factories, and optimise supply chains - Europe can reclaim technological sovereignty, boost productivity, and generate high-skill jobs amid the AI infrastructure surge.2,3,4

References

1. https://singjupost.com/nvidia-ceo-jensen-huangs-interview-wef-davos-2026-transcript/

2. https://www.weforum.org/stories/2026/01/nvidia-ceo-jensen-huang-on-the-future-of-ai/

3. https://www.weforum.org/podcasts/meet-the-leader/episodes/conversation-with-jensen-huang-president-and-ceo-of-nvidia-5dd06ee82e/

4. https://blogs.nvidia.com/blog/davos-wef-blackrock-ceo-larry-fink-jensen-huang/

5. https://www.youtube.com/watch?v=__IaQ-d7nFk

6. https://www.youtube.com/watch?v=RvjRuiTLAM8

7. https://www.youtube.com/watch?v=hoDYYCyxMuE

8. https://www.weforum.org/meetings/world-economic-forum-annual-meeting-2026/sessions/conversation-with-jensen-huang-president-and-ceo-of-nvidia/

9. https://www.youtube.com/watch?v=bzC55pN9c1g

"The U.S. led the software era, but AI is software that you don't 'write'—you teach it. Europe can fuse its industrial capability with AI to lead in Physical AI and robotics. This is a once-in-a-generation opportunity." - Quote: Jensen Huang - CEO, Nvidia

‌

‌

Term: European option

"A European option is a financial contract giving the holder the right, but not the obligation, to buy (call) or sell (put) an underlying asset at a predetermined strike price, but only on the contract's expiration date, unlike American options that allow exercise anytime before expiry." - European option

Core definition and structure

A European option has the following defining features:1,2,3,4

  • Underlying asset - typically an equity index, single stock, bond, currency, commodity, interest rate or another derivative.
  • Option type - a call (right to buy) or a put (right to sell) the underlying asset.1,3,4
  • Strike price - the fixed price at which the underlying may be bought or sold if the option is exercised.1,2,3,4
  • Expiration date (maturity) - a single, pre-specified date on which exercise is permitted; there is no right to exercise before this date.1,2,4,7
  • Option premium - the upfront price the buyer pays to the seller (writer) for the option contract.2,4

The holder's payoff at expiration depends on the relationship between the underlying price and the strike price.1,3,4

Payoff profiles at expiry

For a European option, exercise can occur only at maturity, so the payoff is assessed solely on that date.1,2,4,7 Let S_T denote the underlying price at expiration, and K the strike price. The canonical payoff functions are:

  • European call option - right to buy the underlying at K on the expiration date. The payoff at expiry is max(S_T - K, 0): the holder exercises only if the underlying price exceeds the strike at expiry.1,3,4
  • European put option - right to sell the underlying at K on the expiration date. The payoff at expiry is max(K - S_T, 0): the holder exercises only if the underlying price is below the strike at expiry.1,3,4

Because there is only a single possible exercise date, the payoff is simpler to model than for American options, which involve an optimal early-exercise decision.4,6,7
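These payoff rules can be written in a few lines. The sketch below is illustrative only; the function names are our own:

```python
def call_payoff(s_t, k):
    """European call payoff at expiry: max(S_T - K, 0)."""
    return max(s_t - k, 0.0)

def put_payoff(s_t, k):
    """European put payoff at expiry: max(K - S_T, 0)."""
    return max(k - s_t, 0.0)

# With strike K = 100:
print(call_payoff(110.0, 100.0))  # 10.0 (in-the-money call)
print(call_payoff(90.0, 100.0))   # 0.0  (out-of-the-money call expires worthless)
print(put_payoff(90.0, 100.0))    # 10.0 (in-the-money put)
```

Note that the payoff is never negative: the worst case for the holder is letting the option lapse, so the buyer's loss is capped at the premium paid.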

Key characteristics and economic role

Right but not obligation

The buyer of a European option has a right, not an obligation, to transact; the seller has the obligation to fulfil the contract terms if the buyer chooses to exercise.1,2,3,4 If the option is out-of-the-money on the expiration date, the buyer simply allows it to expire worthless, losing only the paid premium.2,3,4

Exercise style vs geography

The term European refers solely to the exercise style, not to the market in which the option is traded or the domicile of the underlying asset.2,4,6,7 European-style options can be traded anywhere in the world, and many options traded on European exchanges are in fact American style.6,7

Uses: hedging, speculation and income

  • Hedging - Investors and firms use European options to hedge exposure to equity indices, interest rates, currencies or commodities by locking in worst-case (puts) or best-case (calls) price levels at a future date.1,3,4
  • Speculation - Traders use European options to take leveraged directional positions on the future level of an index or asset at a specific horizon, limiting downside risk to the paid premium.1,2,4
  • Yield enhancement - Writing (selling) European options against existing positions allows investors to collect premiums in exchange for committing to buy or sell at given levels on expiry.

Typical markets and settlement

In practice, European options are especially common for:4,5,6

  • Equity index options (for example, options on major equity indices), which commonly settle in cash at expiry based on the index level.5,6
  • Cash-settled options on rates, commodities, and volatility indices.
  • Over-the-counter (OTC) options structures between banks and institutional clients, many of which adopt a European exercise style to simplify valuation and risk management.2,5,6

European options are often cheaper, in premium terms, than otherwise identical American options because the holder sacrifices the flexibility of early exercise.2,4,5,6

European vs American options

Feature-by-feature comparison:

  • Exercise timing - European: only on the expiration date.1,2,4,7 American: any time up to and including expiration.2,4,6,7
  • Flexibility - European: lower; no early exercise.2,4,6 American: higher; early exercise may capture favourable price moves or dividend events.
  • Typical cost (premium) - European: generally lower, all else equal, due to reduced exercise flexibility.2,4,5,6 American: generally higher, reflecting the value of the early-exercise feature.5,6
  • Common underlyings - European: often indices and OTC contracts, frequently cash-settled.5,6 American: often single-name equities and exchange-traded options.
  • Valuation - European: closed-form pricing available under standard assumptions (for example, the Black-Scholes-Merton model).4 American: requires numerical methods (for example, binomial trees or finite-difference methods) because of the optimal early-exercise decision.

Determinants of European option value

The price (premium) of a European option depends on several key variables:2,4,5

  • Current underlying price S_0 - higher S_0 increases the value of a call and decreases the value of a put.
  • Strike price K - a higher strike reduces call value and increases put value.
  • Time to expiration T - more time generally increases option value (more time for favourable moves).
  • Volatility σ of the underlying - higher volatility raises both call and put values, as extreme outcomes become more likely.2
  • Risk-free interest rate r - higher r tends to increase call values and decrease put values, via discounting and cost-of-carry effects.2
  • Expected dividends or carry - expected cash flows paid by the underlying (for example, dividends on shares) usually reduce call values and increase put values, all else equal.2

For European options, these effects are most famously captured in the Black-Scholes-Merton option pricing framework, which provides closed-form solutions for the fair values of European calls and puts on non-dividend-paying stocks or indices under specific assumptions.4
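As a concrete illustration of that closed-form pricing, the sketch below implements the standard Black-Scholes-Merton formulas for a European call and put on a non-dividend-paying underlying (the function name and the parameter values in the example are illustrative assumptions, not source material):

```python
from math import exp, log, sqrt
from statistics import NormalDist

def bs_price(s0, k, t, r, sigma, kind="call"):
    """Black-Scholes-Merton price of a European option on a
    non-dividend-paying underlying.
    s0: spot price, k: strike, t: years to expiry,
    r: continuously compounded risk-free rate, sigma: volatility."""
    n = NormalDist().cdf  # standard normal CDF
    d1 = (log(s0 / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    if kind == "call":
        return s0 * n(d1) - k * exp(-r * t) * n(d2)
    return k * exp(-r * t) * n(-d2) - s0 * n(-d1)

# At-the-money one-year option, 20% volatility, 5% risk-free rate:
c = bs_price(100, 100, 1.0, 0.05, 0.20, "call")  # ≈ 10.45
p = bs_price(100, 100, 1.0, 0.05, 0.20, "put")   # ≈ 5.57
```

Note how the call is worth more than the put at the same strike: with a positive interest rate, the forward price of the underlying sits above the strike, favouring the call.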

Valuation insight: put-call parity

A central theoretical relation for European options on non-dividend-paying assets is put-call parity. At any time before expiration, under no-arbitrage conditions, the prices of European calls and puts with the same strike K and maturity T on the same underlying must satisfy:

C - P = S_0 - K e^(-rT)

where:

  • C is the price of the European call option.
  • P is the price of the European put option.
  • S_0 is the current underlying asset price.
  • K is the strike price.
  • r is the continuously compounded risk-free interest rate.
  • T is the time to maturity (in years).

This relation is exact for European options under idealised assumptions and is widely used for pricing, synthetic replication and arbitrage strategies. It holds precisely because European options share an identical single exercise date, whereas American options complicate parity relations due to early exercise possibilities.
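The parity relation can be checked numerically. Under risk-neutral geometric Brownian motion, Monte Carlo estimates of C and P computed on the same simulated terminal prices should satisfy it up to sampling noise (a sketch with arbitrary illustrative parameters, not a production pricer):

```python
import math
import random

def mc_european(s0, k, t, r, sigma, n_paths=200_000, seed=42):
    """Monte Carlo prices of a European call and put under risk-neutral
    GBM, using the same simulated terminal prices for both options."""
    rng = random.Random(seed)
    disc = math.exp(-r * t)                 # discount factor e^(-rT)
    drift = (r - 0.5 * sigma ** 2) * t      # risk-neutral log drift
    vol = sigma * math.sqrt(t)
    call = put = 0.0
    for _ in range(n_paths):
        s_t = s0 * math.exp(drift + vol * rng.gauss(0.0, 1.0))
        call += max(s_t - k, 0.0)
        put += max(k - s_t, 0.0)
    return disc * call / n_paths, disc * put / n_paths

c, p = mc_european(100, 100, 1.0, 0.05, 0.20)
lhs = c - p                               # C - P
rhs = 100 - 100 * math.exp(-0.05 * 1.0)   # S_0 - K e^(-rT) ≈ 4.88
# lhs and rhs agree up to Monte Carlo sampling noise.
```

Because each path's call payoff minus put payoff is exactly S_T - K, the parity gap here reflects only the sampling error in the simulated mean of S_T, which shrinks as n_paths grows.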

Limitations and risks

  • Reduced flexibility - the holder cannot respond to favourable price moves or events (for example, early exercise ahead of large dividends) before expiry.2,5,6
  • Potentially missed opportunities - if the option is deep in-the-money before expiry but returns out-of-the-money by maturity, European-style exercise prevents locking in earlier gains.2
  • Market and model risk - European options are sensitive to volatility, interest rates, and model assumptions used for pricing (for example, constant volatility in the Black-Scholes-Merton model).
  • Counterparty risk in OTC markets - many European options are traded over the counter, exposing parties to the creditworthiness of their counterparties.2,5

Best related strategy theorist: Fischer Black (with Scholes and Merton)

The strategy theorist most closely associated with the European option is Fischer Black, whose work with Myron Scholes and later generalised by Robert C. Merton provided the foundational pricing theory for European-style options.

Fischer Black's relationship to European options

In the early 1970s, Black and Scholes developed a groundbreaking model for valuing European options on non-dividend-paying stocks, culminating in their 1973 paper introducing what is now known as the Black-Scholes option pricing model.4 Merton independently extended and generalised the framework in a companion paper the same year, leading to the common label Black-Scholes-Merton.

The Black-Scholes-Merton model provides a closed-form formula for the fair value of European calls and, via put-call parity, European puts under assumptions such as geometric Brownian motion for the underlying price, continuous trading, no arbitrage and constant volatility and interest rates. This model fundamentally changed how markets think about the pricing and hedging of European options, making them central instruments in modern derivatives strategy and risk management.4

Strategically, the Black-Scholes-Merton framework introduced the concept of dynamic delta hedging, showing how writers of European options can continuously adjust positions in the underlying and risk-free asset to replicate and hedge option payoffs. This insight underpins many trading, risk management and structured product strategies involving European options.
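The hedge ratio at the heart of dynamic delta hedging is the option's delta, which for a European call under Black-Scholes-Merton is N(d1). A minimal sketch (the function name is our own, and the example parameters are assumptions):

```python
from math import log, sqrt
from statistics import NormalDist

def call_delta(s0, k, t, r, sigma):
    """Black-Scholes delta of a European call, N(d1): the number of
    units of the underlying held per option to neutralise small moves."""
    d1 = (log(s0 / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
    return NormalDist().cdf(d1)

# An at-the-money one-year call (r = 5%, sigma = 20%) hedges with
# roughly 0.64 units of the underlying. As the spot, time to expiry,
# and volatility evolve, delta changes, so the hedge is rebalanced:
# continuously in theory, periodically in practice.
delta = call_delta(100, 100, 1.0, 0.05, 0.20)
```

A writer who sells the call and continuously holds delta units of the underlying, financing the position at the risk-free rate, replicates the option payoff; this is the replication argument on which the closed-form price rests.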

Biography of Fischer Black

  • Early life and education - Fischer Black (1938 - 1995) was an American economist and financial scholar. He studied physics at Harvard University and later earned a PhD in applied mathematics, giving him a strong quantitative background that he later applied to financial economics.
  • Professional career - Black worked at Arthur D. Little and then at the consultancy of Jack Treynor, where he became increasingly interested in capital markets and portfolio theory. He later joined the University of Chicago and then the Massachusetts Institute of Technology (MIT), where he collaborated with leading financial economists.
  • Black-Scholes model - While at the University of Chicago and subsequently at MIT, Black worked with Myron Scholes on the option pricing problem, leading to the 1973 publication that introduced the Black-Scholes formula for European options. Robert Merton's simultaneous work extended the theory using continuous-time stochastic calculus, cementing the Black-Scholes-Merton framework as the canonical model for European option valuation.
  • Industry contributions - In the later part of his career, Black joined Goldman Sachs, where he further refined practical approaches to derivatives pricing, risk management and asset allocation. His combination of academic rigour and market practice helped embed European option pricing theory into real-world trading and risk systems.
  • Legacy - Although Black died before the 1997 Nobel Prize in Economic Sciences was awarded to Scholes and Merton for their work on option pricing, the Nobel committee explicitly acknowledged Black's indispensable contribution. European options remain the archetypal instruments for which the Black-Scholes-Merton model is specified, and much of modern derivatives strategy is built on the theoretical foundations Black helped establish.

Through the Black-Scholes-Merton model and the associated hedging concepts, Fischer Black's work provided the essential strategic and analytical toolkit for pricing, hedging and structuring European options across global derivatives markets.

References

1. https://www.learnsignal.com/blog/european-options/

2. https://cbonds.com/glossary/european-option/

3. https://www.angelone.in/knowledge-center/futures-and-options/european-option

4. https://corporatefinanceinstitute.com/resources/derivatives/european-option/

5. https://www.sofi.com/learn/content/american-vs-european-options/

6. https://www.cmegroup.com/education/courses/introduction-to-options/understanding-the-difference-european-vs-american-style-options.html

7. https://en.wikipedia.org/wiki/Option_style

"A European option is a financial contract giving the holder the right, but not the obligation, to buy (call) or sell (put) an underlying asset at a predetermined strike price, but only on the contract's expiration date, unlike American options that allow exercise anytime before expiry." - Term: European option

‌

‌
© 2026 Global Advisors | Quantified Strategy Consulting, All rights reserved.