Our selection of the top business news sources on the web.
AM edition. Issue number 1210
Latest 10 stories.
"The limits of my language mean the limits of my world." - Ludwig Wittgenstein - Austrian philosopher
The Quote and Its Significance
This deceptively simple statement from Ludwig Wittgenstein's Tractatus Logico-Philosophicus encapsulates one of the most profound insights in twentieth-century philosophy. Published in 1921, this aphorism challenges our fundamental assumptions about the relationship between language, thought, and reality itself. Wittgenstein argues that whatever lies beyond the boundaries of what we can articulate in language effectively ceases to exist within our experiential and conceptual universe.
Ludwig Wittgenstein: The Philosopher's Life and Context
Ludwig Josef Johann Wittgenstein (1889-1951) was an Austrian-British philosopher whose work fundamentally reshaped twentieth-century philosophy. Born into one of Vienna's wealthiest industrial families, Wittgenstein initially trained as an engineer before becoming captivated by the philosophical foundations of mathematics and logic. His intellectual journey took him from Cambridge, where he studied under Bertrand Russell, to the trenches of the First World War, where he served as an officer in the Austro-Hungarian army.
The Tractatus Logico-Philosophicus, completed during and immediately after the war, represents Wittgenstein's attempt to solve what he perceived as the fundamental problems of philosophy through rigorous logical analysis. Written in a highly condensed, aphoristic style, the work presents a complete philosophical system in fewer than eighty pages. Wittgenstein believed he had definitively resolved the major philosophical questions of his era, and the book's famous closing proposition-"Whereof one cannot speak, thereof one must be silent"2-reflects his conviction that philosophy's task is to clarify the logical structure of language and thought, not to generate new doctrines.
The Philosophical Context: Logic and Language
To understand Wittgenstein's assertion about language and world, one must grasp the intellectual ferment of early twentieth-century philosophy. The period witnessed an unprecedented focus on logic as the foundation of philosophical inquiry. Wittgenstein's predecessors and contemporaries-particularly Gottlob Frege and Bertrand Russell-had developed symbolic logic as a tool for analysing the structure of propositions and their relationship to reality.
Wittgenstein adopted and radicalised this approach. He conceived of language as fundamentally pictorial: propositions are pictures of possible states of affairs in the world.1 This "picture theory of meaning" suggests that language mirrors reality through a shared logical structure. A proposition succeeds in representing reality precisely because it shares the same logical form as the fact it depicts. Conversely, whatever cannot be pictured in language-whatever has no logical form that corresponds to possible states of affairs-lies beyond the boundaries of meaningful discourse.
This framework led Wittgenstein to a startling conclusion: most traditional philosophical problems are not genuinely solvable but rather dissolve once we recognise them as violations of logic's boundaries.2 Metaphysical questions about the nature of consciousness, ethics, aesthetics, and the self cannot be answered because they attempt to speak about matters that transcend the logical structure of language. They are not false; they are senseless-they fail to represent anything at all.
The Limits of Language as the Limits of Thought
Wittgenstein's proposition operates on multiple levels. First, it establishes an identity between linguistic and conceptual boundaries. We cannot think what we cannot say; the limits of language are simultaneously the limits of thought.3 This does not mean that reality itself is limited by language, but rather that our access to and comprehension of reality is necessarily mediated through the logical structures of language. What lies beyond language is not necessarily non-existent, but it is necessarily inaccessible to rational discourse and understanding.
Second, the statement reflects Wittgenstein's conviction that logic is not merely a tool for analysing language but is constitutive of the world itself. "Logic fills the world: the limits of the world are also its limits."3 This means that the logical structure that governs meaningful language is the same structure that governs reality. There is no gap between the logical form of language and the logical form of the world; they are isomorphic.
Third, and most radically, Wittgenstein suggests that our world-the world as we experience and understand it-is fundamentally shaped by our linguistic capacities. Different languages, with different logical structures, would generate different worlds. This insight anticipates later developments in philosophy of language and cognitive science, though Wittgenstein himself did not develop it in this direction.
Leading Theorists and Intellectual Influences
Gottlob Frege (1848-1925)
Frege, a German logician and philosopher of language, pioneered the formal analysis of propositions and their truth conditions. His distinction between sense and reference-between what a proposition means and what it refers to-profoundly influenced Wittgenstein's thinking. Frege demonstrated that the meaning of a proposition cannot be reduced to its psychological effects on speakers; rather, meaning is an objective, logical matter. Wittgenstein adopted this objectivity whilst radicalising Frege's insights by insisting that only propositions with determinate logical structure possess genuine sense.
Bertrand Russell (1872-1970)
Russell, Wittgenstein's mentor at Cambridge, developed the theory of descriptions and made pioneering contributions to symbolic logic. Russell believed that logic could serve as an instrument for philosophical clarification, dissolving pseudo-problems that arose from linguistic confusion. Wittgenstein absorbed this methodological commitment but pushed it further, arguing that philosophy's task is not to construct theories but to clarify the logical structure of language itself.2 Russell's influence is evident throughout the Tractatus, though Wittgenstein ultimately diverged from Russell's realism about logical objects.
Arthur Schopenhauer (1788-1860)
Though separated from Wittgenstein by decades, Schopenhauer's pessimistic philosophy and his insistence that reality transcends rational representation deeply influenced the Tractatus. Schopenhauer argued that the world as we perceive it through the lens of space, time, and causality is merely appearance; the thing-in-itself remains forever beyond conceptual grasp. Wittgenstein echoes this distinction when he insists that value, meaning, and the self lie outside the world of facts and therefore outside the scope of language. What matters most-ethics, aesthetics, the meaning of life-cannot be said; it can only be shown through how one lives.
The Radical Implications
Wittgenstein's claim that language limits the world carries several radical implications. First, it suggests that the expansion of language is the expansion of reality as we can know and discuss it. New concepts, new logical structures, new ways of organising experience through language literally expand the boundaries of our world. Conversely, what cannot be expressed in any language remains forever beyond our reach.
Second, it implies a profound humility about philosophy's ambitions. If the limits of language are the limits of the world, then philosophy cannot transcend language to access some higher reality or ultimate truth. Philosophy's proper task is not to construct metaphysical systems but to clarify the logical structure of the language we already possess.2 This therapeutic conception of philosophy-philosophy as a cure for confusion rather than a path to hidden truths-became enormously influential in twentieth-century thought.
Third, the proposition suggests that silence is not a failure of language but its proper boundary. The most important matters-how one should live, what gives life meaning, the nature of the self-cannot be articulated. They can only be demonstrated through action and lived experience. This explains Wittgenstein's famous closing remark: "Whereof one cannot speak, thereof one must be silent."2 This is not a counsel of despair but an acknowledgement of language's proper limits and the realm of the inexpressible.
Legacy and Contemporary Relevance
Wittgenstein's insight about language and world has reverberated through subsequent philosophy, cognitive science, and artificial intelligence research. The question of whether language shapes thought or merely expresses pre-linguistic thoughts remains contested, but Wittgenstein's formulation of the problem has proven enduringly fertile. Contemporary philosophers of language, cognitive linguists, and theorists of artificial intelligence continue to grapple with the relationship between linguistic structure and conceptual possibility.
The Tractatus also established a new standard for philosophical rigour and clarity. By insisting that meaningful propositions must have determinate logical structure and correspond to possible states of affairs, Wittgenstein set a demanding criterion for philosophical discourse. Much of what passes for philosophy, he suggested, fails this test and should be recognised as senseless rather than debated as true or false.2
Remarkably, Wittgenstein himself later abandoned many of the Tractatus's central doctrines. In his later work, particularly the Philosophical Investigations, he rejected the picture theory of meaning and argued that language's meaning derives from its use in diverse forms of life rather than from a single logical structure. Yet even in this later philosophy, the fundamental insight persists: understanding language is the key to understanding the limits and possibilities of human thought and experience.
Conclusion: The Enduring Insight
"The limits of my language mean the limits of my world" remains a cornerstone of modern philosophy precisely because it captures a profound truth about the human condition. We are creatures whose access to reality is necessarily mediated through language. Whatever we can think, we can think only through the conceptual and linguistic resources available to us. This is not a limitation to be lamented but a fundamental feature of human existence. By recognising this, we gain clarity about what philosophy can and cannot accomplish, and we develop a more realistic and humble understanding of the relationship between language, thought, and reality.
References
1. https://www.goodreads.com/work/quotes/3157863-logisch-philosophische-abhandlung?page=2
2. https://www.coursehero.com/lit/Tractatus-Logico-Philosophicus/quotes/
3. https://www.goodreads.com/work/quotes/3157863-logisch-philosophische-abhandlung
4. https://www.sparknotes.com/philosophy/tractatus/quotes/page/5/
5. https://www.buboquote.com/en/quote/4462-wittgenstein-what-can-be-said-at-all-can-be-said-clearly-and-what-we-cannot-talk-about-we-must-pass

"The U.S. led the software era, but AI is software that you don't 'write'-you teach it. Europe can fuse its industrial capability with AI to lead in Physical AI and robotics. This is a once-in-a-generation opportunity." - Jensen Huang - CEO, Nvidia
In a compelling dialogue at the World Economic Forum Annual Meeting 2026 in Davos, Switzerland, Nvidia CEO Jensen Huang articulated a transformative vision for artificial intelligence, distinguishing it from traditional software paradigms and spotlighting Europe's unique position to lead in Physical AI and robotics.1,2,4 Speaking with World Economic Forum interim co-chair Larry Fink of BlackRock, Huang emphasised AI's evolution into foundational infrastructure, driving the largest build-out in human history across energy, chips, cloud, models, and applications.2,3,4 The session, held under the meeting's theme of 'A Spirit of Dialogue', addressed AI's potential to reshape productivity, labour, and global economies while countering fears of job displacement with evidence of massive investments creating opportunities worldwide.2,3
The Context of the Quote
Huang's statement emerged amid discussions on AI as a platform shift akin to the internet and mobile cloud, but uniquely capable of processing unstructured data in real time.2 He described AI not as code to be written, but as intelligence to be taught, leveraging local language and culture as a 'fundamental natural resource.'2,4 Turning to Europe, Huang highlighted its enduring industrial and manufacturing prowess - from skilled trades to advanced production - as a counterbalance to the US's dominance in the software era.4 By integrating AI with physical systems, Europe could pioneer 'Physical AI,' where machines learn to interact with the real world through robotics, automation, and embodied intelligence, presenting a rare strategic opening.4,1
This perspective aligns with Huang's broader advocacy for nations to develop sovereign AI ecosystems, treating it as critical infrastructure like electricity or roads.4 He noted record venture capital inflows - over $100 billion in 2025 alone - into AI-native startups in manufacturing, healthcare, and finance, underscoring the urgency for industrial regions like Europe to invest in this infrastructure to capture economic benefits and avoid being sidelined.2,4
Jensen Huang: Architect of the AI Revolution
Born in Taiwan in 1963, Jensen Huang co-founded Nvidia in 1993 with a vision to revolutionise graphics processing, initially targeting gaming and visualisation.4 Under his leadership, Nvidia pivoted decisively to AI and accelerated computing, with its GPUs becoming indispensable for training large language models and deep learning.1,2 Today, as president and CEO, Huang oversees a company valued in trillions, powering the AI boom through innovations like the Blackwell architecture and CUDA software ecosystem. His prescient bets - from CUDA's democratisation of GPU programming to Omniverse for digital twins - have positioned Nvidia at the heart of Physical AI, robotics, and industrial applications.4 Huang's philosophy, blending engineering rigour with geopolitical insight, has made him a sought-after voice at forums like Davos, where he champions inclusive AI growth.2,3
Leading Theorists in Physical AI and Robotics
The concepts underpinning Huang's vision trace to pioneering theorists who bridged AI with physical embodiment. Norbert Wiener, father of cybernetics in the 1940s, laid foundational ideas on feedback loops and control systems essential for robotic autonomy, influencing early industrial automation.4 Rodney Brooks, co-founder of iRobot and Rethink Robotics, advanced 'embodied AI' in the 1980s-90s through subsumption architecture, arguing intelligence emerges from sensorimotor interactions rather than abstract reasoning - a direct precursor to Physical AI.4
- Yann LeCun (Meta AI chief) and Andrew Ng (Landing AI founder) extended deep learning to vision and robotics; LeCun's convolutional networks enable machines to 'see' and manipulate objects, while Ng's work on industrial AI democratises teaching via demonstration.4
- Pieter Abbeel (Covariant) and Sergey Levine (UC Berkeley) lead in reinforcement learning for robotics, developing algorithms where AI learns dexterous tasks like grasping through trial-and-error, fusing software 'teaching' with hardware execution.4
- In Europe, Wolfram Burgard (EU AI pioneer) and teams at Bosch and Siemens advance probabilistic robotics, integrating AI with manufacturing for predictive maintenance and adaptive assembly lines.4
Huang synthesises these threads, amplified by Nvidia's platforms like Isaac for robot simulation and Jetson for edge AI, enabling scalable Physical AI deployment.4 Europe's theorists and firms, from DeepMind's reinforcement learning to Germany's Industry 4.0 initiatives, are well-placed to lead by combining theoretical depth with industrial scale.
Implications for Industrial Strategy
Huang's call resonates with Europe's strengths: a €2.5 trillion manufacturing sector, leadership in automotive robotics (e.g., Volkswagen, ABB), and regulatory frameworks like the EU AI Act fostering trustworthy AI.4 By prioritising Physical AI - robots that learn from human demonstration, adapt to factories, and optimise supply chains - Europe can reclaim technological sovereignty, boost productivity, and generate high-skill jobs amid the AI infrastructure surge.2,3,4
References
1. https://singjupost.com/nvidia-ceo-jensen-huangs-interview-wef-davos-2026-transcript/
2. https://www.weforum.org/stories/2026/01/nvidia-ceo-jensen-huang-on-the-future-of-ai/
3. https://www.weforum.org/podcasts/meet-the-leader/episodes/conversation-with-jensen-huang-president-and-ceo-of-nvidia-5dd06ee82e/
4. https://blogs.nvidia.com/blog/davos-wef-blackrock-ceo-larry-fink-jensen-huang/
5. https://www.youtube.com/watch?v=__IaQ-d7nFk
6. https://www.youtube.com/watch?v=RvjRuiTLAM8
7. https://www.youtube.com/watch?v=hoDYYCyxMuE
8. https://www.weforum.org/meetings/world-economic-forum-annual-meeting-2026/sessions/conversation-with-jensen-huang-president-and-ceo-of-nvidia/
9. https://www.youtube.com/watch?v=bzC55pN9c1g

"A European option is a financial contract giving the holder the right, but not the obligation, to buy (call) or sell (put) an underlying asset at a predetermined strike price, but only on the contract's expiration date, unlike American options that allow exercise anytime before expiry. " - European option
Core definition and structure
A European option has the following defining features:1,2,3,4
- Underlying asset - typically an equity index, single stock, bond, currency, commodity, interest rate or another derivative.
- Option type - a call (right to buy) or a put (right to sell) the underlying asset.1,3,4
- Strike price - the fixed price at which the underlying may be bought or sold if the option is exercised.1,2,3,4
- Expiration date (maturity) - a single, pre-specified date on which exercise is permitted; there is no right to exercise before this date.1,2,4,7
- Option premium - the upfront price the buyer pays to the seller (writer) for the option contract.2,4
The holder's payoff at expiration depends on the relationship between the underlying price and the strike price.1,3,4
Payoff profiles at expiry
For a European option, exercise can occur only at maturity, so the payoff is assessed solely on that date.1,2,4,7 Let S_T denote the underlying price at expiration, and K the strike price. The canonical payoff functions are:
- European call option - right to buy the underlying at K on the expiration date. The payoff at expiry is \max(S_T - K, 0): the holder exercises only if the underlying price exceeds the strike at expiry.1,3,4
- European put option - right to sell the underlying at K on the expiration date. The payoff at expiry is \max(K - S_T, 0): the holder exercises only if the underlying price is below the strike at expiry.1,3,4
Because there is only a single possible exercise date, the payoff is simpler to model than for American options, which involve an optimal early-exercise decision.4,6,7
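To make the payoff definitions concrete, here is a minimal Python sketch (an illustration, not drawn from the cited sources) that evaluates both payoffs at expiry:

```python
def european_call_payoff(s_t: float, k: float) -> float:
    """Payoff of a European call at expiry: max(S_T - K, 0)."""
    return max(s_t - k, 0.0)


def european_put_payoff(s_t: float, k: float) -> float:
    """Payoff of a European put at expiry: max(K - S_T, 0)."""
    return max(k - s_t, 0.0)


# Example: strike K = 100, underlying finishes at S_T = 112.
# The call pays 12; the put expires worthless.
print(european_call_payoff(112.0, 100.0))  # 12.0
print(european_put_payoff(112.0, 100.0))   # 0.0
```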
Key characteristics and economic role
Right but not obligation
The buyer of a European option has a right, not an obligation, to transact; the seller has the obligation to fulfil the contract terms if the buyer chooses to exercise.1,2,3,4 If the option is out-of-the-money on the expiration date, the buyer simply allows it to expire worthless, losing only the paid premium.2,3,4
Exercise style vs geography
The term European refers solely to the exercise style, not to the market in which the option is traded or the domicile of the underlying asset.2,4,6,7 European-style options can be traded anywhere in the world, and many options traded on European exchanges are in fact American style.6,7
Uses: hedging, speculation and income
- Hedging - Investors and firms use European options to hedge exposure to equity indices, interest rates, currencies or commodities by locking in worst-case (puts) or best-case (calls) price levels at a future date.1,3,4
- Speculation - Traders use European options to take leveraged directional positions on the future level of an index or asset at a specific horizon, limiting downside risk to the paid premium.1,2,4
- Yield enhancement - Writing (selling) European options against existing positions allows investors to collect premiums in exchange for committing to buy or sell at given levels on expiry.
Typical markets and settlement
In practice, European options are especially common for:4,5,6
- Equity index options (for example, options on major equity indices), which commonly settle in cash at expiry based on the index level.5,6
- Cash-settled options on rates, commodities, and volatility indices.
- Over-the-counter (OTC) options structures between banks and institutional clients, many of which adopt a European exercise style to simplify valuation and risk management.2,5,6
European options are often cheaper, in premium terms, than otherwise identical American options because the holder sacrifices the flexibility of early exercise.2,4,5,6
European vs American options
| Feature | European option | American option |
| --- | --- | --- |
| Exercise timing | Only on expiration date.1,2,4,7 | Any time up to and including expiration.2,4,6,7 |
| Flexibility | Lower - no early exercise.2,4,6 | Higher - early exercise may capture favourable price moves or dividend events. |
| Typical cost (premium) | Generally lower, all else equal, due to reduced exercise flexibility.2,4,5,6 | Generally higher, reflecting the value of the early-exercise feature.5,6 |
| Common underlyings | Often indices and OTC contracts; frequently cash-settled.5,6 | Often single-name equities and exchange-traded options. |
| Valuation | Closed-form pricing available under standard assumptions (for example, the Black-Scholes-Merton model).4 | Requires numerical methods (for example, binomial trees or finite-difference methods) because of optimal early-exercise decisions. |
Determinants of European option value
The price (premium) of a European option depends on several key variables:2,4,5
- Current underlying price S_0 - higher S_0 increases the value of a call and decreases the value of a put.
- Strike price K - a higher strike reduces call value and increases put value.
- Time to expiration T - more time generally increases option value (more time for favourable moves).
- Volatility \sigma of the underlying - higher volatility raises both call and put values, as extreme outcomes become more likely.2
- Risk-free interest rate r - higher r tends to increase call values and decrease put values, via discounting and cost-of-carry effects.2
- Expected dividends or carry - expected cash flows paid by the underlying (for example, dividends on shares) usually reduce call values and increase put values, all else equal.2
For European options, these effects are most famously captured in the Black-Scholes-Merton option pricing framework, which provides closed-form solutions for the fair values of European calls and puts on non-dividend-paying stocks or indices under specific assumptions.4
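As a concrete illustration of that closed-form valuation, the sketch below implements the standard Black-Scholes-Merton formulas for a European call and put on a non-dividend-paying asset; the input values are assumptions chosen purely for demonstration:

```python
from math import exp, log, sqrt
from statistics import NormalDist


def bsm_price(s0, k, r, sigma, t, call=True):
    """Black-Scholes-Merton price of a European option (no dividends)."""
    n = NormalDist()  # standard normal distribution
    d1 = (log(s0 / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    if call:
        return s0 * n.cdf(d1) - k * exp(-r * t) * n.cdf(d2)
    return k * exp(-r * t) * n.cdf(-d2) - s0 * n.cdf(-d1)


# Illustrative inputs: S0 = 100, K = 100, r = 5%, sigma = 20%, T = 1 year.
call = bsm_price(100.0, 100.0, 0.05, 0.20, 1.0, call=True)
put = bsm_price(100.0, 100.0, 0.05, 0.20, 1.0, call=False)
print(f"call = {call:.2f}, put = {put:.2f}")  # call = 10.45, put = 5.57
```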
Valuation insight: put-call parity
A central theoretical relation for European options on non-dividend-paying assets is put-call parity. At any time before expiration, under no-arbitrage conditions, the prices of European calls and puts with the same strike K and maturity T on the same underlying must satisfy:
C - P = S_0 - K e^{-rT}
where:
- C is the price of the European call option.
- P is the price of the European put option.
- S_0 is the current underlying asset price.
- K is the strike price.
- r is the continuously compounded risk-free interest rate.
- T is the time to maturity (in years).
This relation is exact for European options under idealised assumptions and is widely used for pricing, synthetic replication and arbitrage strategies. It holds precisely because European options share an identical single exercise date, whereas American options complicate parity relations due to early exercise possibilities.
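A quick numerical sanity check of the parity relation, reusing the illustrative Black-Scholes-Merton prices from the sketch above (all figures are assumptions, not market data):

```python
from math import exp, isclose

# Illustrative European prices: S0 = 100, K = 100, r = 5%, T = 1 year.
c, p = 10.45, 5.57           # call and put premiums (assumed)
s0, k, r, t = 100.0, 100.0, 0.05, 1.0

lhs = c - p                  # C - P
rhs = s0 - k * exp(-r * t)   # S0 - K e^(-rT)
print(round(lhs, 2), round(rhs, 2))  # 4.88 4.88
# A material gap between the two sides would signal an arbitrage opportunity.
assert isclose(lhs, rhs, abs_tol=0.01)
```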
Limitations and risks
- Reduced flexibility - the holder cannot respond to favourable price moves or events (for example, early exercise ahead of large dividends) before expiry.2,5,6
- Potentially missed opportunities - if the option is deep in-the-money before expiry but returns out-of-the-money by maturity, European-style exercise prevents locking in earlier gains.2
- Market and model risk - European options are sensitive to volatility, interest rates, and model assumptions used for pricing (for example, constant volatility in the Black-Scholes-Merton model).
- Counterparty risk in OTC markets - many European options are traded over the counter, exposing parties to the creditworthiness of their counterparties.2,5
Best related strategy theorist: Fischer Black (with Scholes and Merton)
The strategy theorist most closely associated with the European option is Fischer Black, whose work with Myron Scholes and later generalised by Robert C. Merton provided the foundational pricing theory for European-style options.
Fischer Black's relationship to European options
In the early 1970s, Black and Scholes developed a groundbreaking model for valuing European options on non-dividend-paying stocks, culminating in their 1973 paper introducing what is now known as the Black-Scholes option pricing model.4 Merton independently extended and generalised the framework in a companion paper the same year, leading to the common label Black-Scholes-Merton.
The Black-Scholes-Merton model provides a closed-form formula for the fair value of European calls and, via put-call parity, European puts under assumptions such as geometric Brownian motion for the underlying price, continuous trading, no arbitrage and constant volatility and interest rates. This model fundamentally changed how markets think about the pricing and hedging of European options, making them central instruments in modern derivatives strategy and risk management.4
Strategically, the Black-Scholes-Merton framework introduced the concept of dynamic delta hedging, showing how writers of European options can continuously adjust positions in the underlying and risk-free asset to replicate and hedge option payoffs. This insight underpins many trading, risk management and structured product strategies involving European options.
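To see what delta hedging means in the simplest case, the hypothetical sketch below computes the Black-Scholes-Merton delta of a European call, N(d1), and the share position a writer of the option would hold against it; the inputs are illustrative:

```python
from math import log, sqrt
from statistics import NormalDist


def bsm_call_delta(s0, k, r, sigma, t):
    """Delta of a European call under Black-Scholes-Merton: N(d1)."""
    d1 = (log(s0 / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
    return NormalDist().cdf(d1)


# A writer of 1,000 calls (S0 = 100, K = 100, r = 5%, sigma = 20%, T = 1y)
# buys delta * 1,000 shares so that small moves in the underlying net out.
delta = bsm_call_delta(100.0, 100.0, 0.05, 0.20, 1.0)
print(f"delta = {delta:.3f}; hedge = {1000 * delta:.0f} shares")  # 0.637; 637
```

In practice the hedge is rebalanced as the underlying price, volatility and time to expiry change, which is what makes the strategy dynamic.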
Biography of Fischer Black
- Early life and education - Fischer Black (1938 - 1995) was an American economist and financial scholar. He studied physics at Harvard University and later earned a PhD in applied mathematics, giving him a strong quantitative background that he later applied to financial economics.
- Professional career - Black worked at Arthur D. Little, where his colleague Jack Treynor drew him into capital markets and portfolio theory, before setting up his own financial consultancy. He later joined the University of Chicago and then the Massachusetts Institute of Technology (MIT), where he collaborated with leading financial economists.
- Black-Scholes model - In the late 1960s and early 1970s, Black worked with Myron Scholes on the option pricing problem, leading to the 1973 publication that introduced the Black-Scholes formula for European options. Robert Merton's contemporaneous work extended the theory using continuous-time stochastic calculus, cementing the Black-Scholes-Merton framework as the canonical model for European option valuation.
- Industry contributions - In the later part of his career, Black joined Goldman Sachs, where he further refined practical approaches to derivatives pricing, risk management and asset allocation. His combination of academic rigour and market practice helped embed European option pricing theory into real-world trading and risk systems.
- Legacy - Although Black died before the 1997 Nobel Prize in Economic Sciences was awarded to Scholes and Merton for their work on option pricing, the Nobel committee explicitly acknowledged Black's indispensable contribution. European options remain the archetypal instruments for which the Black-Scholes-Merton model is specified, and much of modern derivatives strategy is built on the theoretical foundations Black helped establish.
Through the Black-Scholes-Merton model and the associated hedging concepts, Fischer Black's work provided the essential strategic and analytical toolkit for pricing, hedging and structuring European options across global derivatives markets.
References
1. https://www.learnsignal.com/blog/european-options/
2. https://cbonds.com/glossary/european-option/
3. https://www.angelone.in/knowledge-center/futures-and-options/european-option
4. https://corporatefinanceinstitute.com/resources/derivatives/european-option/
5. https://www.sofi.com/learn/content/american-vs-european-options/
6. https://www.cmegroup.com/education/courses/introduction-to-options/understanding-the-difference-european-vs-american-style-options.html
7. https://en.wikipedia.org/wiki/Option_style

"For the first time in human history, we have access to systems that do not just passively store information, but actively work against that information we give it while we sleep and do other things-systems that can classify, route, summarize, surface, or nudge." - Nate B. Jones - On "Second Brains"
Context of the Quote
This striking observation comes from Nate B. Jones in his video Why 2026 Is the Year to Build a Second Brain (And Why You NEED One), where he argues that human brains were never designed for storage but for thinking.1 Jones highlights the cognitive tax of forcing memory onto our minds, which leads to forgotten details in relationships and missed opportunities.1 Traditional systems demand effort at inopportune moments-like tagging notes during a meeting or drive-forcing users to handle classification, routing, and organisation in real time.1
Jones contrasts this with AI-powered second brains: frictionless systems where capturing a thought takes seconds, after which AI classifiers and routers automatically sort it into buckets like people, projects, ideas, or tasks-without user intervention.1 These systems include bouncers to filter junk, ensuring trust and preventing the 'junk drawer' effect that kills most note-taking apps.1 The result is an 'AI loop' that works tirelessly, extracting details, writing summaries, and maintaining a clean memory layer even when the user sleeps or focuses elsewhere.1
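Jones does not publish code for this loop, but the capture-classify-route pattern he describes can be sketched in a few lines of Python. Everything here - the bucket names, the keyword-based stand-in for an AI classifier, the 'bouncer' junk filter - is hypothetical scaffolding, not his implementation; a real build would call a language model where the toy rules sit:

```python
BUCKETS = {
    "people": ("met with", "call with", "birthday"),
    "projects": ("deadline", "deliverable", "launch"),
    "ideas": ("what if", "idea:", "concept"),
    "tasks": ("todo", "remind me", "buy"),
}


def bouncer(note: str) -> bool:
    """Filter junk before it reaches storage (Jones's 'bouncer')."""
    return len(note.strip()) > 3


def route(note: str) -> str:
    """Toy classifier: a real system would ask an LLM to pick the bucket."""
    lowered = note.lower()
    for bucket, cues in BUCKETS.items():
        if any(cue in lowered for cue in cues):
            return bucket
    return "inbox"  # undecided notes wait for the next AI pass


for note in ("Remind me to send the deck", "What if onboarding were one click?"):
    if bouncer(note):
        print(route(note), "<-", note)  # tasks / ideas
```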
Who is Nate B. Jones?
Nate B. Jones is a prominent voice in AI strategy and productivity, running the YouTube channel AI News & Strategy Daily with over 122,000 subscribers.1 He produces content on leveraging AI for career enhancement, building no-code apps, and creating personal knowledge systems.4,5 Jones shares practical guides, such as his Bridge the Implementation Gap: Build Your AI Second Brain, which outlines step-by-step setups using tools like Notion, Obsidian, and Mem.3
His work targets knowledge workers and teams, addressing pitfalls like perfectionism and tool overload.3 In another video, How I Built a Second Brain with AI (The 4 Meta-Skills), he demonstrates offloading cognitive load through AI-driven reflection, identity debugging, and frameworks that enable clearer thinking and execution.2 Jones exemplifies rapid AI application, such as building a professional-looking travel app in ChatGPT in 25 minutes without code.4 His philosophy: AI second brains create compounding assets that reduce information chaos, boost decision-making, and free humans for deep work.3
Backstory of 'Second Brains'
The concept of a second brain builds on decades of personal knowledge management (PKM). It gained traction with Tiago Forte, whose 2022 book Building a Second Brain popularised the CODE framework: Capture, Organise, Distil, Express. Forte's system emphasises turning notes into actionable insights, but relies heavily on user-driven organisation-prone to failure due to taxonomy decisions at capture time.1
Pre-AI tools like Evernote and Roam Research introduced linking and search, yet still demanded active sorting.3 Jones evolves this into AI-native systems, where machine learning handles the heavy lifting: classifiers decide buckets, summarisers extract essence, and nudges surface relevance.1,3 This aligns with 2026's projected AI maturity, making frictionless capture (under 5 seconds) viable and consistent.1
Leading Theorists in AI-Augmented Cognition
- Tiago Forte: Pioneer of modern second brains. His PARA method (Projects, Areas, Resources, Archives) structures knowledge for action. Forte stresses 'progressive summarisation' to distil notes, influencing AI adaptations like Jones's sorters and extractors.3
- Andy Matuschak: Originator of the 'evergreen notes' concept that shaped networked tools like Roam. Advocates spaced repetition and networked thought, arguing brains excel at pattern-matching, not rote storage-echoed in Jones's anti-junk-drawer bouncers.1
- Nick Milo: Obsidian evangelist, promotes 'linking your thinking' via bi-directional links. His work prefigures AI surfacing of connections across notes.3
- David Allen: Creator of GTD (Getting Things Done). Pioneered ubiquitous capture to take open loops off the mind, but his system is entirely manual; AI second brains automate his 'next actions' routing.1
- Herbert Simon: Nobel economist on bounded rationality. Coined 'satisficing'-his ideas underpin why AI classifiers beat human taxonomy, freeing mental bandwidth.1
These theorists converge on offloading storage to amplify thinking. Jones synthesises their insights with AI, creating systems that not only store but work-classifying, nudging, and evolving autonomously.1,2,3
References
1. https://www.youtube.com/watch?v=0TpON5T-Sw4
2. https://www.youtube.com/watch?v=0k6IznDODPA
3. https://www.natebjones.com/prompts-and-guides/products/second-brain
4. https://natesnewsletter.substack.com/p/i-built-a-10k-looking-ai-app-in-chatgpt
5. https://www.youtube.com/watch?v=UhyxDdHuM0A

"ROI doesn't come from creating a very large model; 95% of work can happen with models of 20 or 50 billion parameters." - Ashwini Vaishnaw - Minister of Electronics and IT, India
Delivered at the World Economic Forum (WEF) in Davos 2026, this statement by Ashwini Vaishnaw, India's Minister of Electronics and Information Technology, encapsulates a pragmatic approach to artificial intelligence deployment amid global discussions on technology sovereignty and economic impact1,2. Speaking under the theme 'A Spirit of Dialogue' from 19 to 23 January 2026, Vaishnaw positioned India not merely as a consumer of foreign AI but as a co-creator, emphasising efficiency over scale in model development1. The quote emerged during his rebuttal to IMF Managing Director Kristalina Georgieva's characterisation of India as a 'second-tier' AI power, with Vaishnaw citing Stanford University's AI Index to affirm India's third-place ranking in AI preparedness and second in AI talent2.
Ashwini Vaishnaw: Architect of India's Digital Ambition
Ashwini Vaishnaw, an engineer by training and an IAS officer of the 1994 batch (Odisha cadre), has risen to become a pivotal figure in India's technological transformation1. Appointed Minister of Electronics and Information Technology in 2021, alongside portfolios in Railways, Communications, and Information & Broadcasting, Vaishnaw has spearheaded initiatives like the India Semiconductor Mission and the push for sovereign AI1. His tenure has attracted major investments, including Google's $15 billion gigawatt-scale AI data centre in Visakhapatnam and partnerships with Meta on AI safety and IBM on advanced chip technology (7nm and 2nm nodes)1. At Davos 2026, he outlined India's appeal as a 'bright spot' for global investors, citing stable democracy, policy continuity, and projected 6-8% real GDP growth1. Vaishnaw's vision extends to hosting the India AI Impact Summit in New Delhi on 19-20 February 2026, showcasing a 'People-Planet-Progress' framework for AI safety and global standards1,3.
Context: India's Five-Layer Sovereign AI Stack
Vaishnaw framed the quote within India's comprehensive 'Sovereign AI Stack', a methodical strategy across five layers to achieve technological independence within a year1,2,4. This includes:
- Application Layer: Real-world deployments in agriculture, health, governance, and enterprise services, where India aims to be the world's largest supplier2,4.
- Model Layer: A 'bouquet' of domestic models with 20-50 billion parameters, sufficient for 95% of use cases, prioritising diffusion, productivity, and ROI over gigantic foundational models1,2.
- Semiconductor Layer: Indigenous design and manufacturing targeting 2nm nodes1.
- Infrastructure Layer: A national compute pool of 38,000 GPUs and gigawatt-scale data centres powered by clean energy and Small Modular Reactors (SMRs)1.
- Energy Layer: Sustainable power solutions to fuel AI growth2.
This approach counters the resource-intensive race for trillion-parameter models, focusing on widespread adoption in emerging markets like India, where efficiency drives economic returns2,5.
Leading Theorists on Small Language Models and AI Efficiency
The emphasis on smaller models aligns with pioneering research challenging the 'scale-is-all-you-need' paradigm. Andrej Karpathy, former OpenAI and Tesla AI director, has argued that compact, carefully trained models in the 1-10 billion parameter range can deliver high ROI on well-scoped tasks1,2. Noam Shazeer, a co-inventor of the Transformer architecture and a pioneer of mixture-of-experts models at Google and Character.AI, showed how sparsely activated networks can match far larger dense ones; DeepMind's Chinchilla (70 billion parameters) demonstrated that compute-optimal training outperforms sheer size, reshaping efficient scaling laws1. Tim Dettmers, creator of the bitsandbytes quantisation library, quantified how 4-bit inference on 70B-class models is possible with minimal performance loss, democratising access for resource-constrained environments2.
Further, Jared Kaplan and collaborators' 'Scaling Laws for Neural Language Models' (2020) revealed diminishing returns beyond certain sizes, bolstering the case for 20-50B models1. In industry, Meta's open Llama series (7B-70B) shows dense models approaching frontier performance, while Mistral AI's Mixtral 8x7B (roughly 47B total parameters, about 13B active per token) exemplifies mixture-of-experts (MoE) architectures that deliver near-frontier results at lower inference cost, as validated in benchmarks like MMLU2. These theorists underscore Vaishnaw's point: true power lies in diffusion and application, not model magnitude, particularly for emerging markets pursuing technology strategy5.
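Back-of-the-envelope arithmetic shows why quantisation matters for the 20-50 billion parameter models Vaishnaw describes. The sketch below (purely illustrative) estimates the memory needed just to hold model weights at different precisions:

```python
def weight_memory_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate memory (GB) to store model weights alone."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9


for bits in (16, 8, 4):
    print(f"50B model at {bits}-bit: ~{weight_memory_gb(50, bits):.0f} GB")
# 16-bit: ~100 GB, 8-bit: ~50 GB, 4-bit: ~25 GB - roughly the difference
# between a multi-GPU cluster and a single accelerator for inference.
```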
Vaishnaw's insight at Davos 2026 thus resonates globally, signalling a shift towards sustainable, ROI-focused AI that empowers nations like India to lead through strategic efficiency rather than brute scale1,2.
References
1. https://economictimes.com/news/india/ashwini-vaishnaw-at-davos-2026-5-key-takeaways-highlighting-indias-semiconductor-pitch-and-roadmap-to-ai-sovereignty-at-wef/ashwini-vaishnaw-at-davos-2026-indias-tech-ai-vision-on-global-stage/slideshow/127145496.cms
2. https://timesofindia.indiatimes.com/business/india-business/its-actually-in-the-first-ashwini-vaishnaws-strong-take-on-imf-chief-calling-india-second-tier-ai-power-heres-why/articleshow/126944177.cms
3. https://www.youtube.com/watch?v=3S04vbuukmE
4. https://www.youtube.com/watch?v=VNGmVGzr4RA
5. https://www.weforum.org/stories/2026/01/live-from-davos-2026-what-to-know-on-day-2/

"Mercantilism is an economic theory and policy from the 16th-18th centuries where governments heavily regulated trade to build national wealth and power by maximizing exports, minimizing imports, and accumulating precious metals like gold and silver." - Mercantilism
Mercantilism is an early modern economic theory and statecraft practice (c. 16th–18th centuries) in which governments heavily regulate trade and production to increase national wealth and power by maximising exports, minimising imports, and accumulating bullion (gold and silver).3,4,2
Comprehensive definition
Mercantilism is an economic doctrine and policy regime that treats wealth as finite and international trade as a zero-sum game, so that one state’s gain is understood to be another’s loss.3,6 Under this view, the purpose of economic activity is not individual welfare but the augmentation of state power, especially in competition with rival nations.3,6
Core features include:
- Bullionism and wealth accumulation
Wealth is measured primarily by a country’s stock of precious metals, especially gold and silver, often called bullion.3,1,2 If a nation lacks mines, it is expected to obtain bullion through a “favourable” balance of trade, i.e. persistent export surpluses.3,2
- Favourable balance of trade
Governments strive to ensure exports exceed imports so that foreign buyers pay the difference in bullion.3,2,4 A favourable balance of trade is engineered via:
- High tariffs and quotas on imports
- Export promotion (subsidies, privileges)
- Restrictions or bans on foreign manufactured goods2,4,5
- Strong, interventionist state
Mercantilism assumes an active government role in regulating the economy to serve national objectives.3,4,5 Typical interventions include:
- Granting monopolies and charters to favoured firms or trading companies (e.g. British East India Company)4
- Regulating wages, prices, and production
- Directing capital to strategic sectors (ships, armaments, textiles)2,5
- Enforcing navigation acts to reserve shipping for national fleets
- Colonialism and economic nationalism
Mercantilism is closely tied to the rise of nation-states and overseas empires.2,4,3 Colonies are designed to:
- Supply raw materials cheaply to the “mother country”
- Provide captive markets for its manufactured exports
- Be forbidden from developing competing manufacturing industries
All trade between colony and metropole is typically reserved as a monopoly of the mother country.3,4
- Population, labour and social discipline
A large population is considered essential to provide soldiers, sailors, workers and domestic consumers.3 Mercantilist states often:
- Promote thrift and saving as virtues
- Pass sumptuary laws limiting luxury imports, to avoid bullion outflows and keep labour disciplined3
- Favour policies that keep wages relatively low to preserve competitiveness and employment in export industries4
- Winners and losers
The system tends to privilege merchants, merchant companies and the state over consumers and small producers.4 High protection raises domestic prices and lowers variety, but increases profits and state revenues through custom duties and controlled markets.2,5
As an overarching logic, mercantilism can be summarised as “economic nationalism for the purpose of building a wealthy and powerful state”.6
Mercantilism in historical context
- Origins and dominance
Mercantilist ideas emerged as feudalism declined and nation-states formed in early modern Europe, notably in England, France, Spain, Portugal and the Dutch Republic.1,2,4 They dominated Western European economic thinking and policy from the 16th century to the late 18th century.3,6
- Practice rather than explicit theory
Proponents such as Thomas Mun (England), Jean-Baptiste Colbert (France) and Antonio Serra (Italy) did not use the word “mercantilism”.3 They wrote about trade, money and statecraft; the label “mercantile system” was popularised by Adam Smith in 1776, and “mercantilism” came into common use later.3,4,6
- Institutional expression
Mercantilist policy underpinned:
- The Navigation Acts and the rise of British sea power
- French Colbertist industrial policy (textiles, shipbuilding, arsenals)
- Spanish and Portuguese bullion-based imperial systems
- Chartered companies such as the British East India Company, which fused commerce, governance and military force under state-backed monopolies4
- Transition to capitalism and free-trade thought
Mercantilism created conditions for early capitalism by encouraging capital accumulation, long-distance trade networks and early industrial development.3 But it also prompted a sustained intellectual backlash, most famously from Adam Smith and later classical economists, who argued that:
- Wealth is not finite and can be expanded through productivity and specialisation
- Free trade and comparative advantage can benefit all countries, rather than being zero-sum2,4
Critiques and legacy
Classical and later economists criticised mercantilism for:
- Confusing money (bullion) with real wealth (productive capacity, labour, technology)2
- Undermining consumer welfare through high prices and limited choice caused by import restrictions and monopolies2,5
- Fostering rent-seeking alliances between state and merchant elites at the expense of the general public4,6
Although mercantilism is usually considered a superseded doctrine, many contemporary protectionist or “neo-mercantilist” policies—such as aggressive export promotion, managed exchange rates, and strategic trade restrictions—are often described as mercantilist in spirit.2,5
The key strategy theorist: Adam Smith and his relationship to mercantilism
The most important strategic thinker associated with mercantilism—precisely because he dismantled it and re-framed strategy—is Adam Smith (1723–1790), the Scottish moral philosopher and political economist often called the founder of modern economics.2,3,4,6
Although Smith was not a mercantilist, his work provides the definitive critique and strategic re-orientation away from mercantilism, and he is the thinker who named and systematised the concept.
Smith’s engagement with mercantilism
- In An Inquiry into the Nature and Causes of the Wealth of Nations, Smith repeatedly refers to the existing policy regime as the “mercantile system” and subjects it to a detailed historical and analytical critique.3,4,6
- He argues that:
- National wealth lies in the productive powers of labour and capital, not in the mere accumulation of gold and silver.2,6
- Free exchange and competition, not monopolies and trade restraints, are the most reliable mechanisms for increasing overall prosperity.
- International trade can be mutually beneficial, rejecting the zero-sum assumption central to mercantilism.2,4
- Smith maintains that mercantilism benefits a narrow coalition of merchants and manufacturers, who use state power—tariffs, monopolies, trading charters—to secure rents at the expense of the wider population.4,6
In strategic terms, Smith redefined economic statecraft: instead of seeking power through hoarding bullion and favouring particular firms, he proposed that long-run national strength is best served by efficient markets, specialisation and limited government interference.
- Early life and education
Adam Smith was born in Kirkcaldy, Scotland, in 1723.3 He studied at the University of Glasgow, where he encountered the Scottish Enlightenment’s emphasis on reason, moral philosophy and political economy, and later at Balliol College, Oxford.3,6
- Academic and public roles
He became Professor of Logic and later Moral Philosophy at the University of Glasgow, lecturing on ethics, jurisprudence, and political economy.6 His first major work, The Theory of Moral Sentiments, explored sympathy, virtue and the moral foundations of social order.
- European travels and observation of mercantilist systems
From 1764 to 1766, Smith travelled in France and Switzerland as tutor to the Duke of Buccleuch, meeting leading physiocrats and observing French administrative and mercantilist practices first-hand.6 These experiences sharpened his critique of existing systems and influenced his articulation of freer trade and limited government.
- The Wealth of Nations and its impact
Published in 1776, The Wealth of Nations systematically:
- Dissects mercantilist doctrines and practices across Britain and Europe
- Explains the division of labour, market coordination and the role of self-interest under appropriate institutional frameworks
- Sets out a strategic blueprint for economic policy based on “natural liberty”, moderate taxation, minimal trade barriers and competitive markets2,4,6
Smith died in 1790 in Edinburgh, but his analysis of mercantilism reshaped both economic theory and state strategy. Governments gradually moved—unevenly and often incompletely—from mercantilist controls toward liberal, market-oriented trade regimes, making Smith the key intellectual bridge between mercantilist economic nationalism and modern strategic thinking about trade, growth and state power.
References
1. https://legal-resources.uslegalforms.com/m/mercantilism
2. https://corporatefinanceinstitute.com/resources/economics/mercantilism/
3. https://www.britannica.com/money/mercantilism
4. https://www.ebsco.com/research-starters/diplomacy-and-international-relations/mercantilism
5. https://www.economicshelp.org/blog/17553/trade/mercantilism-theory-and-examples/
6. https://www.econlib.org/library/Enc/Mercantilism.html
7. https://dictionary.cambridge.org/us/dictionary/english/mercantilism

"We believe the clean technology transition is igniting a new supercycle in critical commodities, with natural resource companies emerging as winners." - J.P. Morgan - On resources
When J.P. Morgan Asset Management framed the clean technology transition in these terms, it captured a profound shift underway at the intersection of climate policy, industrial strategy and global capital allocation.1,5 The quote stands at the heart of their analysis of how decarbonisation is reshaping demand for metals, minerals and energy, and why this is likely to support elevated commodity prices for years rather than months.1
The immediate context is the rapid acceleration of the energy transition. Governments have committed to net zero pathways, corporates face growing regulatory and investor pressure to decarbonise, and consumers are adopting electric vehicles and clean technologies at scale. J.P. Morgan argues that this is not merely an environmental story, but an economic retooling comparable in scale to previous industrial revolutions.1,4
Their research highlights two linked dynamics. First, the decarbonised economy is less fuel-intensive but far more materials-intensive. Replacing fossil fuel power with renewables requires vast quantities of copper, aluminium, nickel, lithium, cobalt, manganese and graphite to build solar and wind farms, grids and storage systems.1 Second, the speed of this transition matters as much as its direction. Even under conservative scenarios, J.P. Morgan estimates substantial increases in demand for critical minerals by 2030; under more ambitious net zero pathways, demand could rise by around 110% over that period, on top of the 50% increase already seen in the previous decade.1
In this framing, natural resource companies - particularly miners and producers of critical minerals - shift from being perceived purely as part of the old carbon-heavy economy to being central enablers of clean technologies. J.P. Morgan points out that while fossil fuel demand will decline over time, the scale of required investment in metals and minerals, as well as transmission infrastructure, effectively re-ranks many resource businesses as strategic assets for the low-carbon future.1 Valuations that once reflected cyclical, late-stage industries may therefore underestimate the structural demand embedded in net zero commitments.
The quote also reflects J.P. Morgan's broader thinking on commodity and energy supercycles. Their research on energy markets describes a supercycle as a sustained period of elevated prices driven by structural forces that can last for a decade or more.3,4 In previous eras, those forces included post-war reconstruction and the rise of China as the world's industrial powerhouse. Today, they see the combination of chronic underinvestment in supply, intensifying climate policy, and rising demand for both traditional and clean energy as setting the stage for a new, complex supercycle.2,3,4
Within the firm, analysts have argued that higher-for-longer interest rates raise the cost of debt and equity for energy producers, reinforcing supply discipline and pushing up the marginal cost of production.3 At the same time, the rapid build-out of renewables is constrained by supply chain, infrastructure and key materials bottlenecks, meaning that legacy fuels still play a significant role even as capital increasingly flows towards clean technologies.3 This dual dynamic - structural demand for critical minerals on the one hand and a constrained, more disciplined fossil fuel sector on the other - underpins the conviction that a supercycle is forming across parts of the commodity complex.
The idea of commodity supercycles predates the current climate transition and has been shaped by several generations of theorists and empirical researchers. In the mid-20th century, economists such as Raúl Prebisch and Hans Singer first highlighted the long-term terms-of-trade challenges faced by commodity exporters, noting that prices for primary products tended to fall relative to manufactured goods over time. Their work prompted an early focus on structural forces in commodity markets, although it emphasised long-run decline rather than extended booms.
Later, analysts began to examine multi-decade patterns of rising and falling prices. Structural models of commodity prices show that at major stages of economic development - such as the agricultural and industrial revolutions - commodity intensity tends to increase markedly, creating conditions for supercycles.4 These models distinguish between business cycles of a few years, investment cycles spanning roughly a decade, and longer supercycle components that can extend beyond 20 years.4 The supercycle lens gained prominence as researchers studied the commodity surge associated with China's breakneck urbanisation and industrialisation from the late 1990s to the late 2000s.
That China-driven episode became the archetype of a modern commodity supercycle: a powerful, sustained demand shock focused on energy, metals and bulk materials, amplified by long supply lead times and capital expenditure cycles. J.P. Morgan and other institutions have documented how this supercycle drove a 12-year uptrend in prices, culminating before the global financial crisis, followed by a comparably long down-cycle as supply eventually caught up and Chinese growth shifted to a less resource-intensive model.2,4
Academic and market theorists have since refined the concept. They argue that supercycles emerge when three elements coincide. First, there must be a structural, synchronised increase in demand, often tied to a global development episode or technological shift. Second, supply in key commodities must be constrained by geology, capital discipline, regulation or long project lead times. Third, macro-financial conditions - including real interest rates, inflation expectations and currency trends - must align to support investment flows into real assets. The question for today's transition is whether decarbonisation meets these criteria.
On the demand side, the clean tech revolution clearly resembles previous development stages in its resource intensity. J.P. Morgan notes that electric vehicles require significantly more minerals than internal combustion engine cars - roughly six times as much in aggregate when accounting for lithium, nickel, cobalt, manganese and graphite.1 Similarly, building solar and wind capacity, and the vast grid infrastructure to connect them, calls for much more copper and aluminium per unit of capacity than conventional power systems.1 The International Energy Agency's projections, which J.P. Morgan draws on, indicate that even under modest policy assumptions, renewable electricity capacity is set to increase by around 50% by 2030, with more ambitious net zero scenarios implying far steeper growth.1
Supply, however, has been shaped by a decade of caution. After the last supercycle ended, many mining and energy companies cut back capital expenditure, streamlined balance sheets and prioritised shareholder returns. Regulatory processes for new mines lengthened, environmental permitting became more stringent, and social expectations around land use and community impacts increased. The result is that bringing new supplies of copper, nickel or lithium online can take many years and substantial capital, creating a lag between price signals and physical supply.
Theorists of the investment cycle - often identified with work on 8 to 20-year intermediate commodity cycles - argue that such periods of underinvestment sow the seeds for the next up-cycle.4 When demand resurges due to a structural driver, constrained supply leads to persistent price pressures until investment, technology and substitution can rebalance the market. In the case of the energy transition, the requirement for large amounts of specific minerals, combined with concentrated supply in a small number of countries, intensifies this effect and introduces geopolitical considerations.
Another important strand of thought concerns the evolution of energy systems themselves. Analysts focusing on energy supercycles emphasise that transitions historically unfold over multiple decades and rarely proceed smoothly.3,4 Even as clean energy capacity expands rapidly, global energy demand continues to grow, and existing systems must meet rising consumption while new infrastructure is built. J.P. Morgan's energy research describes this as a multi-decade process of "generating and distributing the joules" required to both satisfy demand and progressively decarbonise.3 During this period, traditional energy sources often remain critical, creating complex price dynamics across oil, gas, coal and renewables-linked commodities.
Within this broader theoretical frame, the clean technology transition can be seen as a distinctive supercycle candidate. Unlike the China wave, which centred on industrialisation and urbanisation within one country, the net zero agenda is globally coordinated and policy-driven. It spans power generation, transport, buildings, industry and agriculture, and requires both new physical assets and digital infrastructure. Structural models referenced by J.P. Morgan note that such system-wide investment programmes have historically been associated with sustained periods of elevated commodity intensity.4
At the same time, there is active debate among economists and market strategists about the durability and breadth of any new supercycle. Some caution that efficiency gains, recycling and substitution could cap demand growth in certain minerals over time. Others point to innovation in battery chemistries, alternative materials and manufacturing methods that may reduce reliance on some critical inputs. Still others argue that policy uncertainty and potential fragmentation in global trade could disrupt smooth investment and demand trajectories. Theorists of supercycles emphasise that these are not immutable laws but emergent patterns that can be shaped by technology, politics and finance.
J.P. Morgan's perspective in the quoted insight acknowledges these uncertainties while underscoring the asymmetry in the coming decade. Even in conservative scenarios, their work suggests that demand for critical minerals rises substantially relative to recent history.1 Under more ambitious climate policies, the increase is far greater, and tightness in markets such as copper, nickel, cobalt and lithium appears likely, especially towards the end of the 2020s.1 Against this backdrop, natural resource companies with high-quality assets, disciplined capital allocation and credible sustainability strategies are positioned not as relics of the past, but as essential partners in delivering the energy transition.
This reframing has important implications for investors and corporates alike. For investors, it suggests that the traditional division between "old" resource-heavy industries and "new" clean tech sectors is too simplistic. The hardware of decarbonisation - from EV batteries and charging networks to grid-scale storage, wind turbines and solar farms - depends on a complex upstream ecosystem of miners, processors and materials specialists. For corporates, it highlights the strategic premium on securing access to critical inputs, managing long-term supply contracts, and integrating sustainability into resource development.
The quote from J.P. Morgan thus sits at the confluence of three intellectual streams: long-run theories of commodity supercycles, modern analysis of energy transition dynamics, and evolving views of how natural resource businesses fit into a low-carbon world. It encapsulates the idea that the path to net zero is not dematerialised; instead, it is anchored in physical assets, industrial capabilities and supply chains that must be financed, built and operated over many years. For those able to navigate this terrain - and for the theorists tracing its contours - the clean technology transition is not only an environmental imperative but also a defining economic narrative of the coming decades.
References
1. https://am.jpmorgan.com/hk/en/asset-management/adv/insights/market-insights/market-bulletins/clean-energy-investment/
2. https://www.foxbusiness.com/markets/biden-climate-change-fight-commodities-supercycle
3. https://www.jpmorgan.com/insights/global-research/commodities/energy-supercycle
4. https://www.jpmcc-gcard.com/digest-uploads/2021-summer/Page%2074_79%20GCARD%20Summer%202021%20Jerrett%20042021.pdf
5. https://am.jpmorgan.com/us/en/asset-management/institutional/card-list-libraries/sustainable-insights-climate-tab-us/
6. https://www.jpmorgan.com/insights/global-research/outlook/market-outlook
7. https://www.bscapitalmarkets.com/hungry-for-commodities-ndash-is-a-new-commodity-super-cycle-here.html

|
| |
| |
"Moltbot (formerly Clawdbot), a personal AI assistant, has gone viral within weeks of its launch, drawing thousands of users willing to tackle the technical setup required, even though it started as a scrappy personal project built by one developer for his own use." - Moltbot (formerly Clawdbot)
Moltbot (formerly Clawdbot) is an open-source, self-hosted personal AI assistant that runs continuously on your own hardware (for example a Mac mini, Raspberry Pi, old laptop, or low-cost cloud server) and connects to everyday messaging channels such as WhatsApp, Telegram, iMessage, or similar chat apps so that you can talk to it as if it were a human teammate rather than a traditional app.
Instead of living purely in the cloud like many mainstream assistants, it is designed as “an AI that actually does things”: it can execute real commands on your machine, including managing your calendar and email, browsing the web, organizing local files, and running terminal commands or scripts under your control.
At its core, Moltbot is an agentic system: you choose and configure the underlying large language model (Anthropic Claude, OpenAI models, or local models), and Moltbot wraps that model with tools and permissions so that the AI can observe state on your computer, decide on a sequence of actions, and iteratively move from a current state toward a desired state, much closer to a junior digital employee than a simple chatbot.
This agentic design makes it valuable for complex, multi-step workflows such as triaging inbound email, preparing briefings from documents and web sources, or orchestrating routine maintenance tasks, with the human defining objectives and guardrails while the assistant executes within those constraints. The project emphasizes a privacy-first, owner-controlled architecture: your prompts, files, and system access stay on the machine you control, with only model calls leaving the device when you opt to use a remote API, a proposition that has resonated strongly with developers and power users wary of funneling sensitive workstreams through opaque cloud ecosystems.
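To make the agentic pattern concrete, here is a minimal illustrative sketch in Python of the observe-decide-act loop described above. Every name in it (ALLOWED_TOOLS, model_step, agent_loop) is a hypothetical stand-in invented for this sketch, not Moltbot's actual API; in a real deployment the stubbed model_step would call the configured LLM, and the tool list would cover calendars, mail, browsers and shell commands.

# Illustrative observe-decide-act loop for an agentic assistant. All names
# here are hypothetical stand-ins for this sketch, not Moltbot's actual API.
import os

ALLOWED_TOOLS = {
    # Guardrail: the model may only invoke tools on this allow-list.
    "list_files": lambda arg: "\n".join(os.listdir(arg)),
    "read_file": lambda arg: open(arg, encoding="utf-8").read(),
}

def model_step(goal, history):
    # Stub standing in for a call to the configured LLM (Claude, OpenAI, or a
    # local model): here it observes the directory once, then declares success.
    if not history:
        return {"tool": "list_files", "arg": "."}
    return {"done": f"goal '{goal}' addressed; last observation: {history[-1][:80]}"}

def agent_loop(goal, max_steps=10):
    history = []
    for _ in range(max_steps):  # guardrail: bounded number of iterations
        decision = model_step(goal, history)
        if "done" in decision:
            return decision["done"]
        tool = decision["tool"]
        if tool not in ALLOWED_TOOLS:  # refuse anything not explicitly permitted
            history.append(f"refused: {tool} is not allow-listed")
            continue
        result = ALLOWED_TOOLS[tool](decision["arg"])
        history.append(f"{tool} -> {result[:500]}")  # observe new state, iterate
    return "stopped: step budget exhausted"

print(agent_loop("summarise this directory"))

The essential design choice is visible even in this toy version: the model only proposes actions, while the harness enforces the owner's guardrails (an allow-list and a step budget) before anything touches the system.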
Moltbot’s origin story reinforces this positioning: it began in late 2025 as a scrappy personal project by Austrian engineer Peter Steinberger, best known for founding PSPDFKit (later rebranded Nutrient), a PDF and document-processing SDK that grew into infrastructure used by hundreds of millions of end users before being acquired by Insight Partners.
After exiting PSPDFKit and stepping away from day-to-day coding, Steinberger described a period of creative exhaustion, only to be pulled back into building when the momentum around modern AI—and especially Anthropic’s Claude models—convinced him he could turn “Claude Code into his computer,” effectively treating an AI coding environment and agent as the primary interface to his machine.
The first iteration of his assistant, Clawdbot (with its mascot character “Clawd,” a playful space lobster inspired by the name Claude), was built astonishingly quickly—early prototypes reportedly took around an hour—and shared as a personal tool that showed how an AI, wired into real system capabilities, could meaningfully reduce friction in managing a digital life.
Once Steinberger released the project publicly, traction was explosive: the repository rapidly attracted tens of thousands of GitHub stars (with some reports noting 50,000–60,000 stars within weeks), a fast-growing contributor base, and an active community Discord, as developers experimented with running Moltbot as a 24/7 “full-time AI employee” on cheap hardware.
Media coverage highlighted its distinctive blend of autonomy and practicality—“Claude with hands” rather than just a conversational agent—and its appeal to technically sophisticated users willing to accept a more involved setup process in exchange for real, system-level leverage over their workflows.
A trademark dispute over the similarity between “Clawd” and Anthropic’s “Claude” forced a rebrand to Moltbot in early 2026, but the underlying architecture, community, and “lobster soul” of the project remained intact, underscoring that the real innovation lies in the pattern of a self-hosted, action-oriented personal AI rather than in the specific name.
From a strategic perspective, Moltbot represents an emergent archetype: the personal AI infrastructure or “personal operating system” where an individual deploys a modular, agentic system on their own stack, integrates it tightly with their tools, and iteratively composes new capabilities over time.
This pattern shifts AI from being a generic productivity overlay to becoming part of the user’s core execution engine: instead of repeatedly solving the same problem, owners encapsulate solutions into reusable modules or “skills” that their assistant can call, turning one-off hacks into compounding leverage across research, coding, administration, and communication workflows.
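A small, purely illustrative sketch shows what such a reusable "skill" might look like in practice; the registry and decorator here are invented for exposition and do not reflect Moltbot's real extension mechanism.

# Illustrative "skills" registry (names invented for this sketch, not
# Moltbot's real extension API): one-off solutions are wrapped as named,
# reusable modules that an assistant can discover and call.
SKILLS = {}

def skill(name, description):
    """Decorator that registers a function as a reusable, named skill."""
    def register(fn):
        SKILLS[name] = {"description": description, "run": fn}
        return fn
    return register

@skill("daily_briefing", "Summarise unread mail and today's calendar")
def daily_briefing(context):
    unread = context.get("unread_mail", [])
    events = context.get("events", [])
    return f"{len(unread)} unread messages; {len(events)} events today."

# The owner (or the agent itself) invokes skills by name with fresh context:
print(SKILLS["daily_briefing"]["run"]({"unread_mail": ["a", "b"], "events": ["standup"]}))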
In practice, this means that Moltbot is less a single product than a reference architecture for what it looks like when an individual or small team runs a persistent, deeply customized AI agent alongside them as a standing capability, blurring the line between software tool, co-worker, and infrastructure.
Strategy theorist: Daniel Miessler and the personal AI infrastructure thesis
Among contemporary strategic thinkers, Daniel Miessler offers one of the most closely aligned conceptual frameworks for understanding what Moltbot represents, through his work on “Personal AI Infrastructure (PAI)” and modular, agentic systems such as his own AI stack named “Kai.”
Miessler approaches AI not as a single application but as an evolving strategic platform: he describes PAI as an architecture built around a simple yet powerful iterative algorithm (current state → desired state via verifiable iteration), implemented through a constellation of agents, tools, and skills that together execute work on the owner's behalf.
In his model, effective personal AI systems follow a clear hierarchy (goal → code → command-line tools → prompts → agents) so that automation is applied where it creates lasting leverage rather than superficial convenience, a philosophy that mirrors the way Moltbot encourages users first to define what they want done, then wire the assistant into concrete system actions.
Miessler’s backstory helps explain why his thinking is so relevant to Moltbot’s emergence. He is a long-time security and technology practitioner and the author of a widely read blog and podcast focused on the intersection of infosec, technology, and human behavior, where he has chronicled the gradual shift from isolated tools toward integrated, self-improving AI ecosystems.
Over the past several years he has documented building Kai as a unified agentic system to augment his own research and content creation, distilling a set of design principles: treat skills as modular units of domain expertise, maintain a custom history system that captures everything the system learns, and design both permanent specialist agents and dynamic agents that can be composed on demand for specific tasks.
These principles closely parallel what power users now attempt with Moltbot: they create persistent agents for recurring roles (research, coding, operations), attach them to specific tools and datasets, and then spin up temporary, task-specific flows as new problems arise, all running on personal or small-team infrastructure rather than within a vendor’s closed-box SaaS product.
The relationship between Miessler’s strategic ideas and Moltbot is best understood as conceptual rather than personal: Moltbot independently operationalizes many of the architectural patterns Miessler describes, turning the “personal AI infrastructure” thesis into a widely accessible, open-source implementation.
Both center on the same strategic shift: from AI as an occasional assistant that helps draft text, to AI as a continuously running, modular execution layer that acts across a user’s entire digital environment under explicit human objectives and constraints. In this sense, Miessler functions as a strategy theorist of the personal AI era, articulating the logic of agentic, owner-controlled systems, while Moltbot provides a vivid, viral case study of those ideas in practice—demonstrating how a single, well-designed personal AI stack can evolve from a private experiment into a community-driven platform that meaningfully changes how individuals and small firms execute work.
References
1. https://techcrunch.com/2026/01/27/everything-you-need-to-know-about-viral-personal-ai-assistant-clawdbot-now-moltbot/
2. https://metana.io/blog/what-is-moltbot-everything-you-need-to-know-in-2026/
3. https://dev.to/sivarampg/clawdbot-the-ai-assistant-thats-breaking-the-internet-1a47
4. https://www.macstories.net/stories/clawdbot-showed-me-what-the-future-of-personal-ai-assistants-looks-like/
5. https://www.youtube.com/watch?v=U8kXfk8en

|
| |
| |
"My main message here is the following: this is a tsunami hitting the labour market, and even in the best-prepared countries, I don't think we are prepared enough." - Kristalina Georgieva - Managing Director, IMF
Kristalina Georgieva's invocation of a "tsunami" represents far more than rhetorical flourish. Speaking at the World Economic Forum in Davos, the Managing Director of the International Monetary Fund articulated a diagnosis grounded in rigorous empirical analysis: artificial intelligence is not a speculative future threat but an immediate force already reshaping employment across every economy on earth. The metaphor itself carries profound significance-a tsunami denotes not merely disruption but overwhelming force, simultaneity, and inevitability. Critically, Georgieva's acknowledgement that even "best-prepared countries" remain inadequately equipped reveals the unprecedented scale and speed of this transformation.
The Scope of AI's Labour Market Impact
The International Monetary Fund's assessment provides quantifiable dimensions to this disruption. Georgieva's research indicates that 40 per cent of jobs globally will be impacted by artificial intelligence, with each affected role falling into one of three categories: enhancement (where AI augments human capability), elimination (where automation replaces human labour), or transformation (where roles are fundamentally altered). In advanced economies, this figure rises to 60 per cent-a staggering proportion that underscores the concentration of AI disruption in wealthy nations with greater technological infrastructure.
The distinction between jobs "touched" by AI and jobs eliminated proves crucial to understanding Georgieva's analysis. Enhancement and transformation may appear preferable to outright elimination, yet they still demand worker adjustment, skill development, and potentially geographic mobility. A job that is transformed but offers no wage improvement-as Georgieva has noted-may be economically worse for the worker even if technically retained. This nuance separates her analysis from both techno-optimist narratives and catastrophic predictions.
Perhaps most concerning is the asymmetric impact across age cohorts and development levels. Georgieva has specifically warned that AI is "like a tsunami hitting the labour market" for younger people entering the workforce. Entry-level positions-historically the gateway through which workers develop skills, build experience, and establish career trajectories-are precisely those most vulnerable to automation. This threatens to disrupt the intergenerational transmission of economic opportunity that has underpinned social mobility for decades.
Theoretical Foundations: The Labour Economics Lineage
Georgieva's analysis draws on decades of rigorous labour economics scholarship examining technological displacement and labour market adjustment. The intellectual lineage traces to David Autor, a leading MIT economist whose research has fundamentally shaped contemporary understanding of how technological change reshapes employment. Autor's seminal work demonstrates that whilst technology eliminates routine tasks-particularly routine cognitive work-it simultaneously creates demand for new skills and complementary labour. However, this adjustment is neither automatic nor painless; workers displaced from routine cognitive tasks often face years of unemployment or underemployment before transitioning to new roles, if they transition at all.
Autor's research, conducted over more than two decades, reveals a critical pattern: technological disruption creates a "hollowing out" of middle-skill employment. Routine cognitive tasks-data entry, basic accounting, straightforward analysis-have been progressively automated, whilst demand has polarised toward high-skill, high-wage positions and low-skill, low-wage service roles. This pattern, documented extensively in his work on computerisation and wage inequality, provides the empirical foundation for understanding why Georgieva emphasises that AI's impact cannot be left to market forces alone.
Building on Autor's framework, contemporary labour economists have extended analysis to examine the speed and scale of technological transition. The consensus among leading researchers-including Daron Acemoglu of MIT, who has written extensively on the relationship between technology and inequality-is that rapid technological change without deliberate policy intervention tends to exacerbate inequality rather than distribute gains broadly. Acemoglu's work emphasises that technology is not destiny; rather, the distributional outcomes of technological change depend fundamentally on institutional choices, regulatory frameworks, and investment in human capital.
Claudia Goldin, the 2023 Nobel Prize winner in Economics, has contributed essential research on the relationship between education, skills, and labour market outcomes across generations. Her historical analysis demonstrates that periods of rapid technological change have previously required corresponding investments in education and skills development. The gap between technological capability and educational preparedness has historically determined whether technological transitions benefit broad populations or concentrate gains among a narrow elite. Georgieva's warning about inadequate preparedness echoes Goldin's historical findings: without deliberate educational investment, technological transitions produce inequality.
The Productivity Paradox and Global Growth
Georgieva's analysis situates AI within a broader economic context of disappointing productivity growth. Global growth has remained underwhelming in recent years, with productivity growth stagnant except in the United States. This stagnation represents a fundamental economic problem: without productivity growth, living standards stagnate, and governments face fiscal pressures as tax revenues fail to grow with economic output.
AI represents, in Georgieva's assessment, the most potent force for reversing this trend. The IMF calculates that AI could boost global growth by between 0.1 and 0.8 per cent annually-a seemingly modest range that carries enormous consequences. A 0.8 per cent productivity gain would restore growth to pre-pandemic levels, fundamentally altering global economic trajectories. Yet this upside scenario depends entirely on successful labour market adjustment and equitable distribution of AI's benefits. If AI generates productivity gains that concentrate wealth whilst displacing workers without adequate transition support, the aggregate growth figures mask profound distributional consequences.
This productivity question connects directly to Georgieva's warning about preparedness. The IMF's research indicates that one in ten jobs in advanced economies already requires substantially new skills-a figure that will accelerate as AI deployment expands. Yet educational and training systems globally remain poorly aligned with AI-era skill demands. Northern European countries-particularly Finland, Sweden, and Denmark-have historically invested in continuous skills development and educational flexibility, positioning their workforces better for technological transition. Most other nations, by contrast, maintain educational systems designed for industrial-era employment patterns, where workers acquired specific skills early in their careers and applied them throughout working lives.
The Global Inequality Dimension
Perhaps the most consequential aspect of Georgieva's analysis concerns the "accordion of opportunities"-her term for the diverging economic trajectories between advanced and developing economies. The 60 per cent figure for advanced economies versus 20-26 per cent for low-income countries reflects not merely different levels of AI adoption but fundamentally different economic capacities and institutional frameworks.
Advanced economies possess the infrastructure, capital, and institutional capacity to invest in AI whilst simultaneously managing labour market transition. They have educational systems capable of rapid adaptation, financial resources to fund reskilling programmes, and social safety nets to cushion displacement. Low-income countries risk being left behind-neither benefiting from AI's productivity gains nor receiving the investment in skills and social protection that might cushion displacement. This dynamic threatens to widen the global inequality gap that has been a persistent feature of economic development since the industrial revolution.
Georgieva's concern reflects research by economists including Branko Milanovic, who has documented how technological change interacts with global inequality. Milanovic's work demonstrates that technological transitions have historically benefited capital owners and high-skill workers whilst displacing lower-skill workers. Without deliberate policy intervention-progressive taxation, investment in education, social protection-technological change tends to increase inequality both within and between nations.
The Skills Gap and Educational Mismatch
Georgieva's analysis reveals a critical finding: some countries have more demand for new skills than supply, whilst others have more supply than demand. This mismatch is not random; it reflects decades of educational investment decisions. Northern European countries, which have invested continuously in education and skills development, face less severe skills gaps. Emerging market and developing economies, which have often prioritised other investments, face more significant misalignment between labour supply and employer demand.
The nature of required skills further complicates adjustment. Approximately half of new skills demanded are information technology related-programming, data analysis, AI system management. The remaining skills span management, specific professional qualifications, and crucially, what Georgieva terms "learning how to learn." This last category proves essential because, as she emphasises, policymakers cannot assume they know what jobs of tomorrow will be. Rather than teaching particular knowledge, educational systems must cultivate adaptability and continuous learning capacity.
This pedagogical insight reflects research by Erik Brynjolfsson and Andrew McAfee, economists whose joint work at MIT extensively examined the relationship between technological change and employment. Their research emphasises that in periods of rapid technological change, the ability to learn new skills matters more than possession of specific technical knowledge. Workers who can adapt, learn new tools, and transfer skills across domains fare better than those with deep expertise in narrow domains vulnerable to automation.
The Entry-Level Jobs Crisis
Georgieva's specific warning about entry-level positions deserves particular attention. AI tends to eliminate entry-level functions-the positions through which younger workers historically entered labour markets, developed experience, and progressed to more senior roles. This threatens to disrupt a fundamental mechanism of economic mobility and skills development.
The concern extends beyond immediate employment. Entry-level positions serve crucial functions beyond income generation: they provide work experience, develop professional networks, teach workplace norms and expectations, and signal to employers that workers possess basic competence. When AI eliminates these positions, younger workers face not merely reduced job availability but disrupted pathways to career development. A 25-year-old unable to secure entry-level experience faces substantially different career prospects than one who progresses through conventional career ladders.
Yet Georgieva's data also offers grounds for cautious optimism. Her research indicates that a 1 per cent increase in new skills leads to a 1.3 per cent increase in overall employment. This suggests that skill development creates positive spillovers-workers with new skills generate demand for complementary services and lower-skilled labour, expanding employment opportunities across the economy. The fear that AI will shrink total employment, whilst understandable, is not yet supported by empirical evidence. Rather, the challenge is reshaping employment-ensuring that displaced workers can transition to new roles and that new opportunities emerge in sufficient quantity and geographic proximity to displaced workers.
Geopolitical and Strategic Dimensions
Georgieva's warning arrives amid broader economic fragmentation. Trade tensions, geopolitical competition, and the shift from a rules-based global economic order toward competing blocs create additional uncertainty. AI development is increasingly intertwined with strategic competition between major powers, particularly between the United States and China. This geopolitical dimension means that AI's labour market impact cannot be separated from questions of technological sovereignty, supply chain resilience, and economic security.
The strategic competition over AI development creates perverse incentives. Nations may prioritise rapid AI deployment to maintain competitive advantage, even when labour market adjustment remains incomplete. This dynamic could accelerate job displacement without corresponding investment in worker transition support, exacerbating the preparedness gap Georgieva identifies.
Policy Imperatives and the Preparedness Challenge
Georgieva's analysis suggests several imperatives for policymakers. First, labour market adjustment cannot be left to market forces alone; deliberate investment in education, training, and social protection is essential. Second, the distribution of AI's benefits matters as much as aggregate productivity gains; without attention to equity, AI could deepen inequality within and between nations. Third, regulation and ethical frameworks must be established proactively rather than reactively, shaping AI development toward socially beneficial outcomes.
The preparedness challenge Georgieva emphasises reflects a fundamental asymmetry: AI development proceeds at technological pace, whilst educational systems, labour market institutions, and policy frameworks change at institutional pace. Educational systems require years to redesign curricula, train teachers, and produce graduates with new skills. Labour market institutions-unemployment insurance systems, pension arrangements, occupational licensing frameworks-were designed for industrial-era employment patterns and adapt slowly to new realities. Policy frameworks require legislative action, which moves even more slowly.
This temporal mismatch between technological change and institutional adaptation explains why even well-prepared countries remain inadequately equipped. Finland, Sweden, and Denmark-the countries Georgieva identifies as best positioned-have invested continuously in education and skills development, yet even these nations acknowledge that current preparedness remains insufficient for the scale and speed of AI-driven change.
The Broader Economic Context
Georgieva's warning must be understood within the context of her broader economic outlook. The IMF has upgraded global growth projections to 3.3 per cent for 2026 and 3.2 per cent for 2027, yet these figures fall short of pre-pandemic historical averages of 3.8 per cent. The primary constraint on growth is productivity-the output generated per unit of labour and capital. Without productivity growth, economies cannot generate sufficient income growth to fund public services, support ageing populations, or improve living standards.
AI represents the most significant potential source of productivity growth available to policymakers. Yet realising this potential requires not merely deploying AI technology but managing the labour market transition it necessitates. Georgieva's warning that even best-prepared countries remain inadequately equipped reflects recognition that the challenge is not technological but institutional and political-whether societies can muster the will to invest in worker transition, education, and social protection whilst simultaneously deploying transformative technology.
The stakes could hardly be higher. Successful management of AI's labour market impact could restore productivity growth, accelerate global development, and improve living standards broadly. Failure to manage this transition adequately could concentrate AI's benefits among capital owners and high-skill workers whilst displacing millions of workers without adequate transition support, deepening inequality and potentially destabilising societies. Georgieva's metaphor of a tsunami captures this duality: the same force that could lift all boats could also devastate those unprepared for its arrival.
References
1. https://globaladvisors.biz/2026/01/23/quote-kristalina-georgieva-managing-director-imf/
2. https://www.weforum.org/podcasts/meet-the-leader/episodes/ai-skills-global-economy-imf-kristalina-georgieva/
3. https://fortune.com/2026/01/23/imf-chief-warns-ai-tsunami-entry-level-jobs-gen-z-middle-class/
4. https://timesofindia.indiatimes.com/education/careers/news/ai-is-hitting-entry-level-jobs-like-a-tsunami-imf-chief-kristalina-georgieva-urges-students-to-prepare-for-change/articleshow/127381917.cms

|
| |
| |
"The Black-Scholes model (or Black-Scholes-Merton model) is a fundamental mathematical formula that calculates the theoretical fair price of European-style options, using inputs like the underlying stock price, strike price, time to expiration, risk-free interest rate and volatility." - Black Scholes
Black-Scholes Model (Black-Scholes-Merton Model)
The Black-Scholes model, also known as the Black-Scholes-Merton model, is a pioneering mathematical framework for pricing European-style options, which can only be exercised at expiration. It derives a theoretical fair value for call and put options by solving a parabolic partial differential equation—the Black-Scholes equation—under risk-neutral valuation, replacing the asset's expected return with the risk-free rate to eliminate arbitrage opportunities.1,2,5
The model prices a European call option ( C ) as:
C = S_0 N(d_1) - K e^{-rT} N(d_2)
where:
- ( S_0 ): current price of the underlying asset (e.g., stock).3,7
- ( K ): strike price.5,7
- ( T ): time to expiration (in years).5,7
- ( r ): risk-free interest rate (constant).3,7
- ( \sigma ): volatility of the underlying asset's returns (annualised).2,7
- ( N(\cdot) ): cumulative distribution function of the standard normal distribution.
- d_1 = \frac{\ln(S_0 / K) + (r + \sigma^2 / 2)T}{\sigma \sqrt{T}}
- d_2 = d_1 - \sigma \sqrt{T}.1,2,5
A symmetric formula prices the put: P = K e^{-rT} N(-d_2) - S_0 N(-d_1). The model assumes a log-normal distribution of stock prices, meaning continuously compounded returns are normally distributed:
\ln S_T \sim N\left( \ln S_0 + \left( \mu - \frac{\sigma^2}{2} \right)T, \ \sigma^2 T \right)
where ( \mu ) is the expected return (replaced by ( r ) in risk-neutral pricing).2
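As a check on the formulas, they translate directly into a few lines of code. The sketch below uses only Python's standard library, building the normal CDF from the error function; the function names are ours, chosen for illustration.

# Minimal Black-Scholes-Merton pricer for European options (no dividends),
# implementing the call and put formulas above with the standard library only.
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF N(x), expressed via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bsm_price(S0, K, T, r, sigma, kind="call"):
    d1 = (log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    if kind == "call":
        return S0 * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
    return K * exp(-r * T) * norm_cdf(-d2) - S0 * norm_cdf(-d1)

# Example: an at-the-money one-year call with r = 5% and sigma = 20%
# prices at about 10.45.
print(round(bsm_price(100, 100, 1.0, 0.05, 0.20, "call"), 4))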
Key Assumptions
The model rests on idealised conditions for mathematical tractability:
- Efficient markets with no arbitrage and continuous trading.1,3
- Log-normal asset returns (prices cannot go negative).2,3
- Constant risk-free rate ( r ) and volatility ( \sigma ).3
- No dividends (original version; later adjusted by replacing ( S_0 ) with ( S_0 e^{-qT} ) for continuous dividend yield ( q ), or subtracting the present value of discrete dividends).2,3
- No transaction costs, taxes, or short-selling restrictions; frictionless trading with a risky asset (stock) and riskless asset (bond).1,3
- European exercise only (no early exercise).1,5
These enable delta hedging: dynamically adjusting a portfolio of the underlying asset and riskless bond to replicate the option's payoff, making its price unique.1
Extensions and Limitations
- Dividends: Adjust ( S_0 ) to ( S_0 - PV(\text{dividends}) ) for discrete payouts, or use a continuous yield ( q ).2
- American options: Use Black's approximation, which values an American call on a dividend-paying stock as the maximum of the European price to maturity and the European price expiring just before the final ex-dividend date.2
- Greeks: Sensitivities such as delta ( \Delta = N(d_1) ), vega (volatility sensitivity), and others used for risk management; see the sketch below.4
Limitations include real-world violations of its assumptions (e.g., volatility smiles, price jumps, stochastic rates), but the model remains foundational for derivatives trading, valuation (e.g., 409A valuations for startups), and extensions such as binomial models.3,5,7
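As an illustrative sketch of two of these extensions, the snippet below combines the continuous-dividend-yield adjustment (the ( S_0 e^{-qT} ) substitution noted above) with the standard closed forms for a call's delta and vega; the function names are ours.

# Delta and vega of a European call under the BSM model with a continuous
# dividend yield q (set q = 0 to recover the original no-dividend case).
from math import log, sqrt, exp, erf, pi

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def norm_pdf(x):
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)

def call_delta_vega(S0, K, T, r, sigma, q=0.0):
    # The dividend yield enters d1 through the S0 -> S0 * e^{-qT} substitution.
    d1 = (log(S0 / K) + (r - q + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    delta = exp(-q * T) * norm_cdf(d1)                # sensitivity to spot price
    vega = S0 * exp(-q * T) * norm_pdf(d1) * sqrt(T)  # sensitivity to volatility
    return delta, vega

# For the at-the-money example above: delta ~ 0.6368, vega ~ 37.52.
print(call_delta_vega(100, 100, 1.0, 0.05, 0.20))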
Myron Scholes (b. 1941) is the most directly linked theorist, co-creator of the model and Nobel laureate whose work revolutionised options trading and risk management strategies.
Biography
Born in Timmins, Ontario, Canada, Scholes earned a BA from McMaster University (1962), followed by an MBA (1964) and a PhD (1969) from the University of Chicago, studying under Nobel winners like Merton Miller. He taught at MIT (1968–1972, collaborating with Fischer Black and Robert Merton), returned to Chicago (1973–1983), and then moved to Stanford. In 1994, he became a founding principal of Long-Term Capital Management (LTCM), a hedge fund using advanced models (including Black-Scholes variants) for fixed-income arbitrage, which amassed $4.7 billion in capital before collapsing in 1998 under high leverage and the Russian debt crisis, prompting a $3.6 billion bailout organised by the Federal Reserve and funded by a consortium of banks. Scholes received the 1997 Nobel Prize in Economics (shared with Merton; Black, having died in 1995, was ineligible), cementing his legacy. He later co-founded Platinum Grove Asset Management and supports education philanthropically.1
Relationship to the Term
Scholes co-authored the seminal 1973 paper "The Pricing of Options and Corporate Liabilities" with Fischer Black (1938–1995), an economist at Arthur D. Little and later Goldman Sachs, who conceived the core hedging insight but died before the Nobel was awarded. Robert C. Merton (b. 1944), whose own 1973 paper formalised the continuous-time mathematics and extended the model to dividends and American options, earned co-credit, which is why the model is often called Black-Scholes-Merton. Their breakthrough, published amid nascent options markets (the CBOE opened in 1973), enabled risk-neutral pricing and dynamic hedging, transforming derivatives from speculative instruments into hedgeable ones. Scholes' strategic insight was that, under no-arbitrage, an option's price depends on the asset's volatility rather than its expected return, powering strategies like volatility trading, portfolio insurance, and structured products at banks and hedge funds. LTCM exemplified (and exposed the limits of) scaling these strategies via leverage.1,2,5
References
1. https://en.wikipedia.org/wiki/Black%E2%80%93Scholes_model
2. https://analystprep.com/study-notes/frm/part-1/valuation-and-risk-management/the-black-scholes-merton-model/
3. https://carta.com/learn/startups/equity-management/black-scholes-model/
4. https://www.columbia.edu/~mh2078/FoundationsFE/BlackScholes.pdf
5. https://www.sofi.com/learn/content/what-is-the-black-scholes-model/
6. https://gregorygundersen.com/blog/2024/09/28/black-scholes/
7. https://corporatefinanceinstitute.com/resources/derivatives/black-scholes-merton-model/
8. https://www.youtube.com/watch?v=EEM2YBzH-2U
9. https://www.khanacademy.org/economics-finance-domain/core-finance/derivative-securities/black-scholes/v/introduction-to-the-black-scholes-formula

|
| |
|