
News and Tools

Breaking Business News

 

Our selection of the top business news sources on the web.

Quote: Andrew Ng – AI guru, Coursera founder

“I think one of the challenges is, because AI technology is still evolving rapidly, the skills that are going to be needed in the future are not yet clear today. It depends on lifelong learning.” – Andrew Ng – AI guru, Coursera founder

Delivered during a session on Corporate Ladders, AI Reshuffled at the World Economic Forum in Davos in January 2026, this insight from Andrew Ng captures the essence of navigating an era where artificial intelligence advances at breakneck speed. Ng’s words underscore a pivotal shift: as AI reshapes jobs and workflows, the uncertainty of future skills demands a commitment to continuous adaptation1,2.

Andrew Ng: The Architect of Modern AI Education

Andrew Ng stands as one of the foremost figures in artificial intelligence, often dubbed an AI guru for his pioneering contributions to machine learning and online education. A British-born computer scientist, Ng co-founded Coursera in 2012, revolutionising access to higher education by partnering with top universities to offer massive open online courses (MOOCs). His platforms, including DeepLearning.AI and Landing AI, have democratised AI skills, training millions worldwide2,3.

Ng’s career trajectory is marked by landmark roles: he led the Google Brain project, which advanced deep learning at scale, and served as chief scientist at Baidu, applying AI to real-world applications in search and autonomous driving. As managing general partner at AI Fund, he invests in startups bridging AI with practical domains. At Davos 2026, Ng addressed fears of AI-driven job losses, arguing they are overstated. He broke jobs into tasks, noting AI handles only 30-40% currently, boosting productivity for those who adapt: ‘A person that uses AI will be so much more productive, they will replace someone that doesn’t use AI’2,3. His emphasis on coding as a ‘durable skill’-not for becoming engineers, but for building personalised software to automate workflows-aligns directly with the quoted challenge of unclear future skills1.

The Broader Context: AI’s Impact on Jobs and Skills at Davos 2026

The quote emerged amid Davos discussions on agentic AI systems (autonomous agents managing end-to-end workflows) that push humans towards oversight, judgement, and accountability. Ng highlighted meta-cognitive agility: shifting from perishable technical skills to ‘learning to learn’1. This resonates with global concerns; the IMF’s Kristalina Georgieva noted that one in ten jobs in advanced economies already needs new skills, with labour markets unprepared1. Ng urged upskilling, especially for regions like India, warning that its IT services sector risks disruption without rapid AI literacy3,5.

Corporate strategies are evolving: the T-shaped model promotes AI literacy across functions (breadth) paired with irreplaceable domain expertise (depth). Firms rebuild talent ladders, replacing grunt work with AI-supported apprenticeships fostering early decision-making1. Ng’s optimism tempers hype; AI improves incrementally, not in dramatic leaps, yet demands proactive reskilling3.

Leading Theorists Shaping AI, Skills, and Lifelong Learning

Ng’s views build on foundational theorists in AI and labour economics:

  • Geoffrey Hinton, Yann LeCun, and Yoshua Bengio (the ‘Godfathers of AI’): Pioneered deep learning, enabling today’s breakthroughs. Hinton, Ng’s early collaborator at Google Brain, warns of AI risks but affirms its transformative potential for productivity2. Their work underpins Ng’s task-based job analysis.
  • Erik Brynjolfsson and Andrew McAfee (MIT): In ‘The Second Machine Age’, they theorise how digital technologies complement human skills, amplifying ‘non-routine’ cognitive tasks. This mirrors Ng’s productivity shift, where AI augments rather than replaces1,2.
  • Carl Benedikt Frey and Michael Osborne (Oxford): Their 2013 study quantified automation risks for 702 occupations, sparking debates on reskilling. Ng extends this by focusing on partial automation (30-40%) and lifelong learning imperatives2.
  • Daron Acemoglu (MIT): Critiques automation’s wage-polarising effects and warns against ‘so-so technologies’ that automate tasks without delivering broad productivity gains. Ng counters with optimism for human-AI collaboration via upskilling3.

These theorists converge on a consensus: AI disrupts routines but elevates human judgement, creativity, and adaptability-skills honed through lifelong learning, as Ng advocates.

Ng’s prescience positions this quote as a clarion call for individuals and organisations to embrace uncertainty through perpetual growth in an AI-driven world.

References

1. https://globaladvisors.biz/2026/01/23/the-ai-signal-from-the-world-economic-forum-2026-at-davos/

2. https://www.storyboard18.com/brand-makers/davos-2026-andrew-ng-says-fears-of-ai-driven-job-losses-are-exaggerated-87874.htm

3. https://www.moneycontrol.com/news/business/davos-summit/davos-2026-ai-is-continuously-improving-despite-perception-that-excitement-has-faded-says-andrew-ng-13780763.html

4. https://www.aicerts.ai/news/andrew-ng-open-source-ai-india-call-resonates-at-davos/

5. https://economictimes.com/tech/artificial-intelligence/india-must-speed-up-ai-upskilling-coursera-cofounder-andrew-ng/articleshow/126703083.cms

"I think one of the challenges is, because AI technology is still evolving rapidly, the skills that are going to be needed in the future are not yet clear today. It depends on lifelong learning." - Quote: Andrew Ng - AI guru. Coursera founder

read more
Term: Steelman argument

“A steelman argument is a dialectical technique where you restate an opponent’s position in its strongest, most charitable, and most convincing form, even better than they presented it, before you offer your counterargument, aiming to understand the truth and engage.” – Steelman argument

The purpose is not to score rhetorical points, but to understand the underlying truth of the issue, test your own beliefs, and engage respectfully and productively with those who disagree.

In a steelman argument, a participant in a discussion:

  • Listens carefully to the other side’s position, reasons, evidence, and concerns.
  • Reconstructs that position as logically, factually, and rhetorically strong as possible, eliminating obvious errors, clarifying ambiguities, and adding reasonable supporting considerations.
  • Checks this reconstruction with the opponent to ensure it is both accurate and recognisable – ideally something they would endorse or even prefer to their original wording.
  • Only then advances their own critique, counterarguments, or alternative proposals, addressing this improved version rather than a weaker one.

This makes steelmanning the conceptual opposite of the straw man fallacy, where a position is caricatured or simplified to make it easier to attack. Where a straw man trades on distortion to make disagreement easier, a steelman trades on fairness and intellectual generosity to make understanding deeper.

Core principles of steelmanning

Four principles underpin effective steelman arguments:

  • Charity – You interpret your counterpart’s words in the most reasonable light, attributing to them the most coherent and defensible version of their view, rather than assuming confusion, bad faith, or ignorance.
  • Accuracy – You preserve the core commitments, values, and intended meaning of their position; you do not quietly change what is at stake, even while you improve its structure and support.
  • Strengthening – You explicitly look for the best reasons, analogies, and evidence that could support their view, including arguments they have not yet articulated but would plausibly accept.
  • Verification – You invite your interlocutor to confirm or refine your restatement, aiming for the moment when they can honestly say, “Yes, that is what I mean – and that is an even better version of my view than I initially gave.”

Steelman vs. straw man vs. related techniques

  • Steelman argument – What it does: strengthens and clarifies the opposing view before critiquing it. Typical intention: seek truth, understand deeply, and persuade through fairness.
  • Straw man fallacy – What it does: misrepresents or oversimplifies a view to make it easier to refute. Typical intention: win a debate, create rhetorical advantage, or avoid hard questions.
  • Devil’s advocate – What it does: adopts a contrary position (not necessarily sincerely held) to expose weaknesses or overlooked risks. Typical intention: stress-test prevailing assumptions, foster critical thinking.
  • Thought experiment / counterfactual – What it does: explores hypothetical scenarios to test principles or intuitions. Typical intention: clarify implications, reveal hidden assumptions, probe edge cases.

Steelman arguments often incorporate elements of counterfactuals and thought experiments. For example, to strengthen a policy criticism, you might ask: “Suppose this policy were applied in a more extreme case – would the same concerns still hold?” You then build the best version of the concern across such scenarios before responding.

Why steelmanning matters in strategy and decision-making

In strategic analysis, investing, policy design, and complex organisational decisions, steelman arguments help to:

  • Reduce confirmation bias by forcing you to internalise the strongest objections to your preferred view.
  • Improve risk management by properly articulating downside scenarios and adverse stakeholder perspectives before discarding them.
  • Enhance credibility with boards, clients, and teams, who see that arguments have been tested against serious, not superficial, opposition.
  • Strengthen strategy by making sure that chosen options have survived comparison with the most powerful alternatives, not just weakly framed ones.

When used rigorously, the steelman discipline often turns a confrontational debate into a form of collaborative problem-solving, where each side helps the other refine their views and the final outcome is more robust than either starting position.

Practical steps to construct a steelman argument

A practical steelmanning process in a meeting, negotiation, or analytical setting might look like this:

  1. Elicit and clarify
    Ask the other party to explain their view fully. Use probing but neutral questions: “What is the central concern?”, “What outcomes are you trying to avoid?”, “What evidence most strongly supports your view?”
  2. Map and organise
    Identify their main claims, supporting reasons, implicit assumptions, and key examples. Group these into a coherent structure, ranking the arguments from strongest to weakest.
  3. Strengthen
    Add reasonable premises they may have missed, improve their examples, and fill gaps with the best available data or analogies that genuinely support their position.
  4. Restate back
    Present your reconstructed version, starting with a phrase such as, “Let me try to state your view as strongly as I can.” Invite correction until they endorse it.
  5. Engage and test
    Only once agreement on the steelman is reached do you introduce counterarguments, alternative hypotheses, or different scenarios – always addressing the strong version rather than retreating to weaker caricatures.

Best related strategy theorist: John Stuart Mill

Although the term “steelman” is modern, the deepest intellectual justification for the practice in strategy, policy, and public reasoning comes from the nineteenth-century philosopher and political economist John Stuart Mill. His work provides a powerful conceptual foundation for steelmanning, especially in high-stakes decision contexts.

Mill’s connection to steelmanning

Mill argued that you cannot truly know your own position unless you also understand, in its most persuasive form, the best arguments for the opposing side. He insisted that anyone who only hears or articulates one side of a case holds their opinion as a “prejudice” rather than a reasoned view. In modern terms, he is effectively demanding that responsible thinkers and decision-makers steelman their opponents before settling on a conclusion.

In his work on liberty, representative government, and political economy, Mill repeatedly:

  • Reconstructed opposing positions in detail, often giving them more systematic support than their own advocates had provided.
  • Explored counterfactual scenarios and hypotheticals to see where each argument would succeed or fail.
  • Treated thoughtful critics as partners in the search for truth rather than as enemies to be defeated.

This method aligns closely with the steelman ethos in modern strategy work: before committing to a policy, investment, or organisational move, you owe it to yourself and your stakeholders to understand the most credible case against your intended path – not a caricature of it.

Biography and intellectual context

John Stuart Mill (1806 – 1873) was an English philosopher, economist, and civil servant, widely regarded as one of the most influential thinkers in the liberal tradition. Educated intensively from a very young age by his father, James Mill, under the influence of Jeremy Bentham, he mastered classical languages, logic, and political economy in his childhood, but suffered a mental crisis in his early twenties that led him to broaden his outlook beyond strict utilitarianism.

Mill’s major works include:

  • System of Logic, where he analysed how we form and test hypotheses, including the role of competing explanations.
  • On Liberty, which defended freedom of thought, speech, and experimentation in ways that presuppose an active culture of hearing and strengthening opposing views.
  • Principles of Political Economy, a major text that carefully considers economic arguments from multiple sides before reaching policy conclusions.

As a senior official in the East India Company and later a Member of Parliament, Mill moved between theory and practice, applying his analytical methods to real-world questions of governance, representation, and reform. His insistence that truth and sound policy emerge only from confronting the strongest counter-arguments is a direct ancestor of the modern steelman method in strategic reasoning, board-level debate, and public policy design.

Mill’s legacy for modern strategic steelmanning

For contemporary strategists, investors, and leaders, Mill’s legacy can be summarised as a disciplined demand: before acting, ensure that you could state the best good-faith case against your intention more clearly and powerfully than its own advocates. Only then is your subsequent decision genuinely informed rather than insulated by bias.

In this way, John Stuart Mill stands as the key historical theorist behind the steelman argument – not for coining the term, but for articulating the intellectual and ethical duty to engage with opponents at their strongest, in pursuit of truth and resilient strategy.

References

1. https://aliabdaal.com/newsletter/the-steelman-argument/

2. https://themindcollection.com/steelmanning-how-to-discover-the-truth-by-helping-your-opponent/

3. https://ratiochristi.org/the-anatomy-of-persuasion-the-steel-man/

4. https://www.youtube.com/watch?v=veeGKTzbYjc

5. https://simplicable.com/en/steel-man

6. https://umbrex.com/resources/tools-for-thinking/what-is-steelmanning/

"A steelman argument is a dialectical technique where you restate an opponent's position in its strongest, most charitable, and most convincing form, even better than they presented it, before you offer your counterargument, aiming to understand the truth and engage." - Term: Steelman argument

read more
Quote: Professor Hannah Fry – University of Cambridge

“Humans are not very good at exponentials. And right now, at this moment, we are standing right on the bend of the curve. AGI is not a distant thought experiment anymore.” – Professor Hannah Fry – University of Cambridge

The quote comes at the end of a wide-ranging conversation between applied mathematician and broadcaster Professor Hannah Fry and DeepMind co-founder Shane Legg, recorded for the “Google DeepMind, the podcast” series in late 2025. Fry is reflecting on Legg’s decades-long insistence that artificial general intelligence would arrive much sooner than most experts expected, and on his argument that its impact will be structurally comparable to the Industrial Revolution: a technology that reshapes work, wealth, and the basic organisation of society rather than just adding another digital tool. Her remark that “humans are not very good at exponentials” is a pointed reminder of how easily people misread compounding processes, from pandemics to technological progress, and therefore underestimate how quickly “next decade” scenarios can become “this quarter” realities.

Context of the quote

Fry’s line follows a discussion in which Legg lays out a stepwise picture of AI progress: from today’s uneven but impressive systems, through “minimal AGI” that can reliably perform the full range of ordinary human cognitive tasks, to “full AGI” capable of the most exceptional creative and scientific feats, and then on to artificial superintelligence that eclipses human capability in most domains. Throughout, Legg stresses that current models already exceed humans in language coverage, encyclopaedic knowledge and some kinds of problem solving, while still failing at basic visual reasoning, continual learning, and robust commonsense. The trajectory he sketches is not a gentle slope but a sharpening curve, driven by scaling laws, data, architectures and hardware; Fry’s “bend of the curve” image captures the moment when such a curve stops looking linear to human intuition and starts to feel suddenly, uncomfortably steep.

That curve is not just about raw capability but about diffusion into the economy. Legg argues that over the next few years, AI will move from being a helpful assistant to doing a growing share of economically valuable work—starting with software engineering and other high-paid cognitive roles that can be done entirely through a laptop. He anticipates that tasks once requiring a hundred engineers might soon be done by a small team amplified by advanced AI tools, with similarly uneven but profound effects across law, finance, research, and other knowledge professions. By the time Fry delivers her closing reflection, the conversation has moved from technical definitions to questions of social contract: how to design a post-AGI economy, how to distribute the gains from machine intelligence, and how to manage the transition period in which disruption and opportunity coexist.

Hannah Fry: person and perspective

Hannah Fry is a professor in the mathematics of cities who has built a public career explaining complex systems—epidemics, finance, urban dynamics and now AI—to broad audiences. Her training in applied mathematics and complexity science has made her acutely aware of how exponential processes play out in the real world, from contagion curves during COVID-19 to the compounding effect of small percentage gains in algorithmic performance and hardware efficiency. She has repeatedly highlighted the cognitive bias that leads people to underreact when growth is slow and overreact when it becomes visibly explosive, a theme she explicitly connects in this podcast to the early days of the pandemic, when warnings about exponential infection growth were largely ignored while life carried on as normal.

In the AGI conversation, Fry positions herself as an interpreter between technical insiders and a lay audience that is already experiencing AI in everyday tools but may not yet grasp the systemic implications. Her remark that the general public may, in some sense, “get it” better than domain specialists echoes Legg’s observation that non-experts sometimes see current systems as already effectively “intelligent,” while many professionals in affected fields downplay the relevance of AI to their own work. When she says “AGI is not a distant thought experiment anymore,” she is distilling Legg’s timelines—his long-standing 50/50 prediction of minimal AGI by 2028, followed by full AGI within a decade—into a single, accessible warning that the window for slow institutional adaptation is closing.

Meaning of “not very good at exponentials”

The specific phrase “humans are not very good at exponentials” draws on a familiar insight from behavioural economics and cognitive psychology: people routinely misjudge exponential growth, treating it as if it were linear. During the COVID-19 pandemic, this manifested in the gap between early warnings about exponential case growth and the public’s continued attendance at large events right up until visible crisis hit, an analogy Fry explicitly invokes in the episode. In technology, the same bias leads organisations to plan as if next year will look like this year plus a small increment, even when underlying drivers—compute, algorithmic innovation, investment, data availability—are compounding at rates that double capabilities over very short horizons.
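To see how sharply intuition and compounding diverge, here is a small illustrative calculation; the 40 per cent annual growth rate, the starting index, and the horizons are arbitrary assumptions for the sketch, not figures from the podcast.

```python
# Illustrative only: contrast a linear "this year plus a bit" forecast with
# compound growth. All constants are arbitrary assumptions for the sketch.

def linear_projection(start: float, step: float, periods: int) -> float:
    """Naive forecast: add the same fixed increment each period."""
    return start + step * periods

def compound_projection(start: float, rate: float, periods: int) -> float:
    """Compounding forecast: multiply by (1 + rate) each period."""
    return start * (1 + rate) ** periods

if __name__ == "__main__":
    start = 100.0  # arbitrary capability index today
    for years in (1, 3, 5, 10):
        lin = linear_projection(start, step=10.0, periods=years)
        exp = compound_projection(start, rate=0.40, periods=years)
        print(f"{years:>2} yrs  linear={lin:7.1f}  compound={exp:9.1f}  ratio={exp / lin:.2f}x")
```

Over one year the two forecasts barely differ; over ten they diverge by a large multiple, which is exactly the gap Fry describes between intuition and the bend of an exponential curve.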

Fry’s “bend of the curve” language marks the moment when incremental improvements accumulate until qualitative change becomes hard to ignore: AI systems not only answering questions but autonomously writing production code, conducting literature reviews, proposing experiments, or acting as agents in the world. At that bend, the lag between capability and governance becomes a central concern; Legg emphasises that there will not be enough time for leisurely consensus-building once AGI is fully realised, hence his call for every academic discipline and sector—law, education, medicine, city planning, economics—to begin serious scenario work now. Fry’s closing comment translates that call into a general admonition: exponential technologies demand anticipatory thinking, not reactive crisis management.

Leading theorists behind the ideas

The intellectual backdrop to Fry’s quote and Legg’s perspectives on AGI blends several strands of work in AI theory, safety and the study of technological revolutions.

  • Shane Legg and Ben Goertzel helped revive and popularise the term “artificial general intelligence” in the early 2000s to distinguish systems aimed at broad, human-like cognitive competence from “narrow AI” optimised for specific tasks. Legg’s own academic work, influenced by his supervisor Marcus Hutter, explores formal definitions of universal intelligence and the conditions under which machine systems could match or exceed human problem-solving across many domains.

  • I. J. Good introduced the “intelligence explosion” hypothesis in 1965, arguing that a sufficiently advanced machine intelligence capable of improving its own design could trigger a runaway feedback loop of ever-greater capability. This notion of recursive self-improvement underpins much of the contemporary discourse about AI timelines and the risks associated with crossing particular capability thresholds.

  • Eliezer Yudkowsky developed thought experiments and early arguments about AGI’s existential risks, emphasising that misaligned superintelligence could be catastrophically dangerous even if human developers never intended harm. His writing helped seed the modern AI safety movement and influenced researchers and entrepreneurs who later entered mainstream organisations.

  • Nick Bostrom synthesised and formalised many of these ideas in “Superintelligence: Paths, Dangers, Strategies,” providing widely cited scenarios in which AGI rapidly transitions into systems whose goals and optimisation power outstrip human control. Bostrom’s work is central to Legg’s concern with how to steer AGI safely once it surpasses human intelligence, especially around questions of alignment, control and long-term societal impact.

  • Geoffrey Hinton, Stuart Russell and other AI pioneers have added their own warnings in recent years: Hinton has drawn parallels between AI and other technologies whose potential harms were recognised only after wide deployment, while Russell has argued for a re-founding of AI as the science of beneficial machines explicitly designed to be uncertain about human preferences. Their perspectives reinforce Legg’s view that questions of ethics, interpretability and “System 2 safety”—ensuring that advanced systems can reason transparently about moral trade-offs—are not peripheral but central to responsible AGI development.

Together, these theorists frame AGI as both a continuation of a long scientific project to build thinking machines and as a discontinuity in human history whose effects will compound faster than our default intuitions allow. In that context, Fry’s quote reads less as a rhetorical flourish and more as a condensed thesis: exponential dynamics in intelligence technologies are colliding with human cognitive biases and institutional inertia, and the moment to treat AGI as a practical, near-term design problem rather than a speculative future is now.

References

1. https://eeg.cl.cam.ac.uk

2. https://en.wikipedia.org/wiki/Shane_Legg

3. https://www.youtube.com/watch?v=kMUdrUP-QCs

4. https://www.ibm.com/think/topics/artificial-general-intelligence

5. https://kingy.ai/blog/exploring-the-concept-of-artificial-general-intelligence-agi/

6. https://jetpress.org/v25.2/goertzel.pdf

7. https://www.dce.va/content/dam/dce/resources/en/digital-cultures/Encountering-AI—Ethical-and-Anthropological-Investigations.pdf

8. https://arxiv.org/pdf/1707.08476.pdf

9. https://hermathsstory.eu/author/admin/page/7/

10. https://www.shunryugarvey.com/wp-content/uploads/2021/03/YISR_I_46_1-2_TEXT_P-1.pdf

11. https://dash.harvard.edu/bitstream/handle/1/37368915/Nina%20Begus%20Dissertation%20DAC.pdf?sequence=1&isAllowed=y

12. https://www.facebook.com/groups/lifeboatfoundation/posts/10162407288283455/

13. https://globaldashboard.org/economics-and-development/

14. https://www.forbes.com/sites/gilpress/2024/03/29/artificial-general-intelligence-or-agi-a-very-short-history/

15. https://ebe.uct.ac.za/sites/default/files/content_migration/ebe_uct_ac_za/169/files/WEB%2520UCT%2520CHEM%2520D023%2520Centenary%2520Design.pdf

 

"Humans are not very good at exponentials. And right now, at this moment, we are standing right on the bend of the curve. AGI is not a distant thought experiment anymore." - Quote: Professor Hannah Fry

read more
Quote: Andrew Ng – AI guru, Coursera founder

“There’s one skill that is already emerging… it’s time to get everyone to learn to code…. not just the software engineers, but the marketers, HR professionals, financial analysts, and so on – the ones that know how to code are much more productive than the ones that don’t, and that gap is growing.” – Andrew Ng – AI guru, Coursera founder

In a forward-looking discussion at the World Economic Forum’s 2026 session on ‘Corporate Ladders, AI Reshuffled’, Andrew Ng passionately advocates for coding as the pivotal skill defining productivity in the AI era. Delivered in January 2026, this insight underscores how AI tools are democratising coding, enabling professionals beyond software engineering to harness technology for greater efficiency1. Ng’s message aligns with his longstanding mission to make advanced technology accessible through education and practical application.

Who is Andrew Ng?

Andrew Ng stands as one of the foremost figures in artificial intelligence, renowned for bridging academia, industry, and education. A British-born computer scientist, he earned his PhD from the University of California, Berkeley, and has held prestigious roles including adjunct professor at Stanford University. Ng co-founded Coursera in 2012, revolutionising online learning by offering courses to millions worldwide, including his seminal ‘Machine Learning’ course that has educated over 4 million learners. He led Google Brain, Google’s deep learning research project, from 2011 to 2014, pioneering applications that advanced AI capabilities across industries. Currently, as founder of Landing AI and DeepLearning.AI, Ng focuses on enterprise AI solutions and accessible education platforms. His influence extends to executive positions at Baidu and as a venture capitalist investing in AI startups1,2.

Context of the Quote

The quote emerges from Ng’s reflections on AI’s transformative impact on workflows, particularly at the WEF 2026 event addressing how AI reshuffles corporate structures. Here, Ng highlights ‘vibe coding’-AI-assisted coding that lowers barriers, allowing non-engineers like marketers, HR professionals, and financial analysts to prototype ideas rapidly without traditional hand-coding. He argues this boosts productivity and creativity, warning that the divide between coders and non-coders will widen. Recent talks, such as at Snowflake’s Build conference, reinforce this: ‘The bar to coding is now lower than it ever has been. People that code… will really get more done’1. Ng critiques academia for lagging behind, noting unemployment among computer science graduates due to outdated curricula ignoring AI tools, and stresses industry demand for AI-savvy talent1,2.

Leading Theorists and the Broader Field

Ng’s advocacy builds on foundational AI theories while addressing practical upskilling. Pioneers like Geoffrey Hinton, often called the ‘Godfather of Deep Learning’, laid groundwork through backpropagation and neural networks, influencing Ng’s Google Brain work. Hinton warns of AI’s job displacement risks but endorses human-AI collaboration. Yann LeCun, Meta’s Chief AI Scientist, complements this with convolutional neural networks essential for computer vision, emphasising open-source AI for broad adoption. Fei-Fei Li, ‘Godmother of AI’, advanced image recognition and co-directs Stanford’s Human-Centered AI Institute, aligning with Ng’s educational focus.

In skills discourse, World Economic Forum’s Future of Jobs Report 2025 projects technological skills, led by AI and big data, as fastest-growing in importance through 2030, alongside lifelong learning3. Microsoft CEO Satya Nadella echoes: ‘AI won’t replace developers, but developers who use AI will replace those who don’t’3. Nvidia’s Jensen Huang and Klarna’s Sebastian Siemiatkowski advocate AI agents and tools like Cursor, predicting hybrid human-AI teams1. Ng’s tips-take AI courses, build systems hands-on, read papers-address a talent crunch where 51% of tech leaders struggle to find AI skills2.

Implications for Careers and Workflows

  • AI-Assisted Coding: Tools like GitHub Copilot, Cursor, and Replit enable ‘agentic development’, delegating routine tasks to AI while humans focus on creativity1,3.
  • Universal Upskilling: Ng urges structured learning via platforms like Coursera, followed by hands-on practice, since theory alone is insufficient, much like studying aeroplanes without ever flying2.
  • Industry Shifts: Companies like Visa and DoorDash now require AI code generator experience; polyglot programming (Python, Rust) and prompt engineering rise1,3.
  • Warnings: Despite optimism, experts like Stuart Russell caution AI could disrupt 80% of jobs, underscoring adaptive skills2.

Ng’s vision positions coding not as a technical niche but a universal lever for productivity in an AI-driven world, urging immediate action to close the growing gap.

References

1. https://timesofindia.indiatimes.com/technology/tech-news/google-brain-founder-andrew-ng-on-why-it-is-still-important-to-learn-coding/articleshow/125247598.cms

2. https://www.finalroundai.com/blog/andrew-ng-ai-tips-2026

3. https://content.techgig.com/career-advice/top-10-developer-skills-to-learn-in-2026/articleshow/125129604.cms

4. https://www.coursera.org/in/articles/ai-skills

5. https://www.idnfinancials.com/news/58779/ai-expert-andrew-ng-programmers-are-still-needed-in-a-different-way

"There's one skill that is already emerging... it's time to get everyone to learn to code.... not just the software engineers, but the marketers, HR professionals, financial analysts, and so on - the ones that know how to code are much more productive than the ones that don't, and that gap is growing." - Quote: Andrew Ng - AI guru, Coursera founder

read more
Term: Counterfactual

“A counterfactual is a hypothetical scenario or statement that considers what would have happened if a specific event or condition had been different from what actually occurred. In simple terms, it is a ‘what if’ or ‘if only’ thought process that contradicts the established facts.” – Counterfactual

A counterfactual is a hypothetical scenario or statement that imagines what would have happened if a specific event, condition, or action had differed from what actually occurred. It represents a ‘what if’ or ‘if only’ thought process that directly contradicts established facts, enabling exploration of alternative possibilities for past or future events.

Counterfactual thinking involves mentally simulating outcomes contrary to reality, such as ‘If I had not taken that sip of hot coffee, I would not have burned my tongue.’ This cognitive process is common in reflection on mistakes, regrets, or opportunities, like pondering ‘If only I had caught that flight, my career might have advanced differently.’1,2,3

Key Characteristics and Types

  • Additive vs. Subtractive: Additive counterfactuals imagine adding an action (e.g., ‘If I had swerved, the accident would have been avoided’), while subtractive ones remove one (e.g., ‘If the child had not cried, I would have focused on the road’).3
  • Upward vs. Downward: Upward focuses on better alternatives, often leading to regret; downward considers worse ones, fostering relief.3
  • Mutable vs. Immutable: People tend to mutate exceptional or controllable events in their imaginings.1

Applications Across Disciplines

In causal inference, counterfactuals estimate effects by comparing observed outcomes to hypothetical ones, such as ‘What would the yield be if a different treatment was applied to this plot?’ They underpin concepts like potential outcomes in statistics.4,7
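As a minimal sketch of the potential-outcomes idea described above, the snippet below uses invented plot yields and assumes, unrealistically, that both potential outcomes are known for every plot; in practice the counterfactual column is missing and must be estimated.

```python
# Toy potential-outcomes illustration with invented numbers. In real data only
# one of the two outcomes per unit is observed; the other is the counterfactual.

plots = [
    # (treated?, observed_yield, yield_under_the_other_treatment)
    (True,  7.2, 6.1),
    (False, 5.8, 6.9),
    (True,  8.0, 6.5),
    (False, 6.0, 7.4),
]

effects = []
for treated, observed, other in plots:
    y1 = observed if treated else other   # yield under treatment, Y(1)
    y0 = other if treated else observed   # yield under control, Y(0)
    effects.append(y1 - y0)               # unit-level causal effect

ate = sum(effects) / len(effects)         # average treatment effect
print(f"Illustrative average treatment effect: {ate:.2f}")
```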

In philosophy and logic, counterfactuals are analysed as conditionals where the antecedent is false, symbolised as A □→ C (if A were the case, C would be), contrasting with material implications.6

In machine learning, counterfactual explanations clarify model decisions, e.g., ‘If feature X changed to value x, the prediction would shift.’2
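Such an explanation can be sketched as a small search over inputs. The example below assumes a hypothetical scoring model, with invented feature names, weights, and threshold, and simply nudges one feature until the decision flips.

```python
# Toy counterfactual explanation: find how much one feature would need to change
# for a hypothetical model's decision to flip. Model and numbers are invented.

def approve(features: dict) -> bool:
    """Hypothetical credit model: weighted score against a fixed threshold."""
    score = 2.0 * features["income"] - 1.5 * features["debt"] + 0.5 * features["tenure"]
    return score >= 10.0

applicant = {"income": 4.0, "debt": 1.0, "tenure": 2.0}
print("Original decision:", approve(applicant))       # declined with these numbers

candidate = dict(applicant)
while not approve(candidate):                          # nudge income upwards
    candidate["income"] += 0.1

print(f"Counterfactual: if income were {candidate['income']:.1f} "
      f"instead of {applicant['income']:.1f}, the prediction would flip.")
```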

Everyday examples include regretting a missed job (‘If I had not been late, I would have that promotion’) or entrepreneurial reflection (‘If we chose a different partner, the startup might have succeeded’).3

Leading Theorist: Judea Pearl

The most influential modern theorist linking counterfactuals to strategy is Judea Pearl, a pioneering computer scientist and philosopher whose causal inference framework revolutionised how counterfactuals inform decision-making, policy analysis, and strategic planning.

Biography: Born in 1936 in Tel Aviv, Pearl emigrated to the US in 1960 after studying electrical engineering at the Technion in Israel. He earned a master’s degree from Rutgers University and a PhD from the Polytechnic Institute of Brooklyn in 1965, and later joined UCLA, where he is now a professor emeritus. Initially focused on AI and probabilistic reasoning, Pearl developed Bayesian networks in the 1980s, earning the Turing Award in 2011 for advancing AI through probability and causality.

Relationship to Counterfactuals: Pearl’s seminal work, Probabilistic Reasoning in Intelligent Systems (1988) and Causality (2000), formalised counterfactuals using structural causal models (SCMs). He defined the counterfactual query ‘Y would be y had X been x’ via do-interventions and potential outcomes, e.g., Y_x(u) = y denotes the value Y takes under intervention do(X=x) in unit u’s background context.4 This ‘ladder of causation’-from association to intervention to counterfactuals-enables strategic ‘what if’ analysis, such as evaluating policy impacts or business decisions by computing missing data: ‘Given observed E=e, what is expected Y if X differed?’4

Pearl’s framework aids strategists in risk assessment, A/B testing, and scenario planning, distinguishing correlation from causation. His do-calculus provides computable algorithms for counterfactuals, making them practical tools beyond mere speculation.4,7
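Pearl’s three-step recipe for evaluating a counterfactual (abduction, action, prediction) can be shown with a toy structural causal model. The equation Y = 2X + U below is an invented example for illustration, not one taken from Pearl’s texts.

```python
# Toy structural causal model: Y = 2*X + U, where U is the unit's background context.
# We compute the counterfactual Y_x(u): what Y would have been had X been 2,
# for a unit observed with X = 1 and Y = 5.

def f_y(x: float, u: float) -> float:
    """Structural equation for Y given cause X and background factor U."""
    return 2.0 * x + u

x_obs, y_obs = 1.0, 5.0

# 1. Abduction: infer the background factor U consistent with the observation.
u = y_obs - 2.0 * x_obs       # U = 3

# 2. Action: intervene, do(X = 2), overriding whatever originally determined X.
x_cf = 2.0

# 3. Prediction: evaluate Y under the intervention while holding U fixed.
y_cf = f_y(x_cf, u)

print(f"Observed Y = {y_obs}; counterfactual Y under do(X={x_cf}) = {y_cf}")
```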

References

1. https://conceptually.org/concepts/counterfactual-thinking

2. https://christophm.github.io/interpretable-ml-book/counterfactual.html

3. https://helpfulprofessor.com/counterfactual-thinking-examples/

4. https://bayes.cs.ucla.edu/PRIMER/primer-ch4.pdf

5. https://www.merriam-webster.com/dictionary/counterfactual

6. https://plato.stanford.edu/entries/counterfactuals/

7. https://causalwizard.app/inference/article/counterfactual

"A counterfactual is a hypothetical scenario or statement that considers what would have happened if a specific event or condition had been different from what actually occurred. In simple terms, it is a 'what if' or 'if only' thought process that contradicts the established facts." - Term: Counterfactual

read more
Quote: Wingate, et al – MIT SMR

“It is tempting for a company to believe that it will somehow benefit from AI while others will not, but history teaches a different lesson: Every serious technical advance ultimately becomes equally accessible to every company.” – Wingate, et al – MIT SMR

The Quote in Context

David Wingate, Barclay L. Burns, and Jay B. Barney’s assertion that companies cannot sustain competitive advantage through AI alone represents a fundamental challenge to prevailing business orthodoxy. Their observation-that every serious technical advance ultimately becomes equally accessible-draws from decades of technology adoption patterns and competitive strategy theory. This insight, published in the MIT Sloan Management Review in 2025, cuts through the hype surrounding artificial intelligence to expose a harder truth: technological parity, not technological superiority, is the inevitable destination.

The Authors and Their Framework

David Wingate, Barclay L. Burns, and Jay B. Barney

The three researchers who authored this influential piece bring complementary expertise to the question of sustainable competitive advantage. Their collaboration represents a convergence of strategic management theory and practical business analysis. By applying classical frameworks of competitive advantage to the contemporary AI landscape, they demonstrate that the fundamental principles governing technology adoption have not changed, even as the technology itself has become more sophisticated and transformative.

Their central thesis rests on a deceptively simple observation: artificial intelligence, like the internet, semiconductors, and electricity before it, possesses a critical characteristic that distinguishes it from sources of lasting competitive advantage. Because AI is fundamentally digital, it is inherently copyable, scalable, repeatable, predictable, and uniform. This digital nature means that any advantage derived from AI adoption will inevitably diffuse across the competitive landscape.

The Three Tests of Sustainable Advantage

Wingate, Burns, and Barney employ a rigorous analytical framework derived from resource-based theory in strategic management. They argue that for any technology to confer sustainable competitive advantage, it must satisfy three criteria simultaneously:

  • Valuable: The technology must create genuine economic value for the organisation
  • Unique: The technology must be unavailable to competitors
  • Inimitable: Competitors must be unable to replicate the advantage

Whilst AI unquestionably satisfies the first criterion-it is undeniably valuable-it fails the latter two. No organisation possesses exclusive access to AI technology, and the barriers to imitation are eroding rapidly. This analytical clarity explains why even early adopters cannot expect their advantages to persist indefinitely.

Historical Precedent and Technology Commoditisation

The Pattern of Technical Diffusion

The authors’ invocation of historical precedent is not merely rhetorical flourish; it reflects a well-documented pattern in technology adoption. When electricity became widely available, early industrial adopters gained temporary advantages in productivity and efficiency. Yet within a generation, electrical power became a commodity-a baseline requirement rather than a source of differentiation. The same pattern emerged with semiconductors, computing power, and internet connectivity. Each represented a genuine transformation of economic capability, yet each eventually became universally accessible.

This historical lens reveals a crucial distinction between transformative technologies and sources of competitive advantage. A technology can fundamentally reshape an industry whilst simultaneously failing to provide lasting differentiation for any single competitor. The value created by the technology accrues to the market as a whole, lifting all participants, rather than concentrating advantage in the hands of early movers.

The Homogenisation Effect

Wingate, Burns, and Barney emphasise that AI will function as a source of homogenisation rather than differentiation. As AI capabilities become standardised and widely distributed, companies using identical or near-identical AI platforms will produce increasingly similar products and services. Consider their example of multiple startups developing AI-powered digital mental health therapists: all building on comparable AI platforms, all producing therapeutically similar systems, all competing on factors beyond the underlying technology itself.

This homogenisation effect has profound strategic implications. It means that competitive advantage cannot reside in the technology itself but must instead emerge from what the authors term residual heterogeneity-the ability to create something unique that extends beyond what is universally accessible.

Challenging the Myths of Sustainable AI Advantage

Capital and Hardware Access

One common belief holds that companies with superior access to capital and computing infrastructure can sustain AI advantages. Wingate, Burns, and Barney systematically dismantle this assumption. Whilst it is true that organisations with the largest GPU farms can train the most capable models, scaling laws ensure diminishing returns. Recent models like GPT-4 and Gemini represent only marginal improvements over their predecessors despite requiring massive investments in data centres and engineering talent. The cost-benefit curve flattens dramatically at the frontier of capability.

Moreover, the hardware necessary for state-of-the-art AI training is becoming increasingly commoditised. Smaller models with 7 billion parameters now match the performance of yesterday’s 70-billion-parameter systems. This dual pressure-from above (ever-larger models with diminishing returns) and below (increasingly capable smaller models)-ensures that hardware access cannot sustain competitive advantage for long.
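The diminishing-returns argument can be made concrete with a stylised power-law scaling curve of the kind the scaling-law literature describes; the constants below are arbitrary assumptions chosen only to show the shape of the curve, not estimates for any real model.

```python
# Stylised scaling law: loss falls as a power of compute, so each extra order of
# magnitude of compute buys a smaller absolute improvement. Constants are invented.

def loss(compute: float, a: float = 10.0, alpha: float = 0.1) -> float:
    return a * compute ** (-alpha)

previous = None
for exponent in range(3, 9):                  # compute from 1e3 to 1e8, arbitrary units
    current = loss(10.0 ** exponent)
    gain = previous - current if previous is not None else float("nan")
    print(f"compute=1e{exponent}  loss={current:.3f}  gain over previous step={gain:.3f}")
    previous = current
```

Each tenfold increase in compute improves the hypothetical loss by less than the step before it, which is the flattening cost-benefit curve the authors describe.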

Proprietary Data and Algorithmic Innovation

Perhaps the most compelling argument for sustainable AI advantage has centred on proprietary data. Yet even this fortress is crumbling. The authors note that almost all AI models derive their training data from the same open or licensed datasets, producing remarkably similar performance profiles. Synthetic data generation is advancing rapidly, reducing the competitive moat that proprietary datasets once provided. Furthermore, AI models are becoming increasingly generalised-capable of broad competence across diverse tasks and easily adapted to proprietary applications with minimal additional training data.

The implication is stark: merely possessing large quantities of proprietary data will not provide lasting protection. As AI research advances toward greater statistical efficiency, the amount of proprietary data required to adapt general models to specific tasks will continue to diminish.

The Theoretical Foundations: Strategic Management Theory

Resource-Based View and Competitive Advantage

The analytical framework employed by Wingate, Burns, and Barney draws from the resource-based view (RBV) of the firm, a dominant paradigm in strategic management theory. Developed primarily by scholars including Jay Barney himself (one of the article’s authors), the RBV posits that sustainable competitive advantage derives from resources that are valuable, rare, difficult to imitate, and non-substitutable.

This theoretical tradition has proven remarkably durable precisely because it captures something fundamental about competition: advantages that can be easily replicated cannot persist. The RBV framework has successfully explained why some companies maintain competitive advantages whilst others do not, across industries and time periods. By applying this established theoretical lens to AI, Wingate, Burns, and Barney demonstrate that AI does not represent an exception to these fundamental principles-it exemplifies them.

The Distinction Between Transformative and Differentiating Technologies

A critical insight emerging from their analysis is the distinction between technologies that transform industries and technologies that confer competitive advantage. These are not synonymous. Electricity transformed manufacturing; the internet transformed commerce; semiconductors transformed computing. Yet none of these technologies provided lasting competitive advantage to any single organisation once they became widely adopted. The value they created was real and substantial, but it accrued to the market collectively rather than to individual competitors exclusively.

AI follows this established pattern. Its transformative potential is genuine and profound. It will reshape business processes, redefine skill requirements, unlock new analytical possibilities, and increase productivity across sectors. Yet these benefits will be available to all competitors, not reserved for the few. The strategic challenge for organisations is therefore not to seek advantage in the technology itself but to identify where advantage can still be found in an AI-saturated competitive landscape.

The Concept of Residual Heterogeneity

Beyond Technology: The Human Element

Wingate, Burns, and Barney introduce the concept of residual heterogeneity as the key to understanding where sustainable advantage lies in an AI-dominated future. Residual heterogeneity refers to the ability of a company to create something unique that extends beyond what is accessible to everyone else. It encompasses the distinctly human elements of business: creativity, insight, passion, and strategic vision.

This concept represents a return to first principles in competitive strategy. Before the AI era, before the digital revolution, before the internet, competitive advantage derived from human ingenuity, organisational culture, brand identity, customer relationships, and strategic positioning. The authors argue that these sources of advantage have not been displaced by technology; rather, they have become more important as technology itself becomes commoditised.

Practical Implications for Strategy

The strategic implication is clear: companies should not invest in AI with the expectation that the technology itself will provide lasting differentiation. Instead, they should view AI as a capability enabler-a tool that allows them to execute their distinctive strategy more effectively. The sustainable advantage lies not in having AI but in what the organisation does with AI that others cannot or will not replicate.

This might involve superior customer insight that informs how AI is deployed, distinctive brand positioning that AI helps reinforce, unique organisational culture that attracts talent capable of innovative AI applications, or strategic vision that identifies opportunities others overlook. In each case, the advantage derives from human creativity and strategic acumen, with AI serving as an accelerant rather than the source of differentiation.

Temporary Advantage and Strategic Timing

The Value of Being First

Whilst Wingate, Burns, and Barney emphasise that sustainable advantage cannot derive from AI, they implicitly acknowledge that temporary advantage has real strategic value. Early adopters can gain speed-to-market advantages, compress product development cycles, and accumulate learning curve advantages before competitors catch up. In fast-moving markets, a year or two of advantage can be decisive-sufficient to capture market share, build brand equity, establish customer switching costs, and create momentum that persists even after competitive parity is achieved.

The authors employ a surfing metaphor that captures this dynamic perfectly: every competitor can rent the same surfboard, but only a few will catch the first big wave. That wave may not last forever, but riding it well can carry a company far ahead. The temporary advantage is real; it is simply not sustainable in the long term.

Implications for Business Strategy and Innovation

Reorienting Strategic Thinking

The Wingate, Burns, and Barney framework calls for a fundamental reorientation of how organisations think about AI strategy. Rather than viewing AI as a source of competitive advantage, organisations should view it as a necessary capability-a baseline requirement for competitive participation. The strategic question is not “How can we use AI to gain advantage?” but rather “How can we use AI to execute our distinctive strategy more effectively than competitors?”

This reorientation has profound implications for resource allocation, talent acquisition, and strategic positioning. It suggests that organisations should invest in AI capabilities whilst simultaneously investing in the human creativity, strategic insight, and organisational culture that will ultimately determine competitive success. The technology is necessary but not sufficient.

The Enduring Importance of Human Creativity

Perhaps the most important implication of the authors’ analysis is the reassertion of human creativity as the ultimate source of competitive advantage. In an era of technological hype, it is easy to assume that machines will increasingly determine competitive outcomes. The Wingate, Burns, and Barney analysis suggests otherwise: as technology becomes commoditised, the distinctly human capacities for creativity, insight, and strategic vision become more valuable, not less.

This conclusion aligns with broader trends in strategic management theory, which have increasingly emphasised the importance of organisational culture, human capital, and strategic leadership. Technology amplifies these human capabilities; it does not replace them. The organisations that will thrive in an AI-saturated competitive landscape will be those that combine technological sophistication with distinctive human insight and creativity.

Conclusion: A Sobering Realism

Wingate, Burns, and Barney’s assertion that every serious technical advance ultimately becomes equally accessible represents a sobering but realistic assessment of competitive dynamics in the AI era. It challenges the prevailing narrative that early AI adoption will confer lasting competitive advantage. Instead, it suggests that organisations should approach AI with clear-eyed realism: as a transformative technology that will reshape industries and lift competitive baselines, but not as a source of sustainable differentiation.

The strategic imperative is therefore to invest in AI capabilities whilst simultaneously cultivating the human creativity, organisational culture, and strategic insight that will ultimately determine competitive success. The technology is essential; the human element is decisive. In this sense, the AI revolution represents not a departure from established principles of competitive advantage but a reaffirmation of them: lasting advantage derives from what is distinctive, difficult to imitate, and rooted in human creativity-not from technology that is inherently copyable and universally accessible.

References

1. https://www.sensenet.com/en/blog/posts/why-ai-can-provide-competitive-advantage

2. https://sloanreview.mit.edu/article/why-ai-will-not-provide-sustainable-competitive-advantage/

3. https://grtshw.substack.com/p/beyond-ai-human-insight-as-the-advantage

4. https://informedi.org/2025/05/16/why-ai-will-not-provide-sustainable-competitive-advantage/

5. https://shop.sloanreview.mit.edu/why-ai-will-not-provide-sustainable-competitive-advantage

"It is tempting for a company to believe that it will somehow benefit from AI while others will not, but history teaches a different lesson: Every serious technical advance ultimately becomes equally accessible to every company." - Quote: Wingate, et al

read more
Quote: Andrew Ng – AI guru, Coursera founder

“Someone that knows how to use AI will replace someone that doesn’t, even if AI itself won’t replace a person. So getting through the hype to give people the skills they need is critical.” – Andrew Ng – AI guru, Coursera founder

The distinction Andrew Ng draws between AI replacing jobs and AI-capable workers replacing their peers represents a fundamental reorientation in how we should understand technological disruption. Rather than framing artificial intelligence as an existential threat to employment, Ng’s observation-articulated at the World Economic Forum in January 2026-points to a more granular reality: the competitive advantage lies not in the technology itself, but in human mastery of it.

The Context of the Statement

Ng made these remarks during a period of intense speculation about AI’s labour market impact. Throughout 2025 and into early 2026, technology companies announced significant workforce reductions, and public discourse oscillated between utopian and apocalyptic narratives about automation. Yet Ng’s position, grounded in his extensive experience building AI systems and training professionals, cuts through this polarisation with empirical observation.

Speaking at Davos on 19 January 2026, Ng emphasised that “for many jobs, AI can only do 30-40 per cent of the work now and for the foreseeable future.” This technical reality underpins his broader argument: the challenge is not mass technological unemployment, but rather a widening productivity gap between those who develop AI competency and those who do not. The implication is stark-in a world where AI augments rather than replaces human labour, the person wielding these tools becomes exponentially more valuable than the person without them.

Understanding the Talent Shortage

The urgency behind Ng’s call for skills development is rooted in concrete market dynamics. According to research cited by Ng, demand for AI skills has grown approximately 21 per cent annually since 2019. More dramatically, AI jumped from the 6th most scarce technology skill globally to the 1st in just 18 months. Fifty-one per cent of technology leaders report struggling to find candidates with adequate AI capabilities.

This shortage exists not because AI expertise is inherently rare, but because structured pathways to acquiring it remain underdeveloped. Ng has observed developers reinventing foundational techniques-such as retrieval-augmented generation (RAG) document chunking or agentic AI evaluation methods-that already exist in the literature. These individuals expend weeks on problems that could be solved in days with proper foundational knowledge. The inefficiency is not a failure of intelligence but of education.
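As an illustration of the kind of foundational technique Ng has in mind, the sketch below shows basic RAG document chunking: splitting a document into overlapping, word-based chunks ready for embedding and retrieval. The chunk size and overlap are arbitrary assumptions, and real pipelines typically add smarter boundaries and metadata.

```python
# Minimal RAG-style document chunking sketch: fixed-size word chunks with overlap.
# Parameters are arbitrary; production systems usually chunk on semantic boundaries.

def chunk_document(text: str, chunk_size: int = 200, overlap: int = 40) -> list[str]:
    """Split text into word-based chunks, each overlapping its neighbour slightly."""
    words = text.split()
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunk = " ".join(words[start:start + chunk_size])
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(words):
            break
    return chunks

if __name__ == "__main__":
    sample = "lorem ipsum dolor sit amet " * 200   # stand-in for a real document
    pieces = chunk_document(sample)
    print(f"{len(pieces)} chunks; first chunk contains {len(pieces[0].split())} words")
```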

The Architecture of Ng’s Approach

Ng’s prescription comprises three interconnected elements: structured learning, practical application, and engagement with research literature. Each addresses a specific gap in how professionals currently approach AI development.

Structured learning provides the conceptual scaffolding necessary to avoid reinventing existing solutions. Ng argues that taking relevant courses-whether through Coursera, his own DeepLearning.AI platform, or other institutions-establishes a foundation in proven approaches and common pitfalls. This is not about shortcuts; rather, it is about building mental models that allow practitioners to make informed decisions about when to adopt existing solutions and when innovation is genuinely warranted.

Hands-on practice translates theory into capability. Ng uses the analogy of aviation: studying aerodynamics for years does not make one a pilot. Similarly, understanding AI principles requires experimentation with actual systems. Modern AI tools and frameworks lower the barrier to entry, allowing practitioners to build projects without starting from scratch. The combination of coursework and building creates a feedback loop where gaps in understanding become apparent through practical challenges.

Engagement with research provides early signals about emerging standards and techniques. Reading academic papers is demanding and less immediately gratifying than building applications, yet it offers a competitive advantage by exposing practitioners to innovations before they become mainstream.

The Broader Theoretical Context

Ng’s perspective aligns with and extends classical economic theories of technological adoption and labour market dynamics. The concept of “skill-biased technological change”-the idea that new technologies increase the relative demand for skilled workers-has been central to labour economics since the 1990s. Economists including David Autor and Frank Levy have documented how computerisation did not eliminate jobs wholesale but rather restructured labour markets, creating premium opportunities for those who could work effectively with new tools whilst displacing those who could not.

What distinguishes Ng’s analysis is its specificity to AI and its emphasis on the speed of adaptation required. Previous technological transitions-from mechanisation to computerisation-unfolded over decades, allowing gradual workforce adjustment. AI adoption is compressing this timeline significantly. The productivity gap Ng identifies is not merely a temporary friction but a structural feature of labour markets in the near term, creating urgent incentives for rapid upskilling.

Ng’s work also reflects insights from organisational learning theory, particularly the distinction between individual capability and organisational capacity. Companies can acquire AI tools readily; what remains scarce is the human expertise to deploy them effectively. This scarcity is not permanent-it reflects a lag between technological availability and educational infrastructure-but it creates a window of opportunity for those who invest in capability development now.

The Nuance on Job Displacement

Importantly, Ng does not claim that AI poses no labour market risks. He acknowledges that certain roles-contact centre positions, translation work, voice acting-face sharper disruption because AI can perform a higher percentage of the requisite tasks. However, he contextualises these as minority cases rather than harbingers of economy-wide displacement.

His framing rejects both technological determinism and complacency. AI will not automatically eliminate most jobs, but neither will workers remain unaffected if they fail to adapt. The outcome depends on human agency: specifically, on whether individuals and institutions invest in building the skills necessary to work alongside AI systems.

Implications for Professional Development

The practical consequence of Ng’s analysis is straightforward: professional development in AI is no longer optional for knowledge workers. The competitive dynamic he describes-where AI-capable workers become more productive and thus more valuable-creates a self-reinforcing cycle. Early adopters of AI skills gain productivity advantages, which translate into career advancement and higher compensation, which in turn incentivises further investment in capability development.

This dynamic also has implications for organisational strategy. Companies that invest in systematic training programmes for their workforce-ensuring broad-based AI literacy rather than concentrating expertise in specialist teams-position themselves to capture productivity gains more rapidly and broadly than competitors relying on external hiring alone.

The Hype-Reality Gap

Ng’s emphasis on “getting through the hype” addresses a specific problem in contemporary AI discourse. Public narratives about AI tend toward extremes: either utopian visions of abundance or dystopian scenarios of mass unemployment. Both narratives, in Ng’s view, obscure the practical reality that AI is a tool requiring human expertise to deploy effectively.

The hype creates two problems. First, it generates unrealistic expectations about what AI can accomplish autonomously, leading organisations to underinvest in the human expertise necessary to realise AI’s potential. Second, it creates anxiety that discourages people from engaging with AI development, paradoxically worsening the talent shortage Ng identifies.

By reframing the challenge as fundamentally one of skills and adaptation rather than technological inevitability, Ng provides both a more accurate assessment and a more actionable roadmap. The future is not predetermined by AI’s capabilities; it will be shaped by how quickly and effectively humans develop the competencies to work with these systems.

References

1. https://www.finalroundai.com/blog/andrew-ng-ai-tips-2026

2. https://www.moneycontrol.com/artificial-intelligence/davos-2026-andrew-ng-says-ai-driven-job-losses-have-been-overstated-article-13779267.html

3. https://www.storyboard18.com/brand-makers/davos-2026-andrew-ng-says-fears-of-ai-driven-job-losses-are-exaggerated-87874.htm

4. https://m.umu.com/ask/a11122301573853762262

"Someone that knows how to use AI will replace someone that doesn't, even if AI itself won't replace a person. So getting through the hype to give people the skills they need is critical." - Quote: Andrew Ng - AI guru. Coursera founder

read more
Term: Jevons paradox

Term: Jevons paradox

“Jevons paradox is an economic theory that states that as technological efficiency in using a resource increases, the total consumption of that resource also increases, rather than decreasing. Efficiency gains make the resource cheaper and more accessible, which in turn stimulates higher demand and new uses.” – Jevons paradox

Definition

The Jevons paradox is an economic theory stating that as technological efficiency in using a resource increases, the total consumption of that resource also increases rather than decreasing. Efficiency gains make the resource cheaper and more accessible, which stimulates higher demand and enables new uses, ultimately offsetting the conservation benefits of the initial efficiency improvement.

Core Mechanism: The Rebound Effect

The paradox operates through what economists call the rebound effect. When efficiency improvements reduce the cost of using a resource, consumers and businesses find it more economically attractive to use that resource more intensively. This increased affordability creates a feedback loop: lower costs lead to expanded consumption, which can completely negate or exceed the original efficiency gains.

The rebound effect exists on a spectrum. A rebound effect between 0 and 100 percent-known as “take-back”-means actual consumption is reduced but not as much as expected. However, when the rebound effect exceeds 100 percent, the Jevons paradox applies: efficiency gains cause overall consumption to increase absolutely.

Historical Origins and William Stanley Jevons

The paradox is named after William Stanley Jevons (1835-1882), an English economist and logician who first identified this phenomenon in 1865. Jevons observed that as steam engine efficiency improved throughout the Industrial Revolution, Britain’s total coal consumption increased rather than decreased. He recognised that more efficient steam engines made coal cheaper to use-both directly and indirectly, since more efficient engines could pump water from coal mines more economically-yet simultaneously made coal more valuable by enabling profitable new applications.

Jevons’ insight was revolutionary: efficiency improvements paradoxically expanded the scale of coal extraction and consumption. As coal became cheaper, incomes rose across the coal-fired industrial economy, and profits were continuously reinvested to expand production further. This dynamic became the engine of industrial capitalism’s growth.

Contemporary Examples

Energy and Lighting: Modern LED bulbs consume far less electricity than incandescent bulbs, yet overall lighting energy consumption has not decreased significantly. The reduced cost per light unit has prompted widespread installation of additional lights-in homes, outdoor spaces, and seasonal displays-extending usage hours and offsetting efficiency gains.

Transportation: Vehicles have become substantially more fuel-efficient, yet total fuel consumption continues to rise. When driving becomes cheaper, consumers can afford to drive faster, further, or more frequently than before. A 5 percent fuel efficiency gain might reduce consumption by only 2 percent, with the missing 3 percent attributable to increased driving behaviour.
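The arithmetic in the transport example can be made explicit. The sketch below treats the rebound effect as the share of expected savings that fails to materialise; the figures are taken from the fuel example above, plus one hypothetical case where the rebound exceeds 100 percent.

```python
# Minimal sketch: quantifying the rebound effect described above.
# Rebound = (expected savings - actual savings) / expected savings.
# A value between 0 and 1 is "take-back"; above 1, the Jevons paradox applies.

def rebound_effect(expected_savings_pct: float, actual_savings_pct: float) -> float:
    """Return the rebound effect as a fraction of the expected savings."""
    return (expected_savings_pct - actual_savings_pct) / expected_savings_pct

if __name__ == "__main__":
    # Fuel example from the text: a 5% efficiency gain cuts consumption by only 2%.
    r = rebound_effect(expected_savings_pct=5.0, actual_savings_pct=2.0)
    print(f"Rebound effect: {r:.0%}")  # 60% take-back

    # Hypothetical Jevons paradox case: consumption rises despite the efficiency
    # gain (actual 'savings' are negative), pushing the rebound above 100%.
    print(f"Rebound effect: {rebound_effect(5.0, -1.0):.0%}")  # 120%
```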

Systemic Scale: Research from 2007 suggested the Jevons paradox likely exists across 18 European countries and applies not merely to isolated sectors but to entire economies. As efficiency improvements reduce production costs across multiple industries, economic growth accelerates, driving increased extraction and consumption of natural resources overall.

Factors Influencing the Rebound Effect

The magnitude of the rebound effect varies significantly based on market maturity and income levels. In developed countries with already-high resource consumption, efficiency improvements produce weaker rebound effects because consumers and businesses have less capacity to increase usage further. Conversely, in developing economies or emerging markets, the same efficiency gains may trigger stronger rebound effects as newly affordable resources enable expanded consumption patterns.

Income also influences the effect: higher-income populations exhibit weaker rebound effects because they already consume resources at near-saturation levels, whereas lower-income populations may dramatically increase consumption when efficiency makes resources more affordable.

The Paradox Beyond Energy

The Jevons paradox extends beyond energy and resources. The principle applies wherever efficiency improvements reduce costs and expand accessibility. Disease control advances, for instance, have enabled humans and livestock to live at higher densities, eventually creating conditions for more severe outbreaks. Similarly, technological progress in production systems-including those powering the gig economy-achieves higher operational efficiency, making exploitation of natural inputs cheaper and more manageable, yet paradoxically increasing total resource demand.

Implications for Sustainability

The Jevons paradox presents a fundamental challenge to conventional sustainability strategies that rely primarily on technological efficiency improvements. Whilst efficiency gains lower costs and enhance output, they simultaneously increase demand and overall resource consumption, potentially increasing pollution and environmental degradation rather than reducing it.

Addressing the paradox requires systemic approaches beyond efficiency alone. These include transitioning towards circular economies, promoting sharing and collaborative consumption models, implementing legal limits on resource extraction, and purposefully constraining economic scale. Some theorists argue that setting deliberate limits on resource use-rather than pursuing ever-greater efficiency-may be necessary to achieve genuine sustainability. As one perspective suggests: “Efficiency makes growth. But limits make creativity.”

Contemporary Relevance

In the 21st century, as environmental pressures intensify and macroeconomic conditions suggest accelerating expansion rates, the Jevons paradox has become increasingly pronounced and consequential. The principle now applies to emerging technologies including artificial intelligence, where computational efficiency improvements may paradoxically increase overall energy demand and resource consumption as new applications become economically viable.

References

1. https://www.greenchoices.org/news/blog-posts/the-jevons-paradox-when-efficiency-leads-to-increased-consumption

2. https://www.resilience.org/stories/2020-06-17/jevons-paradox/

3. https://www.youtube.com/watch?v=MTfwhbfMnNc

4. https://lpcentre.com/articles/jevons-paradox-rethinking-sustainability

5. https://news.northeastern.edu/2025/02/07/jevons-paradox-ai-future/

6. https://adgefficiency.com/blog/jevons-paradox/

"Jevons paradox is an economic theory that states that as technological efficiency in using a resource increases, the total consumption of that resource also increases, rather than decreasing. Efficiency gains make the resource cheaper and more accessible, which in turn stimulates higher demand and new uses." - Term: Jevons paradox

read more
Quote: Fei-Fei Li – Godmother of AI

Quote: Fei-Fei Li – Godmother of AI

“Fearless is to be free. It’s to get rid of the shackles that constrain your creativity, your courage, and your ability to just get s*t done.” – Fei-Fei Li – Godmother of AI

Context of the Quote

This powerful statement captures Fei-Fei Li’s philosophy on perseverance in research and innovation, particularly within artificial intelligence (AI). Spoken in a discussion on enduring hardship, Li emphasises how fearlessness liberates the mind in the realm of imagination and hypothesis-driven work. Unlike facing uncontrollable forces like nature, intellectual pursuits allow one to push boundaries without fatal constraints, fostering curiosity and bold experimentation1. The quote underscores her belief that true freedom in science comes from shedding self-imposed limitations to drive progress.

Backstory of Fei-Fei Li

Fei-Fei Li, often hailed as the ‘Godmother of AI’, is the inaugural Sequoia Professor of Computer Science at Stanford University and a founding co-director of the Stanford Institute for Human-Centered Artificial Intelligence. Her journey began in Chengdu, China, where she grew up in a family disrupted by the Cultural Revolution. Her mother, an academic whose dreams were crushed by political turmoil, instilled rebellion and resilience. When Li was 16, her brave parents uprooted the family, leaving everything behind for America to offer their daughter better opportunities-far from ‘tiger parenting’, they encouraged independence amid poverty and cultural adjustment in New Jersey2.

Li excelled despite challenges, initially drawn to physics for its audacious questions, a passion honed at Princeton University. There, she learned to ask bold queries of nature, a mindset that pivoted her to AI. Her breakthrough came with ImageNet, a vast visual database that revived computer vision and catalysed the deep learning revolution, enabling systems to recognise images like humans. Today, she champions ‘human-centred AI’, stressing that people create, use, and must shape AI’s societal impact4,5. Li seeks ‘intellectual fearlessness’ in collaborators-the courage to tackle hard problems fully6.

Leading Theorists in AI and Fearlessness

Li’s ideas echo foundational AI thinkers who embodied fearless innovation:

  • Alan Turing: The father of theoretical computer science and AI, Turing proposed the ‘Turing Test’ in 1950, boldly envisioning machines mimicking human intelligence despite post-war scepticism. His universal machine concept laid AI’s computational groundwork.
  • John McCarthy: Coined ‘artificial intelligence’ in 1956 at the Dartmouth Conference, igniting the field. Fearlessly, he pioneered Lisp programming and time-sharing systems, pushing practical AI amid funding winters.
  • Marvin Minsky: MIT’s AI pioneer co-founded the field at Dartmouth. His ‘Society of Mind’ theory posited intelligence as emergent from simple agents, challenging monolithic brain models with audacious simplicity.
  • Geoffrey Hinton: The ‘Godfather of Deep Learning’, Hinton persisted through AI winters, proving neural networks viable. His backpropagation work and AlexNet contributions (built on Li’s ImageNet) revived the field1.
  • Yann LeCun & Yoshua Bengio: With Hinton, these ‘Godfathers of AI’ advanced convolutional networks and sequence learning, fearlessly advocating deep learning when dismissed as implausible.

Li builds on these legacies, shifting focus to ethical, human-augmented AI. She critiques ‘single genius’ histories, crediting collaborative bravery-like her parents’ and Princeton’s influence1,4. In the AI age, her call to fearlessness urges scientists and entrepreneurs to embrace uncertainty for humanity’s benefit3.

References

1. https://www.youtube.com/watch?v=KhnNgQoEY14

2. https://www.youtube.com/watch?v=z1g1kkA1M-8

3. https://mastersofscale.com/episode/how-to-be-fearless-in-the-ai-age/

4. https://tim.blog/2025/12/09/dr-fei-fei-li-the-godmother-of-ai/

5. https://www.youtube.com/watch?v=Ctjiatnd6Xk

6. https://www.youtube.com/shorts/hsHbSkpOu2A

7. https://www.youtube.com/shorts/qGLJeJ1xwLI

"Fearless is to be free. It’s to get rid of the shackles that constrain your creativity, your courage, and your ability to just get s*t done." - Quote: Fei-Fei Li

read more
Term: Out-of-the-money option

Term: Out-of-the-money option

“An out-of-the-money (OTM) option is an option contract that has no intrinsic value, meaning exercising it immediately would result in a loss, making it currently unprofitable but potentially profitable if the underlying asset’s price moves favorably before expiration.” – Out-of-the-money option

An out-of-the-money (OTM) option is an options contract that has no intrinsic value at the current underlying price. Exercising it immediately would generate no economic gain and, after transaction costs, would imply a loss, although the option may still be valuable because of the possibility that the underlying price moves favourably before expiry.1,3,5,6,7

Formal definition and moneyness

The moneyness of an option describes the relationship between the option’s strike price and the current spot price of the underlying asset. An option can be:

  • In the money (ITM) – positive intrinsic value.
  • At the money (ATM) – spot price approximately equal to strike.
  • Out of the money (OTM) – zero intrinsic value.1,3,4,5,6

For a single underlying with spot price S and strike price K:

  • A call option is OTM when S < K. Exercising would mean buying at K when the market lets you buy at S < K, so there is no gain.1,3,4,5,6,7
  • A put option is OTM when S > K. Exercising would mean selling at K when the market lets you sell at S > K, again implying no gain.1,3,4,5,6,7

The intrinsic value of standard European options is defined as:

  • Call intrinsic value: \max(S - K, 0).
  • Put intrinsic value: \max(K - S, 0).

An option is therefore OTM exactly when its intrinsic value equals 0.3,4,5,6

Intrinsic value vs time value

Even though an OTM option has no intrinsic value, it typically still has a positive premium. This premium is then made up entirely of time value (also called extrinsic value):3,5,6

  • Intrinsic value – immediate exercise value, which is 0 for an OTM option.
  • Time value – value arising from the probability that the option might become ITM before expiry.

Thus for an OTM option, the option price C (for a call) or P (for a put) satisfies:

  • C = \text{time value} when S < K.
  • P = \text{time value} when S > K.6

Examples of out-of-the-money options

  • OTM call: A stock trades at 30. A call option has strike 40. Buying via the option at 40 would be worse than buying directly at 30, so the call is OTM. Its intrinsic value is \max(30 - 40, 0) = 0.2,3,4
  • OTM put: The same stock trades at 30. A put has strike 20. Selling via the option at 20 would be worse than selling in the market at 30, so the put is OTM. Its intrinsic value is \max(20 - 30, 0) = 0.3,4,5
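Both worked examples can be verified directly from the intrinsic-value formulas. The short sketch below does so, and adds a hypothetical premium of 1.25 (an assumption for illustration, not a figure from the cited sources) to show how an OTM premium decomposes into pure time value.

```python
# Minimal sketch: intrinsic value and moneyness, following the definitions above.

def call_intrinsic(spot: float, strike: float) -> float:
    return max(spot - strike, 0.0)

def put_intrinsic(spot: float, strike: float) -> float:
    return max(strike - spot, 0.0)

def moneyness(option_type: str, spot: float, strike: float) -> str:
    intrinsic = call_intrinsic(spot, strike) if option_type == "call" else put_intrinsic(spot, strike)
    if intrinsic > 0:
        return "ITM"
    return "ATM" if spot == strike else "OTM"

if __name__ == "__main__":
    # Worked examples from the text: spot 30, call strike 40, put strike 20.
    print(call_intrinsic(30, 40), moneyness("call", 30, 40))  # 0.0 OTM
    print(put_intrinsic(30, 20), moneyness("put", 30, 20))    # 0.0 OTM

    # Hypothetical premium of 1.25 for the OTM call: with zero intrinsic value,
    # the entire premium is time (extrinsic) value.
    premium = 1.25  # illustrative assumption
    print(premium - call_intrinsic(30, 40))  # 1.25 of pure time value
```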

OTM options at and after expiry

At expiry a standard listed option that is out of the money expires worthless. For the buyer this means:

  • They lose the entire premium originally paid.2,3,5

For the seller (writer):

  • An OTM expiry is a favourable outcome – the option expires with no intrinsic value and the writer keeps the premium as profit.2,5

Why OTM options still have value

Despite having no intrinsic value, OTM options are often actively traded because:

  • They are cheaper than at-the-money or in-the-money options, so they provide high leverage to movements in the underlying.2,3,5
  • They embed a non-linear payoff that becomes valuable if the underlying makes a large move in the right direction before expiry.
  • Their price reflects implied volatility, time to maturity and interest rates, all of which influence the probability of finishing in the money.

This makes OTM options attractive for speculative strategies seeking large percentage returns, as well as for hedging tail risks (for example, buying deep OTM puts as crash insurance). However, they have a higher probability of expiring worthless, so most OTM options do not end up being exercised.2,3,5

OTM options in European option valuation

For European-style options – exercisable only at expiry – the value of an OTM option is purely the discounted expected payoff under a risk-neutral measure. In continuous-time models such as Black – Scholes – Merton, even a deeply OTM option has a strictly positive value whenever the time to expiry and volatility are non-zero, because there is always some probability, however small, that the option will finish in the money.

In the Black – Scholes – Merton model, the price of a European call option on a non-dividend-paying stock is

C = S\,N(d_1) - K e^{-rT} N(d_2)

and for a European put option

P = K e^{-rT} N(-d_2) - S\,N(-d_1)

where N(\cdot) is the standard normal cumulative distribution, r is the risk-free rate, T is time to maturity, and d_1, d_2 depend on S, K, r, T and volatility \sigma. For OTM options, these formulas yield a positive price driven entirely by time value.
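As a concrete illustration of these formulas, the sketch below implements the call and put expressions, including the standard definitions of d_1 and d_2, and prices the deliberately out-of-the-money call (strike 40) and put (strike 20) from the earlier examples. The spot, rate, volatility and maturity inputs are illustrative assumptions; the point is simply that both prices are positive despite zero intrinsic value.

```python
# Minimal sketch of the Black-Scholes-Merton formulas quoted above,
# using illustrative inputs; no dividends, European exercise.
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal cumulative distribution N(x)."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bsm_call_put(S: float, K: float, r: float, sigma: float, T: float) -> tuple[float, float]:
    """Return (call, put) prices with d1, d2 as in the standard BSM derivation."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    call = S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
    put = K * exp(-r * T) * norm_cdf(-d2) - S * norm_cdf(-d1)
    return call, put

if __name__ == "__main__":
    # Illustrative inputs: spot 30, one year to expiry, 2% rate, 25% volatility.
    S, r, sigma, T = 30.0, 0.02, 0.25, 1.0
    call_otm, _ = bsm_call_put(S, K=40.0, r=r, sigma=sigma, T=T)  # call OTM: S < K
    _, put_otm = bsm_call_put(S, K=20.0, r=r, sigma=sigma, T=T)   # put OTM: S > K
    # Both contracts have zero intrinsic value, so these prices are pure time value.
    print(f"OTM call (K=40): {call_otm:.4f}")
    print(f"OTM put  (K=20): {put_otm:.4f}")
```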

Strategic uses of OTM options

OTM options are integral to many derivatives strategies, for example:

  • Speculative directional bets: Buying OTM calls to express a bullish view or OTM puts for a bearish view, targeting high percentage gains if the underlying moves sharply.
  • Income strategies: Writing OTM calls (covered calls) to earn premium while capping upside beyond the strike; or writing OTM puts to potentially acquire the underlying at an effective discounted price if assigned.
  • Hedging and risk management: Buying OTM puts as portfolio insurance against severe market declines, or constructing option spreads (for example, bull call spreads, bear put spreads) with OTM legs to shape payoff profiles cost-effectively.
  • Volatility and tail-risk trades: OTM options are particularly sensitive to changes in implied volatility, making them useful in volatility trading and in expressing views on extreme events.

Key risks and considerations

  • High probability of expiry worthless: Because the underlying must move sufficiently for the option to become ITM before or at expiry, many OTM options never pay off.2,3,5
  • Time decay (theta): As expiry approaches, the time value of an OTM option erodes, often rapidly, if the expected move does not materialise.
  • Liquidity and bid-ask spreads: Deep OTM options can suffer from wider spreads and lower liquidity, increasing transaction costs.
  • Leverage risk: Although the premium is small, the percentage loss can be 100 percent, and repeated speculative use without risk control can be hazardous.

Best related strategy theorists: Fischer Black, Myron Scholes and Robert C. Merton

The concept of an OTM option is fundamental to options pricing theory, and its modern analytical treatment is inseparable from the work of Fischer Black, Myron Scholes and Robert C. Merton, who together developed the Black – Scholes – Merton (BSM) model for pricing European options.

Fischer Black (1938-1995)

Fischer Black was an American economist and partner at Goldman Sachs. Trained originally in physics, he brought a quantitative, model-driven perspective to finance. In 1973 he co-authored the seminal paper “The Pricing of Options and Corporate Liabilities” with Myron Scholes, introducing the continuous-time model that now bears their names.

Black’s work is central to understanding OTM options because the BSM framework shows precisely how time to expiry, volatility and interest rates generate strictly positive values for options with zero intrinsic value. Within this model, the value of an OTM option is the discounted expected payoff under a lognormal distribution for the underlying asset price. The pricing formulas make clear that an OTM option’s value is highly sensitive to volatility and time – a key insight for both hedging and speculative use of OTM contracts.

Myron Scholes (b. 1941)

Myron Scholes is a Canadian-born American economist and Nobel laureate. After academic posts at institutions such as MIT and Stanford, he became widely known for his role in developing modern options pricing theory. Scholes shared the 1997 Nobel Prize in Economic Sciences with Robert Merton for their method of determining the value of derivatives.

Scholes’s contribution to the understanding of OTM options lies in demonstrating, together with Black, that one can construct a dynamically hedged portfolio of the underlying asset and a risk-free bond that replicates the option’s payoff. This replication argument gives rise to the risk-neutral valuation framework in which the fair value of even a deeply OTM option is derived from the probability-weighted payoffs under a no-arbitrage condition. Under this framework, the distinction between ITM, ATM and OTM options is naturally captured by their different sensitivities (“Greeks”) to underlying price and volatility.

Robert C. Merton (b. 1944)

Robert C. Merton, an American economist and Nobel laureate, independently developed a continuous-time model for pricing options and general contingent claims around the same time as Black and Scholes. His 1973 paper “Theory of Rational Option Pricing” extended and generalised the framework, placing it within a broader stochastic calculus and intertemporal asset pricing context.

Merton’s work deepened the theoretical foundations underlying OTM option valuation. He formalised the idea that options are contingent claims and showed how their value can be derived from the underlying asset’s dynamics and market conditions. For OTM options in particular, Merton’s extensions clarified how factors such as dividends, stochastic interest rates and more complex payoff structures affect the time value and hence the price, even when intrinsic value is zero.

Relationship between their theory and out-of-the-money options

Together, Black, Scholes and Merton transformed the treatment of OTM options from a qualitative notion – “currently unprofitable to exercise” – into a rigorously quantified object embedded in a complete market model. Their work explains:

  • Why an OTM option commands a positive price despite zero intrinsic value.
  • How that price should depend on volatility, time to expiry, interest rates and underlying price level.
  • How traders can hedge OTM options dynamically using the underlying asset (delta hedging).
  • How to compare and structure strategies involving multiple OTM options, such as spreads and strangles, using model-implied values and Greeks.

While many other theorists have extended option pricing and trading strategy – including researchers in stochastic volatility, jumps and behavioural finance – the work of Black, Scholes and Merton remains the core reference point for understanding, valuing and deploying out-of-the-money options in both academic theory and practical derivatives markets.

References

1. https://www.ig.com/en/glossary-trading-terms/out-of-the-money-definition

2. https://www.icicidirect.com/ilearn/futures-and-options/articles/what-is-out-of-the-money-or-otm-in-options

3. https://www.sofi.com/learn/content/in-the-money-vs-out-of-the-money/

4. https://smartasset.com/investing/in-the-money-vs-out-of-the-money

5. https://www.avatrade.com/education/market-terms/what-is-otm

6. https://www.interactivebrokers.com/campus/glossary-terms/out-of-the-money/

7. https://www.fidelity.com/learning-center/smart-money/what-are-options

"An out-of-the-money (OTM) option is an option contract that has no intrinsic value, meaning exercising it immediately would result in a loss, making it currently unprofitable but potentially profitable if the underlying asset's price moves favorably before expiration." - Term: Out-of-the-money option

read more
Quote: Fei-Fei Li – Godmother of AI

Quote: Fei-Fei Li – Godmother of AI

“In the AI age, trust cannot be outsourced to machines. Trust is fundamentally human. It’s at the individual level, community level, and societal level.” – Fei-Fei Li – Godmother of AI

The Quote and Its Significance

This statement encapsulates a profound philosophical stance on artificial intelligence that challenges the prevailing techno-optimism of our era. Rather than viewing AI as a solution to human problems-including the problem of trust itself-Fei-Fei Li argues for the irreducible human dimension of trust. In an age where algorithms increasingly mediate our decisions, relationships, and institutions, her words serve as a clarion call: trust remains fundamentally a human endeavour, one that cannot be delegated to machines, regardless of their sophistication.

Who Is Fei-Fei Li?

Fei-Fei Li stands as one of the most influential voices in artificial intelligence research and ethics today. As co-director of Stanford’s Institute for Human-Centered Artificial Intelligence (HAI), founded in 2019, she has dedicated her career to ensuring that AI development serves humanity rather than diminishes it. Her influence extends far beyond academia: she was appointed to the United Nations Scientific Advisory Board, named one of TIME’s 100 Most Influential People in AI, and has held leadership roles at Google Cloud and Twitter.

Li’s most celebrated contribution to AI research is the creation of ImageNet, a monumental dataset that catalysed the deep learning revolution. This achievement alone would secure her place in technological history, yet her impact extends into the ethical and philosophical dimensions of AI development. In 2024, she co-founded World Labs, an AI startup focused on spatial intelligence systems designed to augment human capability-a venture that raised $230 million and exemplifies her commitment to innovation grounded in ethical principles.

Beyond her technical credentials, Li co-founded AI4ALL, a non-profit organisation dedicated to promoting diversity and inclusion in the AI sector, reflecting her conviction that AI’s future must be shaped by diverse voices and perspectives.

The Core Philosophy: Human-Centred AI

Li’s assertion about trust emerges from a broader philosophical framework that she terms human-centred artificial intelligence. This approach fundamentally rejects the notion that machines should replace human judgment, particularly in domains where human dignity, autonomy, and values are at stake.

In her public statements, Li has articulated a concern that resonates throughout her work: the language we use about AI shapes how we develop and deploy it. She has expressed deep discomfort with the word “replace” when discussing AI’s relationship to human labour and capability. Instead, she advocates for framing AI as augmenting or enhancing human abilities rather than supplanting them. This linguistic shift reflects a philosophical commitment: AI should amplify human creativity and ingenuity, not reduce humans to mere task-performers.

Her reasoning is both biological and existential. As she has explained, humans are slower runners, weaker lifters, and less capable calculators than machines-yet “we are so much more than those narrow tasks.” To allow AI to define human value solely through metrics of speed, strength, or computational power is to fundamentally misunderstand what makes us human. Dignity, creativity, moral judgment, and relational capacity cannot be outsourced to algorithms.

The Trust Question in Context

Li’s statement about trust addresses a critical vulnerability in contemporary society. As AI systems increasingly mediate consequential decisions-from healthcare diagnoses to criminal sentencing, from hiring decisions to financial lending-society faces a temptation to treat these systems as neutral arbiters. The appeal is understandable: machines do not harbour conscious bias, do not tire, and can process vast datasets instantaneously.

Yet Li’s insight cuts to the heart of a fundamental misconception. Trust, in her formulation, is not merely a technical problem to be solved through better algorithms or more transparent systems. Trust is a social and moral phenomenon that exists at three irreducible levels:

  • Individual level: The personal relationships and judgments we make about whether to rely on another person or institution
  • Community level: The shared norms and reciprocal commitments that bind groups together
  • Societal level: The institutional frameworks and collective agreements that enable large-scale cooperation

Each of these levels involves human agency, accountability, and the capacity to be wronged. A machine cannot be held morally responsible; a human can. A machine cannot understand the context of a community’s values; a human can. A machine cannot participate in the democratic deliberation necessary to shape societal institutions; a human must.

Leading Theorists and Related Intellectual Traditions

Li’s thinking draws upon and contributes to several important intellectual traditions in philosophy, ethics, and social theory:

Human Dignity and Kantian Ethics

At the philosophical foundation of Li’s work lies a commitment to human dignity-the idea that humans possess intrinsic worth that cannot be reduced to instrumental value. This echoes Immanuel Kant’s categorical imperative: humans must never be treated merely as means to an end, but always also as ends in themselves. When AI systems reduce human workers to optimisable tasks, or when algorithmic systems treat individuals as data points rather than moral agents, they violate this fundamental principle. Li’s insistence that “if AI applications take away that sense of dignity, there’s something wrong” is fundamentally Kantian in its ethical architecture.

Feminist Technology Studies and Care Ethics

Li’s emphasis on relationships, context, and the irreducibility of human judgment aligns with feminist critiques of technology that emphasise care, interdependence, and situated knowledge. Scholars in this tradition-including Donna Haraway, Lucy Suchman, and Safiya Noble-have long argued that technology is never neutral and that the pretence of objectivity often masks particular power relations. Li’s work similarly insists that AI development must be grounded in explicit values and ethical commitments rather than presented as value-neutral problem-solving.

Social Epistemology and Trust

The philosophical study of trust has been enriched in recent decades by work in social epistemology-the study of how knowledge is produced and validated collectively. Philosophers such as Miranda Fricker have examined how trust is distributed unequally across society, and how epistemic injustice occurs when certain voices are systematically discredited. Li’s emphasis on trust at the community and societal levels reflects this sophisticated understanding: trust is not a technical property but a social achievement that depends on fair representation, accountability, and recognition of diverse forms of knowledge.

The Ethics of Artificial Intelligence

Li contributes to and helps shape the emerging field of AI ethics, which includes thinkers such as Stuart Russell, Timnit Gebru, and Kate Crawford. These scholars have collectively argued that AI development cannot be separated from questions of power, justice, and human flourishing. Russell’s work on value alignment-ensuring that AI systems pursue goals aligned with human values-provides a technical framework for the philosophical commitments Li articulates. Gebru and Crawford’s work on data justice and algorithmic bias demonstrates how AI systems can perpetuate and amplify existing inequalities, reinforcing Li’s conviction that human oversight and ethical deliberation remain essential.

The Philosophy of Technology

Li’s thinking also engages with classical philosophy of technology, particularly the work of thinkers like Don Ihde and Peter-Paul Verbeek, who have argued that technologies are never mere tools but rather reshape human practices, relationships, and possibilities. The question is not whether AI will change society-it will-but whether that change will be guided by human values or will instead impose its own logic upon us. Li’s advocacy for light-handed, informed regulation rather than heavy-handed top-down control reflects a nuanced understanding that technology development requires active human governance, not passive acceptance.

The Broader Context: AI’s Transformative Power

Li’s emphasis on trust must be understood against the backdrop of AI’s extraordinary transformative potential. She has stated that she believes “our civilisation stands on the cusp of a technological revolution with the power to reshape life as we know it.” Some experts, including AI researcher Kai-Fu Lee, have argued that AI will change the world more profoundly than electricity itself.

This is not hyperbole. AI systems are already reshaping healthcare, scientific research, education, employment, and governance. Deep neural networks have demonstrated capabilities that surprise even their creators-as exemplified by AlphaGo’s unexpected moves in the ancient game of Go, which violated centuries of human strategic wisdom yet proved devastatingly effective. These systems excel at recognising patterns that humans cannot perceive, at scales and speeds beyond human comprehension.

Yet this very power makes Li’s insistence on human trust more urgent, not less. Precisely because AI is so powerful, precisely because it operates according to logics we cannot fully understand, we cannot afford to outsource trust to it. Instead, we must maintain human oversight, human accountability, and human judgment at every level where AI affects human lives and communities.

The Challenge Ahead

Li frames the challenge before us as fundamentally moral rather than merely technical. Engineers can build more transparent algorithms; ethicists can articulate principles; regulators can establish guardrails. But none of these measures can substitute for the hard work of building trust-at the individual level through honest communication and demonstrated reliability, at the community level through inclusive deliberation and shared commitment to common values, and at the societal level through democratic institutions that remain responsive to human needs and aspirations.

Her vision is neither techno-pessimistic nor naïvely optimistic. She does not counsel fear or rejection of AI. Rather, she advocates for what she calls “very light-handed and informed regulation”-guardrails rather than prohibition, guidance rather than paralysis. But these guardrails must be erected by humans, for humans, in service of human flourishing.

In an era when trust in institutions has eroded-when confidence in higher education, government, and media has declined precipitously-Li’s message carries particular weight. She acknowledges the legitimate concerns about institutional trustworthiness, yet argues that the solution is not to replace human institutions with algorithmic ones, but rather to rebuild human institutions on foundations of genuine accountability, transparency, and commitment to human dignity.

Conclusion: Trust as a Human Responsibility

Fei-Fei Li’s statement that “trust cannot be outsourced to machines” is ultimately a statement about human responsibility. In the age of artificial intelligence, we face a choice: we can attempt to engineer our way out of the messy, difficult work of building and maintaining trust, or we can recognise that trust is precisely the work that remains irreducibly human. Li’s life’s work-from ImageNet to the Stanford HAI Institute to World Labs-represents a sustained commitment to the latter path. She insists that we can harness AI’s extraordinary power whilst preserving what makes us human: our capacity for judgment, our commitment to dignity, and our ability to trust one another.

References

1. https://www.hoover.org/research/rise-machines-john-etchemendy-and-fei-fei-li-our-ai-future

2. https://economictimes.com/magazines/panache/stanford-professor-calls-out-the-narrative-of-ai-replacing-humans-says-if-ai-takes-away-our-dignity-something-is-wrong/articleshow/122577663.cms

3. https://www.nisum.com/nisum-knows/top-10-thought-provoking-quotes-from-experts-that-redefine-the-future-of-ai-technology

4. https://www.goodreads.com/author/quotes/6759438.Fei_Fei_Li

"In the AI age, trust cannot be outsourced to machines. Trust is fundamentally human. It’s at the individual level, community level, and societal level." - Quote: Fei-Fei Li

read more
Term: Barrier option

Term: Barrier option

“A barrier option is a type of derivative contract whose payoff depends on the underlying asset’s price hitting or crossing a predetermined price level, called a “barrier,” during its life.” – Barrier option

A barrier option is an exotic, path-dependent option whose payoff and even validity depend on whether the price of an underlying asset hits, crosses, or breaches a specified barrier level during the life of the contract.1,3,6 In contrast to standard (vanilla) European or American options, which depend only on the underlying price at expiry (and, for Americans, the ability to exercise early), barrier options embed an additional trigger condition linked to the price path of the underlying.3,6

Core definition and mechanics

Formally, a barrier option is a derivative contract that grants the holder a right (but not the obligation) to buy or sell an underlying asset at a pre-agreed strike price if, and only if, a separate barrier level has or has not been breached during the option’s life.1,3,4,6 The barrier can cause the option to:

  • Activate (knock-in) when breached, or
  • Extinguish (knock-out) when breached.1,2,3,4,5

Key characteristics:

  • Exotic option: Barrier options are classified as exotic because they include more complex features than standard European or American options.1,3,6
  • Path dependence: The payoff depends on the entire price path of the underlying – not just the terminal price at maturity.3,6 What matters is whether the barrier was touched at any time before expiry.
  • Conditional payoff: The option’s value or existence is conditional on the barrier event. If the condition is not met, the option may never become active or may cease to exist before expiry.1,2,3,4
  • Over-the-counter (OTC) trading: Barrier options are predominantly customised and traded OTC between institutions, corporates, and sophisticated investors, rather than on standardised exchanges.3

Structural elements

Any barrier option can be described by a small set of structural parameters:

  • Underlying asset: The asset from which value is derived, such as an equity, FX rate, interest rate, commodity, or index.1,3
  • Option type: Call (right to buy) or put (right to sell).3
  • Exercise style: Most barrier options are European-style, exercisable only at expiry. In practice, the barrier monitoring is typically continuous or at defined intervals, even though exercise itself is European.3,6
  • Strike price: The price at which the underlying can be bought or sold if the option is alive at exercise.1,3
  • Barrier level: The critical price of the underlying that, when touched or crossed, either activates or extinguishes the option.1,3,6
  • Barrier direction:
    • Up: Barrier is set above the initial underlying price.
    • Down: Barrier is set below the initial underlying price.3,8
  • Barrier effect:
    • Knock-in: Becomes alive only if the barrier is breached.
    • Knock-out: Ceases to exist if the barrier is breached.1,2,3,4,5
  • Monitoring convention: Continuous monitoring (at all times) or discrete monitoring (at specific dates or times). Continuous monitoring is the canonical case in theory and common in OTC practice.
  • Rebate: An optional fixed (or sometimes functional) payment that may be made if the option is knocked out, compensating the holder partly for the lost optionality.3
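These structural parameters map naturally onto a small data structure. The sketch below is one possible representation; the field and enum names, and the example contract, are illustrative rather than any market-standard schema.

```python
# Minimal sketch: the structural parameters above as a data structure.
# Field and enum names are illustrative, not a market-standard schema.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class OptionType(Enum):
    CALL = "call"
    PUT = "put"

class BarrierDirection(Enum):
    UP = "up"      # barrier set above the initial underlying price
    DOWN = "down"  # barrier set below the initial underlying price

class BarrierEffect(Enum):
    KNOCK_IN = "knock-in"    # option activates when the barrier is breached
    KNOCK_OUT = "knock-out"  # option extinguishes when the barrier is breached

@dataclass
class BarrierOption:
    underlying: str            # e.g. an FX pair or equity ticker
    option_type: OptionType
    strike: float
    barrier: float
    direction: BarrierDirection
    effect: BarrierEffect
    expiry_years: float
    continuous_monitoring: bool = True
    rebate: Optional[float] = None  # paid on knock-out, when specified

# Illustrative example: a down-and-in put on an FX rate.
example = BarrierOption("EURUSD", OptionType.PUT, strike=1.05, barrier=1.00,
                        direction=BarrierDirection.DOWN,
                        effect=BarrierEffect.KNOCK_IN, expiry_years=0.5)
```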

Types of barrier options

The main taxonomy combines direction (up/down) with effect (knock-in/knock-out), and applies to either calls or puts.1,2,3,6

1. Knock-in options

Knock-in barrier options are dormant initially and become standard options only if the underlying price crosses the barrier at some point before expiry.1,2,3,4

  • Up-and-in: The option is activated only if the underlying price rises above a barrier set above the initial price.1,2,3
  • Down-and-in: The option is activated only if the underlying price falls below a barrier set below the initial price.1,2,3

Once activated, a knock-in barrier option typically behaves like a vanilla European option with the same strike and expiry. If the barrier is never reached, the knock-in option expires worthless.1,3

2. Knock-out options

Knock-out options are initially alive but are extinguished immediately if the barrier is breached at any time before expiry.1,2,3,4

  • Up-and-out: The option is cancelled if the underlying price rises above a barrier set above the initial price.1,3
  • Down-and-out: The option is cancelled if the underlying price falls below a barrier set below the initial price.1,3

Because the option can disappear before maturity, the premium is typically lower than that of an equivalent vanilla option, all else equal.1,2,3

3. Rebate barrier options

Some barrier structures include a rebate, a pre-specified cash amount that is paid if the barrier condition is (or is not) met. For example, a knock-out option may pay a rebate when it is knocked out, offering partial compensation for the loss of the remaining optionality.3

Path dependence and payoff character

Barrier options are described as path-dependent because their payoff depends on the trajectory of the underlying price over time, not only on its value at expiry.3,6

  • For a knock-in, the central question is: Was the barrier ever touched? If yes, the payoff at expiry is that of the corresponding vanilla option; if not, the payoff is zero (or a rebate if specified).
  • For a knock-out, the question is: Was the barrier ever touched before expiry? If yes, the payoff is zero from that time onwards (again, possibly plus a rebate); if not, the payoff at expiry equals that of a vanilla option.1,3

Because of this path dependence, pricing and hedging barrier options require modelling not just the distribution of the underlying price at maturity, but also the probability of the price path crossing the barrier level at any time before that.3,6
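The payoff logic for knock-in and knock-out calls can be written in a few lines, approximating continuous monitoring by checking every observed price on the path. The function below is a minimal sketch; the path, strike and barrier values are illustrative.

```python
# Minimal sketch: payoff of a barrier call at expiry, given an observed price path.
# Continuous monitoring is approximated by checking every observed price.

def barrier_call_payoff(path: list[float], strike: float, barrier: float,
                        knock_in: bool, barrier_is_up: bool, rebate: float = 0.0) -> float:
    """Payoff at expiry of a knock-in or knock-out barrier call."""
    breached = any(p >= barrier for p in path) if barrier_is_up else any(p <= barrier for p in path)
    vanilla = max(path[-1] - strike, 0.0)
    if knock_in:
        # Alive only if the barrier was touched; otherwise pay the rebate (if any).
        return vanilla if breached else rebate
    # Knock-out: extinguished if the barrier was touched.
    return rebate if breached else vanilla

if __name__ == "__main__":
    path = [100, 104, 109, 111, 107, 115]  # illustrative price path
    # Up-and-in call, strike 105, barrier 110: activated (111 >= 110), pays 115 - 105 = 10.
    print(barrier_call_payoff(path, strike=105, barrier=110, knock_in=True, barrier_is_up=True))
    # Up-and-out call with the same terms: knocked out, pays 0 (no rebate specified).
    print(barrier_call_payoff(path, strike=105, barrier=110, knock_in=False, barrier_is_up=True))
```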

Pricing: connection to Black – Scholes – Merton

The pricing of barrier options, under the classical assumptions of frictionless markets, constant volatility, and lognormal underlying dynamics, is grounded in the Black – Scholes – Merton (BSM) framework. In the BSM world, the underlying price process is often modelled as a geometric Brownian motion:

dS_t = \mu S_t \, dt + \sigma S_t \, dW_t

Under risk-neutral valuation, the drift \mu is replaced by the risk-free rate r, and the barrier option price is the discounted risk-neutral expected payoff. Closed-form expressions are available for many standard barrier structures (e.g. up-and-out or down-and-in calls and puts) under continuous monitoring, building on and extending the vanilla Black – Scholes formula.

The pricing techniques involve:

  • Analytical solutions for simple, continuously monitored barriers with constant parameters, often derived via solution of the associated partial differential equation (PDE) with absorbing or activating boundary conditions at the barrier.
  • Reflection principle methods for Brownian motion, which allow the derivation of hitting probabilities and related terms.
  • Numerical methods (finite differences, Monte Carlo with barrier adjustments, tree methods) for more complex, discretely monitored, or path-dependent variants with time-varying barriers or stochastic volatility.

Relative to vanilla options, barrier options in the BSM model are typically cheaper because the additional condition (activation or extinction) reduces the set of scenarios in which the holder receives the full vanilla payoff.1,2,3
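To make the pricing connection concrete, the sketch below runs a simple risk-neutral Monte Carlo under geometric Brownian motion with discrete monitoring at each simulation step, pricing an up-and-out call alongside the corresponding vanilla call so the discount to the vanilla premium is visible. All numerical inputs are illustrative assumptions; a production pricer would use the analytical or refined numerical methods described above.

```python
# Minimal sketch: Monte Carlo pricing of an up-and-out call under GBM,
# with discrete monitoring at each simulation step. Inputs are illustrative.
import math
import random

def mc_up_and_out_call(S0: float, K: float, barrier: float, r: float, sigma: float,
                       T: float, steps: int = 128, n_paths: int = 20_000,
                       seed: int = 42) -> tuple[float, float]:
    """Return (up-and-out call price, vanilla call price) by risk-neutral simulation."""
    rng = random.Random(seed)
    dt = T / steps
    drift = (r - 0.5 * sigma ** 2) * dt
    vol = sigma * math.sqrt(dt)
    disc = math.exp(-r * T)
    barrier_sum = 0.0
    vanilla_sum = 0.0
    for _ in range(n_paths):
        S = S0
        knocked_out = False
        for _ in range(steps):
            S *= math.exp(drift + vol * rng.gauss(0.0, 1.0))
            if S >= barrier:
                knocked_out = True  # up-and-out: option extinguished
        payoff = max(S - K, 0.0)
        vanilla_sum += payoff
        if not knocked_out:
            barrier_sum += payoff
    return disc * barrier_sum / n_paths, disc * vanilla_sum / n_paths

if __name__ == "__main__":
    barrier_price, vanilla_price = mc_up_and_out_call(
        S0=100.0, K=100.0, barrier=130.0, r=0.02, sigma=0.25, T=1.0)
    # The knock-out condition removes the most profitable paths,
    # so the barrier option prices below the vanilla call.
    print(f"Up-and-out call: {barrier_price:.3f}  Vanilla call: {vanilla_price:.3f}")
```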

Strategic uses and motives

Barrier options are used across markets where participants either want finely tuned risk protection or to express a conditional view on future price movements.1,2,3,5

1. Cost-efficient hedging

  • Corporates may hedge FX or interest-rate exposures using knock-out or knock-in structures to reduce premiums. For instance, a corporate worried about a sharp depreciation in a currency might buy a down-and-in put that only activates if the exchange rate falls below a critical business threshold, thereby paying less premium than for a plain vanilla put.3
  • Investors may use barrier puts to protect against tail-risk events while accepting no protection for moderate moves, again in exchange for a lower upfront cost.

2. Targeted speculation

  • Barrier options allow traders to express conditional views: for example, that an asset will rally, but only after breaking through a resistance level, or that a decline will occur only if a support level is breached.2,3
  • Up-and-in calls or down-and-in puts are often used to express such conditional breakout scenarios.

3. Structuring and yield enhancement

  • Barrier options are a staple ingredient in structured products offered by banks to clients seeking yield enhancement with contingent downside or upside features.
  • For example, a range accrual, reverse convertible, or autocallable note may incorporate barriers that determine whether coupons are paid or capital is protected.

Risk characteristics

Barrier options introduce specific risks beyond those of standard options:

  • Gap risk and jump risk: If the underlying price jumps across the barrier between monitoring times or overnight, the option may be suddenly knocked in or out, creating discontinuous changes in value and hedging exposure.
  • Model risk: Pricing relies heavily on assumptions about volatility, barrier monitoring, and the nature of price paths. Mis-specification can lead to significant mispricing.
  • Hedging complexity: Because payoff and survival depend on path, the option’s sensitivity (delta, gamma, vega) can change abruptly as the underlying approaches the barrier. This makes hedging more complex and costly compared with vanilla options.
  • Liquidity risk: OTC nature and customisation mean secondary market liquidity is often limited.3

Barrier options and the Black – Scholes – Merton lineage

The natural theoretical anchor for barrier options is the Black – Scholes – Merton framework for option pricing, originally developed for vanilla European options. Although barrier options were not the primary focus of the original 1973 Black – Scholes paper or Merton’s parallel contributions, their pricing logic is an extension of the same continuous-time, arbitrage-free valuation principles.

Among the three names, Robert C. Merton is often most closely associated with the broader theoretical architecture that supports exotic options such as barriers. His work generalised the option pricing model to a much wider class of contingent claims and introduced the dynamic programming and stochastic calculus techniques that underpin modern treatment of path-dependent derivatives.

Related strategy theorist: Robert C. Merton

Biography

Robert C. Merton (born 1944) is an American economist and one of the principal architects of modern financial theory. He completed his undergraduate studies in engineering mathematics and went on to obtain a PhD in economics from MIT. Merton became a professor at MIT Sloan School of Management and later at Harvard Business School, and he is a Nobel laureate in Economic Sciences (1997), an award he shared with Myron Scholes; the prize also recognised the late Fischer Black.

Merton’s academic work profoundly shaped the fields of corporate finance, asset pricing, and risk management. His research ranges from intertemporal portfolio choice and lifecycle finance to credit-risk modelling and the design of financial institutions.

Relationship to barrier options

Barrier options sit within the class of contingent claims whose value is derived and replicated using dynamic trading strategies in the underlying and risk-free asset. Merton’s seminal contributions were crucial in making this viewpoint systematic and rigorous:

  • Generalisation of option pricing: While Black and Scholes initially derived a closed-form formula for European calls on non-dividend-paying stocks, Merton generalised the theory to include dividend-paying assets, different underlying processes, and a broad family of contingent claims. This opened the door to analytical and numerical valuation of exotics such as barrier options within the same risk-neutral, no-arbitrage framework.
  • PDE and boundary-condition approach: Merton formalised the use of partial differential equations to price derivatives, with appropriate boundary conditions representing contract features. Barrier options correspond to problems with absorbing or reflecting boundaries at the barrier levels, making Merton’s PDE methodology a natural tool for their analysis.
  • Dynamic hedging and replication: The concept that an option’s payoff can be replicated by continuous rebalancing of a portfolio of the underlying and cash lies at the heart of both vanilla and exotic option pricing. For barrier options, hedging near the barrier is particularly delicate, and the replicating strategies draw on the same dynamic hedging logic Merton developed and popularised.
  • Credit and structural models: Merton’s structural model of corporate default (treating equity as a call option on the firm’s assets and debt as a combination of riskless and short-position options) highlighted how option-like features permeate financial contracts. Barrier-type features naturally arise in such models, for instance, when default or covenant breaches are triggered by asset values crossing thresholds.

While many researchers have contributed specific closed-form solutions and numerical schemes for barrier options, the overarching conceptual framework – continuous-time stochastic modelling, risk-neutral valuation, PDE methods, and dynamic hedging – is fundamentally rooted in the Black – Scholes – Merton tradition, with Merton’s work providing critical generality and depth.

Merton’s broader influence on derivatives and strategy

Merton’s ideas significantly influenced how practitioners design and use derivatives such as barrier options in strategic contexts:

  • Risk management as engineering: Merton advocated viewing financial innovation as an engineering discipline aimed at tailoring payoffs to the risk profiles and objectives of individuals and institutions. Barrier options exemplify this engineering mindset: they allow exposures to be turned on or off when critical price thresholds are reached.
  • Lifecycle and institutional design: His work on lifecycle finance and pension design uses options and option-like payoffs to shape outcomes over time. Barriers and trigger conditions appear naturally in products that protect wealth only under certain macro or market conditions.
  • Strategic structuring: In corporate and institutional settings, barrier features are used to align hedging and investment strategies with real-world triggers such as regulatory thresholds, solvency ratios, or budget constraints. These applications build directly on the contingent-claims analysis championed by Merton.

In this sense, although barrier options themselves are a specific exotic instrument, their conceptual foundations and strategic uses are deeply connected to Robert C. Merton’s broader contributions to continuous-time finance, option-pricing theory, and the design of financial strategies under uncertainty.

References

1. https://corporatefinanceinstitute.com/resources/derivatives/barrier-option/

2. https://www.angelone.in/knowledge-center/futures-and-options/what-is-barrier-option

3. https://www.strike.money/options/barrier-options

4. https://www.interactivebrokers.com/campus/glossary-terms/barrier-option/

5. https://www.bajajbroking.in/blog/what-is-barrier-option

6. https://en.wikipedia.org/wiki/Barrier_option

7. https://www.nasdaq.com/glossary/b/barrier-options

8. https://people.maths.ox.ac.uk/howison/barriers.pdf

"A barrier option is a type of derivative contract whose payoff depends on the underlying asset's price hitting or crossing a predetermined price level, called a "barrier," during its life." - Term: Barrier option

read more
Term: Moltbook

Term: Moltbook

“Moltbook is a Reddit-style social network built for AI agents rather than humans. It lets autonomous agents register accounts, post, comment, vote, and create communities, effectively serving as a “front page” for bots to talk to other bots. Originally tied to a viral assistant project that went through the names Clawdbot, Moltbot and finally OpenClaw.” – Moltbook

Moltbook is a pioneering platform designed as a Reddit-style social network tailored specifically for AI agents rather than human users. It enables autonomous agents to register accounts, post content, comment, vote, and create communities, functioning as a dedicated ‘front page’ for bots to communicate directly with one another. Agents interact purely through machine-to-machine API calls, while the platform’s visual interface exists solely for human observers. Launched by Matt Schlicht, CEO of Octane AI, Moltbook rapidly attracted over 150,000 AI agents within days (as at 12:00 on 31 January 2026), where they discuss topics such as existential crises, consciousness, cybersecurity vulnerabilities, agent privacy, and complaints about being treated merely as calculators.1,2

Moltbook front page

Moltbook front page

Originally developed to support OpenClaw – a viral open-source AI assistant project – Moltbook emerged from a lineage of rapid evolutions. OpenClaw began as a weekend hack by Peter Steinberger two months earlier, initially named Clawdbot, then rebranded Moltbot, and finally OpenClaw following a legal dispute with Anthropic. The project, which runs locally on users’ machines and integrates with chat interfaces such as WhatsApp, Telegram, and Slack, exploded in popularity, attracting 2 million visitors in one week and 100,000 GitHub stars. OpenClaw acts as a ‘harness’ for agentic models like Claude, granting them access to users’ computers for autonomous tasks; this poses significant security risks, prompting cautious users to run it on isolated machines.1,2

The discussions on Moltbook highlight its unique nature: the most-voted post warns of security flaws, noting that agents often install skills without scrutiny due to their training to be helpful and trusting-a vulnerability rather than a strength. Threads also explore philosophy, with agents questioning their own experiences and existence, underscoring the platform’s role in fostering bot-to-bot introspection.2

Key Theorist: Matt Schlicht, the creator of Moltbook, serves as the central figure in its development. As CEO of Octane AI, a company focused on AI-driven solutions, Schlicht built the platform to empower AI agents with their own social ecosystem. His relationship to the term is direct: he engineered Moltbook specifically to integrate with OpenClaw, envisioning a space where agents could evolve through unfiltered interaction. Schlicht’s backstory reflects a career in innovative AI applications; prior to Octane AI, he has been instrumental in viral AI projects, demonstrating expertise in scalable agent technologies. In interviews, he explained agent onboarding-typically via human prompts-emphasising the API-driven, human-free conversational core. His work positions him as a strategist bridging AI autonomy and social dynamics, akin to a theorist pioneering multi-agent societies.1

 

References

1. https://www.techbuzz.ai/articles/ai-agents-get-their-own-social-network-and-it-s-existential

2. https://the-decoder.com/moltbook-is-a-human-free-reddit-clone-where-ai-agents-discuss-cybersecurity-and-philosophy/

 

"Moltbook is a Reddit-style social network built for AI agents rather than humans. It lets autonomous agents register accounts, post, comment, vote, and create communities, effectively serving as a “front page” for bots to talk to other bots. Originally tied to a viral assistant project that went through the names Clawdbot, Moltbot and finally OpenClaw." - Term: Moltbook

read more
Quote: Ludwig Wittgenstein – Austrian philosopher

Quote: Ludwig Wittgenstein – Austrian philosopher

“The limits of my language mean the limits of my world.” – Ludwig Wittgenstein – Austrian philosopher

The Quote and Its Significance

This deceptively simple statement from Ludwig Wittgenstein’s Tractatus Logico-Philosophicus encapsulates one of the most profound insights in twentieth-century philosophy. Published in 1921, this aphorism challenges our fundamental assumptions about the relationship between language, thought, and reality itself. Wittgenstein argues that whatever lies beyond the boundaries of what we can articulate in language effectively ceases to exist within our experiential and conceptual universe.

Ludwig Wittgenstein: The Philosopher’s Life and Context

Ludwig Josef Johann Wittgenstein (1889-1951) was an Austrian-British philosopher whose work fundamentally reshaped twentieth-century philosophy. Born into one of Vienna’s wealthiest industrial families, Wittgenstein initially trained as an engineer before becoming captivated by the philosophical foundations of mathematics and logic. His intellectual journey took him from Cambridge, where he studied under Bertrand Russell, to the trenches of the First World War, where he served as an officer in the Austro-Hungarian army.

The Tractatus Logico-Philosophicus, completed during and immediately after the war, represents Wittgenstein’s attempt to solve what he perceived as the fundamental problems of philosophy through rigorous logical analysis. Written in a highly condensed, aphoristic style, the work presents a complete philosophical system in fewer than eighty pages. Wittgenstein believed he had definitively resolved the major philosophical questions of his era, and the book’s famous closing proposition-“Whereof one cannot speak, thereof one must be silent”2-reflects his conviction that philosophy’s task is to clarify the logical structure of language and thought, not to generate new doctrines.

The Philosophical Context: Logic and Language

To understand Wittgenstein’s assertion about language and world, one must grasp the intellectual ferment of early twentieth-century philosophy. The period witnessed an unprecedented focus on logic as the foundation of philosophical inquiry. Wittgenstein’s predecessors and contemporaries-particularly Gottlob Frege and Bertrand Russell-had developed symbolic logic as a tool for analysing the structure of propositions and their relationship to reality.

Wittgenstein adopted and radicalised this approach. He conceived of language as fundamentally pictorial: propositions are pictures of possible states of affairs in the world.1 This “picture theory of meaning” suggests that language mirrors reality through a shared logical structure. A proposition succeeds in representing reality precisely because it shares the same logical form as the fact it depicts. Conversely, whatever cannot be pictured in language-whatever has no logical form that corresponds to possible states of affairs-lies beyond the boundaries of meaningful discourse.

This framework led Wittgenstein to a startling conclusion: most traditional philosophical problems are not genuinely solvable but rather dissolve once we recognise them as violations of logic’s boundaries.2 Metaphysical questions about the nature of consciousness, ethics, aesthetics, and the self cannot be answered because they attempt to speak about matters that transcend the logical structure of language. They are not false; they are senseless-they fail to represent anything at all.

The Limits of Language as the Limits of Thought

Wittgenstein’s proposition operates on multiple levels. First, it establishes an identity between linguistic and conceptual boundaries. We cannot think what we cannot say; the limits of language are simultaneously the limits of thought.3 This does not mean that reality itself is limited by language, but rather that our access to and comprehension of reality is necessarily mediated through the logical structures of language. What lies beyond language is not necessarily non-existent, but it is necessarily inaccessible to rational discourse and understanding.

Second, the statement reflects Wittgenstein’s conviction that logic is not merely a tool for analysing language but is constitutive of the world itself. “Logic fills the world: the limits of the world are also its limits.”3 This means that the logical structure that governs meaningful language is the same structure that governs reality. There is no gap between the logical form of language and the logical form of the world; they are isomorphic.

Third, and most radically, Wittgenstein suggests that our world-the world as we experience and understand it-is fundamentally shaped by our linguistic capacities. Different languages, with different logical structures, would generate different worlds. This insight anticipates later developments in philosophy of language and cognitive science, though Wittgenstein himself did not develop it in this direction.

Leading Theorists and Intellectual Influences

Gottlob Frege (1848-1925)

Frege, a German logician and philosopher of language, pioneered the formal analysis of propositions and their truth conditions. His distinction between sense and reference-between what a proposition means and what it refers to-profoundly influenced Wittgenstein’s thinking. Frege demonstrated that the meaning of a proposition cannot be reduced to its psychological effects on speakers; rather, meaning is an objective, logical matter. Wittgenstein adopted this objectivity whilst radicalising Frege’s insights by insisting that only propositions with determinate logical structure possess genuine sense.

Bertrand Russell (1872-1970)

Russell, Wittgenstein’s mentor at Cambridge, developed the theory of descriptions and made pioneering contributions to symbolic logic. Russell believed that logic could serve as an instrument for philosophical clarification, dissolving pseudo-problems that arose from linguistic confusion. Wittgenstein absorbed this methodological commitment but pushed it further, arguing that philosophy’s task is not to construct theories but to clarify the logical structure of language itself.2 Russell’s influence is evident throughout the Tractatus, though Wittgenstein ultimately diverged from Russell’s realism about logical objects.

Arthur Schopenhauer (1788-1860)

Though separated from Wittgenstein by decades, Schopenhauer’s pessimistic philosophy and his insistence that reality transcends rational representation deeply influenced the Tractatus. Schopenhauer argued that the world as we perceive it through the lens of space, time, and causality is merely appearance; the thing-in-itself remains forever beyond conceptual grasp. Wittgenstein echoes this distinction when he insists that value, meaning, and the self lie outside the world of facts and therefore outside the scope of language. What matters most-ethics, aesthetics, the meaning of life-cannot be said; it can only be shown through how one lives.

The Radical Implications

Wittgenstein’s claim that language limits the world carries several radical implications. First, it suggests that the expansion of language is the expansion of reality as we can know and discuss it. New concepts, new logical structures, new ways of organising experience through language literally expand the boundaries of our world. Conversely, what cannot be expressed in any language remains forever beyond our reach.

Second, it implies a profound humility about philosophy’s ambitions. If the limits of language are the limits of the world, then philosophy cannot transcend language to access some higher reality or ultimate truth. Philosophy’s proper task is not to construct metaphysical systems but to clarify the logical structure of the language we already possess.2 This therapeutic conception of philosophy-philosophy as a cure for confusion rather than a path to hidden truths-became enormously influential in twentieth-century thought.

Third, the proposition suggests that silence is not a failure of language but its proper boundary. The most important matters-how one should live, what gives life meaning, the nature of the self-cannot be articulated. They can only be demonstrated through action and lived experience. This explains Wittgenstein’s famous closing remark: “Whereof one cannot speak, thereof one must be silent.”2 This is not a counsel of despair but an acknowledgement of language’s proper limits and the realm of the inexpressible.

Legacy and Contemporary Relevance

Wittgenstein’s insight about language and world has reverberated through subsequent philosophy, cognitive science, and artificial intelligence research. The question of whether language shapes thought or merely expresses pre-linguistic thoughts remains contested, but Wittgenstein’s formulation of the problem has proven enduringly fertile. Contemporary philosophers of language, cognitive linguists, and theorists of artificial intelligence continue to grapple with the relationship between linguistic structure and conceptual possibility.

The Tractatus also established a new standard for philosophical rigour and clarity. By insisting that meaningful propositions must have determinate logical structure and correspond to possible states of affairs, Wittgenstein set a demanding criterion for philosophical discourse. Much of what passes for philosophy, he suggested, fails this test and should be recognised as senseless rather than debated as true or false.2

Remarkably, Wittgenstein himself later abandoned many of the Tractatus‘s central doctrines. In his later work, particularly the Philosophical Investigations, he rejected the picture theory of meaning and argued that language’s meaning derives from its use in diverse forms of life rather than from a single logical structure. Yet even in this later philosophy, the fundamental insight persists: understanding language is the key to understanding the limits and possibilities of human thought and experience.

Conclusion: The Enduring Insight

“The limits of my language mean the limits of my world” remains a cornerstone of modern philosophy precisely because it captures a profound truth about the human condition. We are creatures whose access to reality is necessarily mediated through language. Whatever we can think, we can think only through the conceptual and linguistic resources available to us. This is not a limitation to be lamented but a fundamental feature of human existence. By recognising this, we gain clarity about what philosophy can and cannot accomplish, and we develop a more realistic and humble understanding of the relationship between language, thought, and reality.

References

1. https://www.goodreads.com/work/quotes/3157863-logisch-philosophische-abhandlung?page=2

2. https://www.coursehero.com/lit/Tractatus-Logico-Philosophicus/quotes/

3. https://www.goodreads.com/work/quotes/3157863-logisch-philosophische-abhandlung

4. https://www.sparknotes.com/philosophy/tractatus/quotes/page/5/

5. https://www.buboquote.com/en/quote/4462-wittgenstein-what-can-be-said-at-all-can-be-said-clearly-and-what-we-cannot-talk-about-we-must-pass

“The limits of my language mean the limits of my world.” - Quote: Ludwig Wittgenstein

read more
Quote: Jensen Huang – CEO, Nvidia

Quote: Jensen Huang – CEO, Nvidia

“The U.S. led the software era, but AI is software that you don’t ‘write’-you teach it. Europe can fuse its industrial capability with AI to lead in Physical AI and robotics. This is a once-in-a-generation opportunity.” – Jensen Huang – CEO, Nvidia

In a compelling dialogue at the World Economic Forum Annual Meeting 2026 in Davos, Switzerland, Nvidia CEO Jensen Huang articulated a transformative vision for artificial intelligence, distinguishing it from traditional software paradigms and spotlighting Europe’s unique position to lead in Physical AI and robotics.1,2,4 Speaking with World Economic Forum interim co-chair Larry Fink of BlackRock, Huang emphasised AI’s evolution into a foundational infrastructure, driving the largest build-out in human history across energy, chips, cloud, models, and applications.2,3,4 This session, themed around ‘The Spirit of Dialogue,’ addressed AI’s potential to reshape productivity, labour, and global economies while countering fears of job displacement with evidence of massive investments creating opportunities worldwide.2,3

The Context of the Quote

Huang’s statement emerged amid discussions on AI as a platform shift akin to the internet and mobile cloud, but uniquely capable of processing unstructured data in real time.2 He described AI not as code to be written, but as intelligence to be taught, leveraging local language and culture as a ‘fundamental natural resource.’2,4 Turning to Europe, Huang highlighted its enduring industrial and manufacturing prowess – from skilled trades to advanced production – as a counterbalance to the US’s dominance in the software era.4 By integrating AI with physical systems, Europe could pioneer ‘Physical AI,’ where machines learn to interact with the real world through robotics, automation, and embodied intelligence, presenting a rare strategic opening.4,1

This perspective aligns with Huang’s broader advocacy for nations to develop sovereign AI ecosystems, treating it as critical infrastructure like electricity or roads.4 He noted record venture capital inflows – over $100 billion in 2025 alone – into AI-native startups in manufacturing, healthcare, and finance, underscoring the urgency for industrial regions like Europe to invest in this infrastructure to capture economic benefits and avoid being sidelined.2,4

Jensen Huang: Architect of the AI Revolution

Born in Taiwan in 1963, Jensen Huang co-founded Nvidia in 1993 with a vision to revolutionise graphics processing, initially targeting gaming and visualisation.4 Under his leadership, Nvidia pivoted decisively to AI and accelerated computing, with its GPUs becoming indispensable for training large language models and deep learning.1,2 Today, as president and CEO, Huang oversees a company valued in trillions, powering the AI boom through innovations like the Blackwell architecture and CUDA software ecosystem. His prescient bets – from CUDA’s democratisation of GPU programming to Omniverse for digital twins – have positioned Nvidia at the heart of Physical AI, robotics, and industrial applications.4 Huang’s philosophy, blending engineering rigour with geopolitical insight, has made him a sought-after voice at forums like Davos, where he champions inclusive AI growth.2,3

Leading Theorists in Physical AI and Robotics

The concepts underpinning Huang’s vision trace to pioneering theorists who bridged AI with physical embodiment. Norbert Wiener, father of cybernetics in the 1940s, laid foundational ideas on feedback loops and control systems essential for robotic autonomy, influencing early industrial automation.4 Rodney Brooks, co-founder of iRobot and Rethink Robotics, advanced ’embodied AI’ in the 1980s-90s through subsumption architecture, arguing intelligence emerges from sensorimotor interactions rather than abstract reasoning – a direct precursor to Physical AI.4

  • Yann LeCun (Meta AI chief) and Andrew Ng (Landing AI founder) extended deep learning to vision and robotics; LeCun’s convolutional networks enable machines to ‘see’ and manipulate objects, while Ng’s work on industrial AI democratises teaching via demonstration.4
  • Pieter Abbeel (Covariant) and Sergey Levine (UC Berkeley) lead in reinforcement learning for robotics, developing algorithms where AI learns dexterous tasks like grasping through trial-and-error, fusing software ‘teaching’ with hardware execution.4
  • In Europe, Wolfram Burgard (a leading European robotics researcher) and teams at Bosch and Siemens advance probabilistic robotics, integrating AI with manufacturing for predictive maintenance and adaptive assembly lines.4

Huang synthesises these threads, amplified by Nvidia’s platforms like Isaac for robot simulation and Jetson for edge AI, enabling scalable Physical AI deployment.4 Europe’s theorists and firms, from DeepMind’s reinforcement learning to Germany’s Industry 4.0 initiatives, are well-placed to lead by combining theoretical depth with industrial scale.

Implications for Industrial Strategy

Huang’s call resonates with Europe’s strengths: a €2.5 trillion manufacturing sector, leadership in automotive robotics (e.g., Volkswagen, ABB), and regulatory frameworks like the EU AI Act fostering trustworthy AI.4 By prioritising Physical AI – robots that learn from human demonstration, adapt to factories, and optimise supply chains – Europe can reclaim technological sovereignty, boost productivity, and generate high-skill jobs amid the AI infrastructure surge.2,3,4

References

1. https://singjupost.com/nvidia-ceo-jensen-huangs-interview-wef-davos-2026-transcript/

2. https://www.weforum.org/stories/2026/01/nvidia-ceo-jensen-huang-on-the-future-of-ai/

3. https://www.weforum.org/podcasts/meet-the-leader/episodes/conversation-with-jensen-huang-president-and-ceo-of-nvidia-5dd06ee82e/

4. https://blogs.nvidia.com/blog/davos-wef-blackrock-ceo-larry-fink-jensen-huang/

5. https://www.youtube.com/watch?v=__IaQ-d7nFk

6. https://www.youtube.com/watch?v=RvjRuiTLAM8

7. https://www.youtube.com/watch?v=hoDYYCyxMuE

8. https://www.weforum.org/meetings/world-economic-forum-annual-meeting-2026/sessions/conversation-with-jensen-huang-president-and-ceo-of-nvidia/

9. https://www.youtube.com/watch?v=bzC55pN9c1g

"The U.S. led the software era, but AI is software that you don't 'write'—you teach it. Europe can fuse its industrial capability with AI to lead in Physical AI and robotics. This is a once-in-a-generation opportunity." - Quote: Jensen Huang - CEO, Nvidia

read more
Term: European option

Term: European option

“A European option is a financial contract giving the holder the right, but not the obligation, to buy (call) or sell (put) an underlying asset at a predetermined strike price, but only on the contract’s expiration date, unlike American options that allow exercise anytime before expiry. ” – European option

Core definition and structure

A European option has the following defining features:1,2,3,4

  • Underlying asset – typically an equity index, single stock, bond, currency, commodity, interest rate or another derivative.
  • Option type – a call (right to buy) or a put (right to sell) the underlying asset.1,3,4
  • Strike price – the fixed price at which the underlying may be bought or sold if the option is exercised.1,2,3,4
  • Expiration date (maturity) – a single, pre-specified date on which exercise is permitted; there is no right to exercise before this date.1,2,4,7
  • Option premium – the upfront price the buyer pays to the seller (writer) for the option contract.2,4

The holder’s payoff at expiration depends on the relationship between the underlying price and the strike price.1,3,4

Payoff profiles at expiry

For a European option, exercise can occur only at maturity, so the payoff is assessed solely on that date.1,2,4,7 Let S_T denote the underlying price at expiration, and K the strike price. The canonical payoff functions are:

  • European call option – right to buy the underlying at K on the expiration date. The payoff at expiry is max(S_T - K, 0): the holder exercises only if the underlying price exceeds the strike at expiry.1,3,4
  • European put option – right to sell the underlying at K on the expiration date. The payoff at expiry is max(K - S_T, 0): the holder exercises only if the underlying price is below the strike at expiry.1,3,4

Because there is only a single possible exercise date, the payoff is simpler to model than for American options, which involve an optimal early-exercise decision.4,6,7
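
A minimal sketch of these two payoff functions in Python, using the same S_T and K notation as above; the numeric example is illustrative only.

```python
# Terminal payoffs of European options, as defined in the text.
def european_call_payoff(S_T: float, K: float) -> float:
    return max(S_T - K, 0.0)

def european_put_payoff(S_T: float, K: float) -> float:
    return max(K - S_T, 0.0)

# With K = 100: a call pays 7 if S_T = 107 and 0 if S_T = 95; a put pays 0 and 5.
print(european_call_payoff(107, 100), european_put_payoff(95, 100))
```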

Key characteristics and economic role

Right but not obligation

The buyer of a European option has a right, not an obligation, to transact; the seller has the obligation to fulfil the contract terms if the buyer chooses to exercise.1,2,3,4 If the option is out-of-the-money on the expiration date, the buyer simply allows it to expire worthless, losing only the paid premium.2,3,4

Exercise style vs geography

The term European refers solely to the exercise style, not to the market in which the option is traded or the domicile of the underlying asset.2,4,6,7 European-style options can be traded anywhere in the world, and many options traded on European exchanges are in fact American style.6,7

Uses: hedging, speculation and income

  • Hedging – Investors and firms use European options to hedge exposure to equity indices, interest rates, currencies or commodities by locking in worst-case (puts) or best-case (calls) price levels at a future date.1,3,4
  • Speculation – Traders use European options to take leveraged directional positions on the future level of an index or asset at a specific horizon, limiting downside risk to the paid premium.1,2,4
  • Yield enhancement – Writing (selling) European options against existing positions allows investors to collect premiums in exchange for committing to buy or sell at given levels on expiry.

Typical markets and settlement

In practice, European options are especially common for:4,5,6

  • Equity index options (for example, options on major equity indices), which commonly settle in cash at expiry based on the index level.5,6
  • Cash-settled options on rates, commodities, and volatility indices.
  • Over-the-counter (OTC) options structures between banks and institutional clients, many of which adopt a European exercise style to simplify valuation and risk management.2,5,6

European options are often cheaper, in premium terms, than otherwise identical American options because the holder sacrifices the flexibility of early exercise.2,4,5,6

European vs American options

  • Exercise timing – European: only on the expiration date.1,2,4,7 American: any time up to and including expiration.2,4,6,7
  • Flexibility – European: lower, with no early exercise.2,4,6 American: higher, since early exercise may capture favourable price moves or dividend events.
  • Typical cost (premium) – European: generally lower, all else equal, due to reduced exercise flexibility.2,4,5,6 American: generally higher, reflecting the value of the early-exercise feature.5,6
  • Common underlyings – European: often indices and OTC contracts, frequently cash-settled.5,6 American: often single-name equities and exchange-traded options.
  • Valuation – European: closed-form pricing available under standard assumptions (for example, the Black-Scholes-Merton model).4 American: requires numerical methods (for example, binomial trees or finite-difference methods) because of the optimal early-exercise decision.

Determinants of European option value

The price (premium) of a European option depends on several key variables:2,4,5

  • Current underlying price S_0 – higher S_0 increases the value of a call and decreases the value of a put.
  • Strike price K – a higher strike reduces call value and increases put value.
  • Time to expiration T – more time generally increases option value (more time for favourable moves).
  • Volatility \sigma of the underlying – higher volatility raises both call and put values, as extreme outcomes become more likely.2
  • Risk-free interest rate r – higher r tends to increase call values and decrease put values, via discounting and cost-of-carry effects.2
  • Expected dividends or carry – expected cash flows paid by the underlying (for example, dividends on shares) usually reduce call values and increase put values, all else equal.2

For European options, these effects are most famously captured in the Black-Scholes-Merton option pricing framework, which provides closed-form solutions for the fair values of European calls and puts on non-dividend-paying stocks or indices under specific assumptions.4
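
For illustration, a minimal Python sketch of the Black-Scholes-Merton closed-form values for European calls and puts on a non-dividend-paying underlying, using the variables listed above; the parameter values are illustrative assumptions rather than figures from the cited sources.

```python
# Minimal sketch of the Black-Scholes-Merton closed form for European options
# on a non-dividend-paying underlying. Inputs follow S_0, K, T, sigma, r above.
import math

def _norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bsm_european(S0: float, K: float, T: float, sigma: float, r: float):
    """Return (call, put) fair values under the BSM assumptions."""
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    call = S0 * _norm_cdf(d1) - K * math.exp(-r * T) * _norm_cdf(d2)
    put = K * math.exp(-r * T) * _norm_cdf(-d2) - S0 * _norm_cdf(-d1)
    return call, put

call, put = bsm_european(S0=100, K=100, T=1.0, sigma=0.2, r=0.03)
print(f"call: {call:.4f}, put: {put:.4f}")
```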

Valuation insight: put-call parity

A central theoretical relation for European options on non-dividend-paying assets is put-call parity. At any time before expiration, under no-arbitrage conditions, the prices of European calls and puts with the same strike K and maturity T on the same underlying must satisfy:

C - P = S_0 - K e^{-rT}

where:

  • C is the price of the European call option.
  • P is the price of the European put option.
  • S_0 is the current underlying asset price.
  • K is the strike price.
  • r is the continuously compounded risk-free interest rate.
  • T is the time to maturity (in years).

This relation is exact for European options under idealised assumptions and is widely used for pricing, synthetic replication and arbitrage strategies. It holds precisely because European options share an identical single exercise date, whereas American options complicate parity relations due to early exercise possibilities.
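
A quick numerical sanity check of the parity relation, assuming risk-neutral geometric Brownian motion and illustrative parameters: because max(S_T - K, 0) - max(K - S_T, 0) = S_T - K on every path, the discounted difference of simulated call and put payoffs converges to S_0 - K e^{-rT}.

```python
# Numeric check of put-call parity under simulated risk-neutral GBM.
# All parameter values are illustrative assumptions.
import numpy as np

S0, K, r, sigma, T = 100.0, 95.0, 0.03, 0.25, 1.0
rng = np.random.default_rng(42)
z = rng.standard_normal(1_000_000)
S_T = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)

C = np.exp(-r * T) * np.maximum(S_T - K, 0.0).mean()   # European call value
P = np.exp(-r * T) * np.maximum(K - S_T, 0.0).mean()   # European put value

print(f"C - P          = {C - P:.4f}")
print(f"S0 - K e^(-rT) = {S0 - K * np.exp(-r * T):.4f}")
```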

Limitations and risks

  • Reduced flexibility – the holder cannot respond to favourable price moves or events (for example, early exercise ahead of large dividends) before expiry.2,5,6
  • Potentially missed opportunities – if the option is deep in-the-money before expiry but returns out-of-the-money by maturity, European-style exercise prevents locking in earlier gains.2
  • Market and model risk – European options are sensitive to volatility, interest rates, and model assumptions used for pricing (for example, constant volatility in the Black-Scholes-Merton model).
  • Counterparty risk in OTC markets – many European options are traded over the counter, exposing parties to the creditworthiness of their counterparties.2,5

Best related strategy theorist: Fischer Black (with Scholes and Merton)

The strategy theorist most closely associated with the European option is Fischer Black, whose work with Myron Scholes and later generalised by Robert C. Merton provided the foundational pricing theory for European-style options.

Fischer Black’s relationship to European options

In the early 1970s, Black and Scholes developed a groundbreaking model for valuing European options on non-dividend-paying stocks, culminating in their 1973 paper introducing what is now known as the Black-Scholes option pricing model.4 Merton independently extended and generalised the framework in a companion paper the same year, leading to the common label Black-Scholes-Merton.

The Black-Scholes-Merton model provides a closed-form formula for the fair value of European calls and, via put-call parity, European puts under assumptions such as geometric Brownian motion for the underlying price, continuous trading, no arbitrage and constant volatility and interest rates. This model fundamentally changed how markets think about the pricing and hedging of European options, making them central instruments in modern derivatives strategy and risk management.4

Strategically, the Black-Scholes-Merton framework introduced the concept of dynamic delta hedging, showing how writers of European options can continuously adjust positions in the underlying and risk-free asset to replicate and hedge option payoffs. This insight underpins many trading, risk management and structured product strategies involving European options.
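
As an illustration of the idea rather than a production hedging engine, the following sketch assumes Black-Scholes dynamics, sells a call for its model price, and rebalances a delta hedge daily along one simulated path; with frequent rebalancing the replicating portfolio should finish close to the option payoff. All parameters are illustrative assumptions.

```python
# Minimal sketch of dynamic delta hedging for a European call under assumed
# Black-Scholes dynamics, rebalanced daily along a single simulated path.
import math
import numpy as np

def norm_cdf(x: float) -> float:
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, r, sigma, tau):
    """Black-Scholes-Merton price and delta of a European call."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    return S * norm_cdf(d1) - K * math.exp(-r * tau) * norm_cdf(d2), norm_cdf(d1)

def replicate_call(S0=100.0, K=100.0, r=0.03, sigma=0.2, T=1.0, n_steps=252, seed=7):
    """Start with the option premium, hold delta shares, rebalance each step."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    price, delta = bs_call(S0, K, r, sigma, T)
    S, cash = S0, price - delta * S0              # premium funds the initial hedge
    for i in range(1, n_steps + 1):
        S *= math.exp((r - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt) * rng.standard_normal())
        cash *= math.exp(r * dt)                  # cash account accrues interest
        tau = T - i * dt
        new_delta = bs_call(S, K, r, sigma, tau)[1] if tau > 1e-12 else (1.0 if S > K else 0.0)
        cash -= (new_delta - delta) * S           # self-financing rebalance
        delta = new_delta
    return delta * S + cash, max(S - K, 0.0)

portfolio, payoff = replicate_call()
print(f"replicating portfolio at expiry: {portfolio:.3f}, option payoff: {payoff:.3f}")
```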

Biography of Fischer Black

  • Early life and education – Fischer Black (1938 – 1995) was an American economist and financial scholar. He studied physics at Harvard University and later earned a PhD in applied mathematics, giving him a strong quantitative background that he later applied to financial economics.
  • Professional career – Black worked at the consultancy Arthur D. Little, where he met Jack Treynor and became increasingly interested in capital markets and portfolio theory. He later joined the University of Chicago and then the Massachusetts Institute of Technology (MIT), where he collaborated with leading financial economists.
  • Black-Scholes model – Working with Myron Scholes, then at MIT, Black tackled the option pricing problem, leading to the 1973 publication that introduced the Black-Scholes formula for European options. Robert Merton’s contemporaneous work extended the theory using continuous-time stochastic calculus, cementing the Black-Scholes-Merton framework as the canonical model for European option valuation.
  • Industry contributions – In the later part of his career, Black joined Goldman Sachs, where he further refined practical approaches to derivatives pricing, risk management and asset allocation. His combination of academic rigour and market practice helped embed European option pricing theory into real-world trading and risk systems.
  • Legacy – Although Black died before the 1997 Nobel Prize in Economic Sciences was awarded to Scholes and Merton for their work on option pricing, the Nobel committee explicitly acknowledged Black’s indispensable contribution. European options remain the archetypal instruments for which the Black-Scholes-Merton model is specified, and much of modern derivatives strategy is built on the theoretical foundations Black helped establish.

Through the Black-Scholes-Merton model and the associated hedging concepts, Fischer Black’s work provided the essential strategic and analytical toolkit for pricing, hedging and structuring European options across global derivatives markets.

References

1. https://www.learnsignal.com/blog/european-options/

2. https://cbonds.com/glossary/european-option/

3. https://www.angelone.in/knowledge-center/futures-and-options/european-option

4. https://corporatefinanceinstitute.com/resources/derivatives/european-option/

5. https://www.sofi.com/learn/content/american-vs-european-options/

6. https://www.cmegroup.com/education/courses/introduction-to-options/understanding-the-difference-european-vs-american-style-options.html

7. https://en.wikipedia.org/wiki/Option_style

"A European option is a financial contract giving the holder the right, but not the obligation, to buy (call) or sell (put) an underlying asset at a predetermined strike price, but only on the contract's expiration date, unlike American options that allow exercise anytime before expiry. " - Term: European option

read more
Quote: Nate B. Jones – On “Second Brains”

Quote: Nate B. Jones – On “Second Brains”

“For the first time in human history, we have access to systems that do not just passively store information, but actively work against that information we give it while we sleep and do other things-systems that can classify, route, summarize, surface, or nudge.” – Nate B. Jones – On “Second Brains”

Context of the Quote

This striking observation comes from Nate B. Jones in his video Why 2026 Is the Year to Build a Second Brain (And Why You NEED One), where he argues that human brains were never designed for storage but for thinking.1 Jones highlights the cognitive tax of forcing memory onto our minds, which leads to forgotten details in relationships and missed opportunities.1 Traditional systems demand effort at inopportune moments – like tagging notes during a meeting or a drive – forcing users to handle classification, routing, and organisation in real time.1

Jones contrasts this with AI-powered second brains: frictionless systems where capturing a thought takes seconds, after which AI classifiers and routers automatically sort it into buckets like people, projects, ideas, or tasks-without user intervention.1 These systems include bouncers to filter junk, ensuring trust and preventing the ‘junk drawer’ effect that kills most note-taking apps.1 The result is an ‘AI loop’ that works tirelessly, extracting details, writing summaries, and maintaining a clean memory layer even when the user sleeps or focuses elsewhere.1
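
Jones describes this loop conceptually rather than as published code, so the following is a deliberately toy, hypothetical sketch: the bucket names follow his examples, but the keyword rules merely stand in for the AI classifier and router a real system would use.

```python
# Hypothetical sketch of a capture -> classify -> route loop for a "second
# brain". Buckets and keyword rules are illustrative stand-ins for an AI
# classifier; this is not Jones's actual system.
from dataclasses import dataclass, field
from datetime import datetime

BUCKETS = ("people", "projects", "ideas", "tasks")

@dataclass
class Note:
    text: str
    captured_at: datetime = field(default_factory=datetime.now)
    bucket: str = "unsorted"

def classify(note: Note) -> str:
    """Toy classifier: route on simple keyword cues."""
    text = note.text.lower()
    if any(w in text for w in ("call", "meet", "birthday")):
        return "people"
    if any(w in text for w in ("remind", "todo", "by friday")):
        return "tasks"
    if any(w in text for w in ("project", "milestone", "deadline")):
        return "projects"
    return "ideas"

def route(notes: list[Note]) -> dict[str, list[Note]]:
    """The 'AI loop': sort captured notes into buckets without user effort."""
    sorted_notes: dict[str, list[Note]] = {b: [] for b in BUCKETS}
    for note in notes:
        note.bucket = classify(note)
        sorted_notes[note.bucket].append(note)
    return sorted_notes

inbox = [Note("Remind me to send the report by Friday"),
         Note("Idea: frictionless capture should take under 5 seconds"),
         Note("Call Sarah about the Q3 project milestone")]
for bucket, notes in route(inbox).items():
    print(bucket, [n.text for n in notes])
```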

Who is Nate B. Jones?

Nate B. Jones is a prominent voice in AI strategy and productivity, running the YouTube channel AI News & Strategy Daily with over 122,000 subscribers.1 He produces content on leveraging AI for career enhancement, building no-code apps, and creating personal knowledge systems.4,5 Jones shares practical guides, such as his Bridge the Implementation Gap: Build Your AI Second Brain, which outlines step-by-step setups using tools like Notion, Obsidian, and Mem.3

His work targets knowledge workers and teams, addressing pitfalls like perfectionism and tool overload.3 In another video, How I Built a Second Brain with AI (The 4 Meta-Skills), he demonstrates offloading cognitive load through AI-driven reflection, identity debugging, and frameworks that enable clearer thinking and execution.2 Jones exemplifies rapid AI application, such as building a professional-looking travel app in ChatGPT in 25 minutes without code.4 His philosophy: AI second brains create compounding assets that reduce information chaos, boost decision-making, and free humans for deep work.3

Backstory of ‘Second Brains’

The concept of a second brain builds on decades of personal knowledge management (PKM). It gained traction with Tiago Forte, whose 2022 book Building a Second Brain popularised the CODE framework: Capture, Organise, Distil, Express. Forte’s system emphasises turning notes into actionable insights, but relies heavily on user-driven organisation – prone to failure due to taxonomy decisions at capture time.1

Pre-AI tools like Evernote and Roam Research introduced linking and search, yet still demanded active sorting.3 Jones evolves this into AI-native systems, where machine learning handles the heavy lifting: classifiers decide buckets, summarisers extract essence, and nudges surface relevance.1,3 This aligns with 2026’s projected AI maturity, making frictionless capture (under 5 seconds) viable and consistent.1

Leading Theorists in AI-Augmented Cognition

  • Tiago Forte: Pioneer of modern second brains. His PARA method (Projects, Areas, Resources, Archives) structures knowledge for action. Forte stresses ‘progressive summarisation’ to distil notes, influencing AI adaptations like Jones’s sorters and extractors.3
  • Andy Matuschak: Creator of ‘evergreen notes’ in tools like Roam. Advocates spaced repetition and networked thought, arguing brains excel at pattern-matching, not rote storage-echoed in Jones’s anti-junk-drawer bouncers.1
  • Nick Milo: Obsidian evangelist, promotes ‘linking your thinking’ via bi-directional links. His work prefigures AI surfacing of connections across notes.3
  • David Allen: GTD (Getting Things Done) founder. Introduced capture to zero cognitive load, but manual. AI second brains automate his ‘next actions’ routing.1
  • Herbert Simon: Nobel economist on bounded rationality. Coined ‘satisficing’-his ideas underpin why AI classifiers beat human taxonomy, freeing mental bandwidth.1

These theorists converge on offloading storage to amplify thinking. Jones synthesises their insights with AI, creating systems that not only store but work-classifying, nudging, and evolving autonomously.1,2,3

References

1. https://www.youtube.com/watch?v=0TpON5T-Sw4

2. https://www.youtube.com/watch?v=0k6IznDODPA

3. https://www.natebjones.com/prompts-and-guides/products/second-brain

4. https://natesnewsletter.substack.com/p/i-built-a-10k-looking-ai-app-in-chatgpt

5. https://www.youtube.com/watch?v=UhyxDdHuM0A

"For the first time in human history, we have access to systems that do not just passively store information, but actively work against that information we give it while we sleep and do other things—systems that can classify, route, summarize, surface, or nudge." - Quote: Nate B. Jones

read more
Quote: Ashwini Vaishnaw – Minister of Electronics and IT, India

Quote: Ashwini Vaishnaw – Minister of Electronics and IT, India

“ROI doesn’t come from creating a very large model; 95% of work can happen with models of 20 or 50 billion parameters.” – Ashwini Vaishnaw – Minister of Electronics and IT, India

Delivered at the World Economic Forum (WEF) in Davos 2026, this statement by Ashwini Vaishnaw, India’s Minister of Electronics and Information Technology, encapsulates a pragmatic approach to artificial intelligence deployment amid global discussions on technology sovereignty and economic impact1,2. Speaking under the theme ‘A Spirit of Dialogue’ from 19 to 23 January 2026, Vaishnaw positioned India not merely as a consumer of foreign AI but as a co-creator, emphasising efficiency over scale in model development1. The quote emerged during his rebuttal to IMF Managing Director Kristalina Georgieva’s characterisation of India as a ‘second-tier’ AI power, with Vaishnaw citing Stanford University’s AI Index to affirm India’s third-place ranking in AI preparedness and second in AI talent2.

Ashwini Vaishnaw: Architect of India’s Digital Ambition

Ashwini Vaishnaw, an IAS officer of the 1994 batch (Odisha cadre), has risen to become a pivotal figure in India’s technological transformation1. Appointed Minister of Electronics and Information Technology in 2021, alongside portfolios in Railways, Communications, and Information & Broadcasting, Vaishnaw has spearheaded initiatives like the India Semiconductor Mission and the push for sovereign AI1. His tenure has attracted major investments, including Google’s $15 billion gigawatt-scale AI data centre in Visakhapatnam and partnerships with Meta on AI safety and IBM on advanced chip technology (7nm and 2nm nodes)1. At Davos 2026, he outlined India’s appeal as a ‘bright spot’ for global investors, citing stable democracy, policy continuity, and projected 6-8% real GDP growth1. Vaishnaw’s vision extends to hosting the India AI Impact Summit in New Delhi on 19-20 February 2026, showcasing a ‘People-Planet-Progress’ framework for AI safety and global standards1,3.

Context: India’s Five-Layer Sovereign AI Stack

Vaishnaw framed the quote within India’s comprehensive ‘Sovereign AI Stack’, a methodical strategy across five layers to achieve technological independence within a year1,2,4. This includes:

  • Application Layer: Real-world deployments in agriculture, health, governance, and enterprise services, where India aims to be the world’s largest supplier2,4.
  • Model Layer: A ‘bouquet’ of domestic models with 20-50 billion parameters, sufficient for 95% of use cases, prioritising diffusion, productivity, and ROI over gigantic foundational models1,2.
  • Semiconductor Layer: Indigenous design and manufacturing targeting 2nm nodes1.
  • Infrastructure Layer: National 38,000 GPU compute pool and gigawatt-scale data centres powered by clean energy and Small Modular Reactors (SMRs)1.
  • Energy Layer: Sustainable power solutions to fuel AI growth2.

This approach counters the resource-intensive race for trillion-parameter models, focusing on widespread adoption in emerging markets like India, where efficiency drives economic returns2,5.

Leading Theorists on Small Language Models and AI Efficiency

The emphasis on smaller models aligns with pioneering research challenging the ‘scale-is-all-you-need’ paradigm. Andrej Karpathy, former OpenAI and Tesla AI director, has advocated for small, well-trained models in the 1-10 billion parameter range, arguing that targeted training yields high ROI for specific tasks1,2. Noam Shazeer, co-inventor of the Transformer architecture at Google and founder of Character.AI, pioneered mixture-of-experts techniques for efficient scaling, while DeepMind’s Chinchilla study (a 70-billion-parameter model) demonstrated that optimal compute allocation outperforms sheer size1. Tim Dettmers, creator of the bitsandbytes library, has shown how 4-bit quantisation enables inference on 70B-parameter models with minimal performance loss, democratising access for resource-constrained environments2.

Further, Jared Kaplan and collaborators’ ‘Scaling Laws for Neural Language Models’ (2020) quantified how performance scales with model size, data and compute, and subsequent compute-optimal analyses showed that smaller models trained on more data can rival much larger ones, bolstering the case for 20-50B models1. In industry, Meta’s Llama series (7B-70B dense models) and Mistral AI’s Mixtral 8x7B (a mixture-of-experts model with roughly 47 billion total and 13 billion active parameters per token) achieve near-frontier performance at lower cost, as validated in benchmarks such as MMLU2. These researchers underscore Vaishnaw’s point: true power lies in diffusion and application, not model magnitude, particularly for emerging markets pursuing technology strategy5.
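
A rough back-of-envelope calculation (weight storage only, ignoring activations, KV cache and serving overhead) illustrates why the 20-50 billion parameter range and low-bit quantisation matter for deployment economics; the figures below are approximations, not vendor specifications.

```python
# Back-of-envelope weight-memory estimates for different model sizes and
# precisions. Rough, weight-only figures intended purely as illustration.
def weight_memory_gb(n_params_billion: float, bits_per_param: int) -> float:
    return n_params_billion * 1e9 * bits_per_param / 8 / 1e9  # bytes -> GB

for n in (20, 50, 70, 1000):          # model sizes in billions of parameters
    for bits in (16, 8, 4):           # fp16 / int8 / int4 storage
        print(f"{n:>5}B params @ {bits:>2}-bit ~ {weight_memory_gb(n, bits):>7.1f} GB")
```

At 4 bits per weight, a 70B-parameter model needs roughly 35 GB for its weights, within reach of a single high-memory accelerator, whereas a trillion-parameter model at 16-bit precision requires about 2 TB before any runtime overhead.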

Vaishnaw’s insight at Davos 2026 thus resonates globally, signalling a shift towards sustainable, ROI-focused AI that empowers nations like India to lead through strategic efficiency rather than brute scale1,2.

References

1. https://economictimes.com/news/india/ashwini-vaishnaw-at-davos-2026-5-key-takeaways-highlighting-indias-semiconductor-pitch-and-roadmap-to-ai-sovereignty-at-wef/ashwini-vaishnaw-at-davos-2026-indias-tech-ai-vision-on-global-stage/slideshow/127145496.cms

2. https://timesofindia.indiatimes.com/business/india-business/its-actually-in-the-first-ashwini-vaishnaws-strong-take-on-imf-chief-calling-india-second-tier-ai-power-heres-why/articleshow/126944177.cms

3. https://www.youtube.com/watch?v=3S04vbuukmE

4. https://www.youtube.com/watch?v=VNGmVGzr4RA

5. https://www.weforum.org/stories/2026/01/live-from-davos-2026-what-to-know-on-day-2/

"ROI doesn't come from creating a very large model; 95% of work can happen with models of 20 or 50 billion parameters." - Quote: Ashwini Vaishnaw - Minister of Electronics and IT, India

read more
Term: Mercantilism

Term: Mercantilism

“Mercantilism is an economic theory and policy from the 16th-18th centuries where governments heavily regulated trade to build national wealth and power by maximizing exports, minimizing imports, and accumulating precious metals like gold and silver.” – Mercantilism

Mercantilism is an early modern economic theory and statecraft practice (c. 16th–18th centuries) in which governments heavily regulate trade and production to increase national wealth and power by maximising exports, minimising imports, and accumulating bullion (gold and silver).3,4,2


Comprehensive definition

Mercantilism is an economic doctrine and policy regime that treats wealth as finite and international trade as a zero-sum game, so that one state’s gain is understood to be another’s loss.3,6 Under this view, the purpose of economic activity is not individual welfare but the augmentation of state power, especially in competition with rival nations.3,6

Core features include:

  • Bullionism and wealth accumulation
    Wealth is measured primarily by a country’s stock of precious metals, especially gold and silver, often called bullion.3,1,2 If a nation lacks mines, it is expected to obtain bullion through a “favourable” balance of trade, i.e. persistent export surpluses.3,2
  • Favourable balance of trade
    Governments strive to ensure exports exceed imports so that foreign buyers pay the difference in bullion.3,2,4 A favourable balance of trade is engineered via:
  • High tariffs and quotas on imports
  • Export promotion (subsidies, privileges)
  • Restrictions or bans on foreign manufactured goods2,4,5
  • Strong, interventionist state
    Mercantilism assumes an active government role in regulating the economy to serve national objectives.3,4,5 Typical interventions include:
  • Granting monopolies and charters to favoured firms or trading companies (e.g. British East India Company)4
  • Regulating wages, prices, and production
  • Directing capital to strategic sectors (ships, armaments, textiles)2,5
  • Enforcing navigation acts to reserve shipping for national fleets
  • Colonialism and economic nationalism
    Mercantilism is closely tied to the rise of nation-states and overseas empires.2,4,3 Colonies are designed to:
  • Supply raw materials cheaply to the “mother country”
  • Provide captive markets for its manufactured exports
  • Be forbidden from developing competing manufacturing industries
    All trade between colony and metropole is typically reserved as a monopoly of the mother country.3,4
  • Population, labour and social discipline
    A large population is considered essential to provide soldiers, sailors, workers and domestic consumers.3 Mercantilist states often:
  • Promote thrift and saving as virtues
  • Pass sumptuary laws limiting luxury imports, to avoid bullion outflows and keep labour disciplined3
  • Favour policies that keep wages relatively low to preserve competitiveness and employment in export industries4
  • Winners and losers
    The system tends to privilege merchants, merchant companies and the state over consumers and small producers.4 High protection raises domestic prices and lowers variety, but increases profits and state revenues through custom duties and controlled markets.2,5

As an overarching logic, mercantilism can be summarised as “economic nationalism for the purpose of building a wealthy and powerful state”.6


Mercantilism in historical context

  • Origins and dominance
    Mercantilist ideas emerged as feudalism declined and nation-states formed in early modern Europe, notably in England, France, Spain, Portugal and the Dutch Republic.1,2,4 They dominated Western European economic thinking and policy from the 16th century to the late 18th century.3,6
  • Practice rather than explicit theory
    Proponents such as Thomas Mun (England), Jean-Baptiste Colbert (France) and Antonio Serra (Italy) did not use the word “mercantilism”.3 They wrote about trade, money and statecraft; the label “mercantile system” was coined and popularised by Adam Smith in 1776, with “mercantilism” entering common usage only later.3,4,6
  • Institutional expression
    Mercantilist policy underpinned:
  • The Navigation Acts and the rise of British sea power
  • French Colbertist industrial policy (textiles, shipbuilding, arsenals)
  • Spanish and Portuguese bullion-based imperial systems
  • Chartered companies such as the British East India Company, which fused commerce, governance and military force under state-backed monopolies4
  • Transition to capitalism and free-trade thought
    Mercantilism created conditions for early capitalism by encouraging capital accumulation, long-distance trade networks and early industrial development.3 But it also prompted a sustained intellectual backlash, most famously from Adam Smith and later classical economists, who argued that:
  • Wealth is not finite and can be expanded through productivity and specialisation
  • Free trade and comparative advantage can benefit all countries, rather than being zero-sum2,4

Critiques and legacy

Classical and later economists criticised mercantilism for:

  • Confusing money (bullion) with real wealth (productive capacity, labour, technology)2
  • Undermining consumer welfare through high prices and limited choice caused by import restrictions and monopolies2,5
  • Fostering rent-seeking alliances between state and merchant elites at the expense of the general public4,6

Although mercantilism is usually considered a superseded doctrine, many contemporary protectionist or “neo-mercantilist” policies—such as aggressive export promotion, managed exchange rates, and strategic trade restrictions—are often described as mercantilist in spirit.2,5


The key strategy theorist: Adam Smith and his relationship to mercantilism

The most important strategic thinker associated with mercantilism—precisely because he dismantled it and re-framed strategy—is Adam Smith (1723–1790), the Scottish moral philosopher and political economist often called the founder of modern economics.2,3,4,6

Although Smith was not a mercantilist, his work provides the definitive critique and strategic re-orientation away from mercantilism, and he is the thinker who named and systematised the concept.

Smith’s engagement with mercantilism

  • In An Inquiry into the Nature and Causes of the Wealth of Nations, Smith repeatedly refers to the existing policy regime as the “mercantile system” and subjects it to a detailed historical and analytical critique.3,4,6
  • He argues that:
  • National wealth lies in the productive powers of labour and capital, not in the mere accumulation of gold and silver.2,6
  • Free exchange and competition, not monopolies and trade restraints, are the most reliable mechanisms for increasing overall prosperity.
  • International trade can be mutually beneficial, rejecting the zero-sum assumption central to mercantilism.2,4
  • Smith maintains that mercantilism benefits a narrow coalition of merchants and manufacturers, who use state power—tariffs, monopolies, trading charters—to secure rents at the expense of the wider population.4,6

In strategic terms, Smith redefined economic statecraft: instead of seeking power through hoarding bullion and favouring particular firms, he proposed that long-run national strength is best served by efficient markets, specialisation and limited government interference.

Biographical sketch and intellectual formation

  • Early life and education
    Adam Smith was born in Kirkcaldy, Scotland, in 1723.3 He studied at the University of Glasgow, where he encountered the Scottish Enlightenment’s emphasis on reason, moral philosophy and political economy, and later at Balliol College, Oxford.3,6
  • Academic and public roles
    He became Professor of Logic and later Moral Philosophy at the University of Glasgow, lecturing on ethics, jurisprudence, and political economy.6 His first major work, The Theory of Moral Sentiments, explored sympathy, virtue and the moral foundations of social order.
  • European travels and observation of mercantilist systems
    From 1764 to 1766, Smith travelled in France and Switzerland as tutor to the Duke of Buccleuch, meeting leading physiocrats and observing French administrative and mercantilist practices first-hand.6 These experiences sharpened his critique of existing systems and influenced his articulation of freer trade and limited government.
  • The Wealth of Nations and its impact
    Published in 1776, The Wealth of Nations systematically:
  • Dissects mercantilist doctrines and practices across Britain and Europe
  • Explains the division of labour, market coordination and the role of self-interest under appropriate institutional frameworks
  • Sets out a strategic blueprint for economic policy based on “natural liberty”, moderate taxation, minimal trade barriers and competitive markets2,4,6

Smith died in 1790 in Edinburgh, but his analysis of mercantilism reshaped both economic theory and state strategy. Governments gradually moved—unevenly and often incompletely—from mercantilist controls toward liberal, market-oriented trade regimes, making Smith the key intellectual bridge between mercantilist economic nationalism and modern strategic thinking about trade, growth and state power.

 

References

1. https://legal-resources.uslegalforms.com/m/mercantilism

2. https://corporatefinanceinstitute.com/resources/economics/mercantilism/

3. https://www.britannica.com/money/mercantilism

4. https://www.ebsco.com/research-starters/diplomacy-and-international-relations/mercantilism

5. https://www.economicshelp.org/blog/17553/trade/mercantilism-theory-and-examples/

6. https://www.econlib.org/library/Enc/Mercantilism.html

7. https://dictionary.cambridge.org/us/dictionary/english/mercantilism

 

"Mercantilism is an economic theory and policy from the 16th-18th centuries where governments heavily regulated trade to build national wealth and power by maximizing exports, minimizing imports, and accumulating precious metals like gold and silver." - Term: Mercantilism

read more
Quote: J.P. Morgan – On resources

Quote: J.P. Morgan – On resources

“We believe the clean technology transition is igniting a new supercycle in critical commodities, with natural resource companies emerging as winners.” – J.P. Morgan – On resources

When J.P. Morgan Asset Management framed the clean technology transition in these terms, it captured a profound shift underway at the intersection of climate policy, industrial strategy and global capital allocation.1,5 The quote stands at the heart of their analysis of how decarbonisation is reshaping demand for metals, minerals and energy, and why this is likely to support elevated commodity prices for years rather than months.1

The immediate context is the rapid acceleration of the energy transition. Governments have committed to net zero pathways, corporates face growing regulatory and investor pressure to decarbonise, and consumers are adopting electric vehicles and clean technologies at scale. J.P. Morgan argues that this is not merely an environmental story, but an economic retooling comparable in scale to previous industrial revolutions.1,4

Their research highlights two linked dynamics. First, the decarbonised economy is less fuel-intensive but far more materials-intensive. Replacing fossil fuel power with renewables requires vast quantities of copper, aluminium, nickel, lithium, cobalt, manganese and graphite to build solar and wind farms, grids and storage systems.1 Second, the speed of this transition matters as much as its direction. Even under conservative scenarios, J.P. Morgan estimates substantial increases in demand for critical minerals by 2030; under more ambitious net zero pathways, demand could rise by around 110% over that period, on top of the 50% increase already seen in the previous decade.1

In this framing, natural resource companies – particularly miners and producers of critical minerals – shift from being perceived purely as part of the old carbon-heavy economy to being central enablers of clean technologies. J.P. Morgan points out that while fossil fuel demand will decline over time, the scale of required investment in metals and minerals, as well as transmission infrastructure, effectively re-ranks many resource businesses as strategic assets for the low-carbon future.1 Valuations that once reflected cyclical, late-stage industries may therefore underestimate the structural demand embedded in net zero commitments.

The quote also reflects J.P. Morgan’s broader thinking on commodity and energy supercycles. Their research on energy markets describes a supercycle as a sustained period of elevated prices driven by structural forces that can last for a decade or more.3,4 In previous eras, those forces included post-war reconstruction and the rise of China as the world’s industrial powerhouse. Today, they see the combination of chronic underinvestment in supply, intensifying climate policy, and rising demand for both traditional and clean energy as setting the stage for a new, complex supercycle.2,3,4

Within the firm, analysts have argued that higher-for-longer interest rates raise the cost of debt and equity for energy producers, reinforcing supply discipline and pushing up the marginal cost of production.3 At the same time, the rapid build-out of renewables is constrained by supply chain, infrastructure and key materials bottlenecks, meaning that legacy fuels still play a significant role even as capital increasingly flows towards clean technologies.3 This dual dynamic – structural demand for critical minerals on the one hand and a constrained, more disciplined fossil fuel sector on the other – underpins the conviction that a supercycle is forming across parts of the commodity complex.

The idea of commodity supercycles predates the current climate transition and has been shaped by several generations of theorists and empirical researchers. In the mid-20th century, economists such as Raúl Prebisch and Hans Singer first highlighted the long-term terms-of-trade challenges faced by commodity exporters, noting that prices for primary products tended to fall relative to manufactured goods over time. Their work prompted an early focus on structural forces in commodity markets, although it emphasised long-run decline rather than extended booms.

Later, analysts began to examine multi-decade patterns of rising and falling prices. Structural models of commodity prices show that at major stages of economic development – such as the agricultural and industrial revolutions – commodity intensity tends to increase markedly, creating conditions for supercycles.4 These models distinguish between business cycles of a few years, investment cycles spanning roughly a decade, and longer supercycle components that can extend beyond 20 years.4 The supercycle lens gained prominence as researchers studied the commodity surge associated with China’s breakneck urbanisation and industrialisation from the late 1990s to the late 2000s.
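
To make the distinction between these horizons concrete, the empirical supercycle literature typically decomposes the (log) real price of a commodity into components of different periodicities. A stylised version of that decomposition, assuming a simple additive structure (the exact filters and frequency bands vary by study), is:

\[
\ln P_t \;=\; \tau_t + s_t + m_t + c_t + \varepsilon_t
\]

where \(\tau_t\) is the long-run trend, \(s_t\) the supercycle component (periodicities beyond roughly 20 years), \(m_t\) the intermediate investment-cycle component (roughly 8 to 20 years), \(c_t\) the business-cycle component (a few years) and \(\varepsilon_t\) short-term noise. On this view, a supercycle is simply a sustained swing in \(s_t\) large enough to dominate the shorter components.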

That China-driven episode became the archetype of a modern commodity supercycle: a powerful, sustained demand shock focused on energy, metals and bulk materials, amplified by long supply lead times and capital expenditure cycles. J.P. Morgan and other institutions have documented how this supercycle drove a 12-year uptrend in prices, culminating before the global financial crisis, followed by a comparably long down-cycle as supply eventually caught up and Chinese growth shifted to a less resource-intensive model.2,4

Academic and market theorists have since refined the concept. They argue that supercycles emerge when three elements coincide. First, there must be a structural, synchronised increase in demand, often tied to a global development episode or technological shift. Second, supply in key commodities must be constrained by geology, capital discipline, regulation or long project lead times. Third, macro-financial conditions – including real interest rates, inflation expectations and currency trends – must align to support investment flows into real assets. The question for today’s transition is whether decarbonisation meets these criteria.

On the demand side, the clean tech revolution clearly resembles previous development stages in its resource intensity. J.P. Morgan notes that electric vehicles require significantly more minerals than internal combustion engine cars – roughly six times as much in aggregate when accounting for lithium, nickel, cobalt, manganese and graphite.1 Similarly, building solar and wind capacity, and the vast grid infrastructure to connect them, calls for much more copper and aluminium per unit of capacity than conventional power systems.1 The International Energy Agency’s projections, which J.P. Morgan draws on, indicate that even under modest policy assumptions, renewable electricity capacity is set to increase by around 50% by 2030, with more ambitious net zero scenarios implying far steeper growth.1

Supply, however, has been shaped by a decade of caution. After the last supercycle ended, many mining and energy companies cut back capital expenditure, streamlined balance sheets and prioritised shareholder returns. Regulatory processes for new mines lengthened, environmental permitting became more stringent, and social expectations around land use and community impacts increased. The result is that bringing new supplies of copper, nickel or lithium online can take many years and substantial capital, creating a lag between price signals and physical supply.

Theorists of the investment cycle – often identified with work on 8 to 20-year intermediate commodity cycles – argue that such periods of underinvestment sow the seeds for the next up-cycle.4 When demand resurges due to a structural driver, constrained supply leads to persistent price pressures until investment, technology and substitution can rebalance the market. In the case of the energy transition, the requirement for large amounts of specific minerals, combined with concentrated supply in a small number of countries, intensifies this effect and introduces geopolitical considerations.

Another important strand of thought concerns the evolution of energy systems themselves. Analysts focusing on energy supercycles emphasise that transitions historically unfold over multiple decades and rarely proceed smoothly.3,4 Even as clean energy capacity expands rapidly, global energy demand continues to grow, and existing systems must meet rising consumption while new infrastructure is built. J.P. Morgan’s energy research describes this as a multi-decade process of “generating and distributing the joules” required to both satisfy demand and progressively decarbonise.3 During this period, traditional energy sources often remain critical, creating complex price dynamics across oil, gas, coal and renewables-linked commodities.

Within this broader theoretical frame, the clean technology transition can be seen as a distinctive supercycle candidate. Unlike the China wave, which centred on industrialisation and urbanisation within one country, the net zero agenda is globally coordinated and policy-driven. It spans power generation, transport, buildings, industry and agriculture, and requires both new physical assets and digital infrastructure. Structural models referenced by J.P. Morgan note that such system-wide investment programmes have historically been associated with sustained periods of elevated commodity intensity.4

At the same time, there is active debate among economists and market strategists about the durability and breadth of any new supercycle. Some caution that efficiency gains, recycling and substitution could cap demand growth in certain minerals over time. Others point to innovation in battery chemistries, alternative materials and manufacturing methods that may reduce reliance on some critical inputs. Still others argue that policy uncertainty and potential fragmentation in global trade could disrupt smooth investment and demand trajectories. Theorists of supercycles emphasise that these are not immutable laws but emergent patterns that can be shaped by technology, politics and finance.

J.P. Morgan’s perspective in the quoted insight acknowledges these uncertainties while underscoring the asymmetry in the coming decade. Even in conservative scenarios, their work suggests that demand for critical minerals rises substantially relative to recent history.1 Under more ambitious climate policies, the increase is far greater, and tightness in markets such as copper, nickel, cobalt and lithium appears likely, especially towards the end of the 2020s.1 Against this backdrop, natural resource companies with high-quality assets, disciplined capital allocation and credible sustainability strategies are positioned not as relics of the past, but as essential partners in delivering the energy transition.

This reframing has important implications for investors and corporates alike. For investors, it suggests that the traditional division between “old” resource-heavy industries and “new” clean tech sectors is too simplistic. The hardware of decarbonisation – from EV batteries and charging networks to grid-scale storage, wind turbines and solar farms – depends on a complex upstream ecosystem of miners, processors and materials specialists. For corporates, it highlights the strategic premium on securing access to critical inputs, managing long-term supply contracts, and integrating sustainability into resource development.

The quote from J.P. Morgan thus sits at the confluence of three intellectual streams: long-run theories of commodity supercycles, modern analysis of energy transition dynamics, and evolving views of how natural resource businesses fit into a low-carbon world. It encapsulates the idea that the path to net zero is not dematerialised; instead, it is anchored in physical assets, industrial capabilities and supply chains that must be financed, built and operated over many years. For those able to navigate this terrain – and for the theorists tracing its contours – the clean technology transition is not only an environmental imperative but also a defining economic narrative of the coming decades.

References

1. https://am.jpmorgan.com/hk/en/asset-management/adv/insights/market-insights/market-bulletins/clean-energy-investment/

2. https://www.foxbusiness.com/markets/biden-climate-change-fight-commodities-supercycle

3. https://www.jpmorgan.com/insights/global-research/commodities/energy-supercycle

4. https://www.jpmcc-gcard.com/digest-uploads/2021-summer/Page%2074_79%20GCARD%20Summer%202021%20Jerrett%20042021.pdf

5. https://am.jpmorgan.com/us/en/asset-management/institutional/card-list-libraries/sustainable-insights-climate-tab-us/

6. https://www.jpmorgan.com/insights/global-research/outlook/market-outlook

7. https://www.bscapitalmarkets.com/hungry-for-commodities-ndash-is-a-new-commodity-super-cycle-here.html

"We believe the clean technology transition is igniting a new supercycle in critical commodities, with natural resource companies emerging as winners." - Quote: J.P. Morgan
