
Global Advisors | Quantified Strategy Consulting


Quote: Council on Foreign Relations – Leapfrogging China’s Critical Minerals Dominance

“Artificial intelligence (AI) is now an integral part of new chemistry development and is set to supercharge the future of material engineering and reduce the time to discover, test, and deploy new materials and designs.” – Council on Foreign Relations – Leapfrogging China’s Critical Minerals Dominance

This statement from the influential report Leapfrogging China’s Critical Minerals Dominance: How Innovation Can Secure U.S. Supply Chains, published by the Council on Foreign Relations (CFR) and Silverado Policy Accelerator, underscores a pivotal shift in global resource strategy.1,3,4 Released on 5 February 2026, the report argues that the United States cannot compete with China through conventional mining and processing alone, given Beijing’s decades-long entrenchment across the critical minerals ecosystem – from extraction to magnet manufacturing.1,2 Instead, it advocates ‘leapfrogging’ via disruptive technologies, with artificial intelligence (AI) positioned as a transformative force in accelerating materials discovery and engineering.1,4

Context of the Quote and Geopolitical Stakes

Critical minerals – such as rare-earth elements (REEs), lithium, cobalt, and nickel – are indispensable for advanced technologies, including electric vehicles, renewable energy systems, defence equipment, and semiconductors.1,5 China dominates this sector, controlling over 90% of heavy REE processing and nearly all permanent magnet production, creating strategic chokepoints that it has weaponised through export controls since 2023.1 In October 2025, Beijing expanded restrictions on REEs and related technologies, nearly halting global supply chains and exposing U.S. vulnerabilities.1

The report emerges amid escalating U.S.-China tensions under the second Trump administration, where retaliatory tariffs and bans on semiconductor inputs like gallium and germanium have intensified.1 Traditional responses, such as expanding domestic mining, face insurmountable hurdles: multi-year permitting, billions in upfront costs, environmental concerns, and China’s unmatched scale.1,2 The quote highlights AI’s potential to bypass these by supercharging chemistry and materials engineering, slashing discovery-to-deployment timelines from decades to years.1

Authors and Their Expertise

The quote originates from a report co-authored by two leading experts in geoeconomics and supply chain policy.

  • Heidi Crebo-Rediker, Senior Fellow for Geoeconomics at CFR and a member of Silverado’s Strategic Council, brings deep experience from her time as U.S. State Department Chief Economist (2014-2017) and roles at Goldman Sachs and the National Economic Council. Her work focuses on financial sanctions, economic statecraft, and resilient supply chains.3,4
  • Mahnaz Khan, Vice President of Policy for Critical Supply Chains at Silverado Policy Accelerator, specialises in frontier technologies and mineral security. Silverado, a non-partisan think tank, drives innovation in national security challenges, and Khan’s contributions emphasise pragmatic financing and allied cooperation to scale breakthroughs.3,4

Endorsed by CFR’s Shannon O’Neil, Senior Vice President of Studies, the report calls for embedding innovation – including AI-driven materials engineering – into U.S. policy, alongside waste recovery, substitute materials, and international frameworks like the Forum on Resource Geostrategic Engagement (FORGE).2,4

Leading Theorists in AI-Driven Materials Science and Critical Minerals

The report’s vision aligns with pioneering work at the intersection of AI, chemistry, and materials engineering, where theorists and researchers are revolutionising discovery processes.

  • Alán Aspuru-Guzik (University of Toronto) is a trailblazer in AI for molecular discovery. His Molecular Space Exploration Engine (MOSE) and A-Lab – a fully autonomous lab – use reinforcement learning and generative models to design and synthesise novel materials, such as battery electrolytes, in weeks rather than years. Aspuru-Guzik’s ‘materials genome’ approach treats chemical space as a vast data landscape for AI navigation, directly supporting faster REE substitutes and magnet alternatives.1
  • Roald Hoffmann (Nobel Laureate in Chemistry, 1981), though not an AI specialist, laid foundational theories in extended Hückel molecular orbital methods, enabling computational simulations that AI now accelerates. His work on chemical bonding informs AI models predicting material properties under extreme conditions, vital for critical minerals applications.
  • Andrea Goldsmith (Stanford) and collaborators in AI-optimised catalysis advance sustainable extraction from tailings and waste – key report recommendations. Their models integrate machine learning with quantum chemistry to design enzymes and photocatalysts for REE recovery, reducing environmental impact.1
  • Jeremy Keith (EPFL) leads in generative AI for inorganic materials, developing models like M3GNet that predict properties across millions of crystal structures. This underpins high-throughput screening for rare-earth-free magnets, addressing China’s heavy REE monopoly.1

These theorists converge on a paradigm where AI acts as an ‘oracle’ for inverse design: specifying desired properties (e.g., magnet strength without dysprosium) and generating viable compounds. Combined with robotic labs and quantum computing, this could cut development times by 90%, aligning precisely with the report’s leapfrogging imperative.1,4
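To make the inverse-design loop described above concrete, the following is a minimal, illustrative Python sketch. It is not any research group’s actual pipeline: the random candidate generator, the `predict_coercivity` surrogate and the target threshold are hypothetical placeholders standing in for trained generative and property-prediction models.

```python
import random

# Hypothetical surrogate model: in practice this would be a trained
# property predictor (e.g. a graph neural network), not a toy heuristic.
def predict_coercivity(composition: dict) -> float:
    score = 2.0 * composition.get("Fe", 0) + 1.5 * composition.get("N", 0)
    score -= 5.0 * composition.get("Dy", 0)       # penalise dysprosium content
    return score + random.uniform(-0.05, 0.05)    # stand-in for model uncertainty

def random_candidate() -> dict:
    # Generate a random normalised composition over a small element palette.
    elements = ["Fe", "N", "Co", "B", "Dy"]
    weights = [random.random() for _ in elements]
    total = sum(weights)
    return {el: w / total for el, w in zip(elements, weights)}

def inverse_design(n_candidates: int = 10_000, target: float = 1.2) -> list:
    # Inverse design as search: state the desired property (a coercivity proxy
    # above `target`, essentially dysprosium-free) and keep compounds meeting it.
    hits = [c for c in (random_candidate() for _ in range(n_candidates))
            if c["Dy"] < 0.01 and predict_coercivity(c) >= target]
    return sorted(hits, key=predict_coercivity, reverse=True)

if __name__ == "__main__":
    shortlist = inverse_design()
    print(f"{len(shortlist)} dysprosium-free candidates met the target")
```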

Implications for Materials Engineering

AI’s integration promises not just speed but resilience: engineering alloys robust to supply shocks, recycling magnets from e-waste at scale, and bioleaching minerals from industrial byproducts.1 U.S. investments, like the $1.4 billion in rare-earth magnet recycling (November 2025), exemplify this shift, targeting firms like MP Materials and ReElement Technologies.1 By prioritising innovation over replication, the West can forge secure supply chains, diminishing China’s leverage and powering the next industrial era.

References

1. https://www.cfr.org/reports/leapfrogging-chinas-critical-minerals-dominance

2. https://www.cfr.org/articles/u-s-allies-aim-to-break-chinas-critical-minerals-dominance

3. https://www.silverado.org/publications/silverado-and-the-council-on-foreign-relations-release-new-report/

4. https://www.cfr.org/articles/new-cfr-report-outlines-how-the-u-s-can-leapfrog-chinas-critical-minerals-dominance

5. https://www.cfr.org

6. https://www.cfr.org/report/enter-dragon-and-elephant

7. https://podcasts.apple.com/us/podcast/this-is-how-the-us-can-become-a-player-in-rare-earth-metals/id1056200096?i=1000748342100

"Artificial intelligence (AI) is now an integral part of new chemistry development and is set to supercharge the future of material engineering and reduce the time to discover, test, and deploy new materials and designs." - Quote: Council on Foreign Relations - Leapfrogging China’s Critical Minerals Dominance


Term: Lean in to the moment

“To ‘lean into the moment’ means to engage fully with the present experience, situation, or task, rather than avoiding it or being distracted. It implies a willingness to be present, observant and responsive, especially when the situation might be uncomfortable or challenging.” – Lean in to the moment

To lean into the moment means to engage fully with the present experience, situation, or task, rather than avoiding it or being distracted. It implies a willingness to be present, observant, and responsive, especially when the situation might be uncomfortable or challenging. This phrase draws from the broader idiom ‘lean into’, which signifies embracing or committing to something with determination, often in the face of uncertainty or difficulty.

The expression encourages owning the current reality, casting off concerns, and moving forward with confidence. For instance, it can involve pursuing a task with great effort and perseverance, accepting potentially negative traits to turn them positive, or persevering despite risk. In creative or professional contexts, it means embracing uncertainty to foster growth, as seen in teaching scenarios where one confronts fear head-on.

Origins and Evolution of the Phrase

The phrasal verb ‘lean into’ emerged in the mid-20th century in the US, meaning to embrace or commit fully. Early examples include a 1941 citation from Princeton Alumni Weekly: ‘Kent Cooper is leaning into it at Columbia Business.’ By the 21st century, ‘lean in’ (a related form) gained prominence, defined as persevering amid difficulty, and was popularised by Sheryl Sandberg’s 2013 book Lean In, urging women to pursue leadership.

In mindfulness contexts, ‘lean into the moment’ aligns with practices of full presence, transforming challenges into opportunities for empowerment and clarity.

Key Theorist: Jon Kabat-Zinn and Mindfulness-Based Stress Reduction

The most relevant strategy theorist linked to ‘leaning into the moment’ is Jon Kabat-Zinn, a pioneer of mindfulness in modern psychology and stress management. His work embodies the concept through teachings on non-judgmental awareness of the present, even in discomfort.

Biography: Born in 1944 in New York City, the son of immunologist Elvin Kabat and painter Sally Kabat, Kabat-Zinn earned a PhD in molecular biology from MIT in 1971. Although initially focused on scientific research, he shifted his path after a profound meditation experience. In 1979, he founded the Mindfulness-Based Stress Reduction (MBSR) programme at the University of Massachusetts Medical Center, adapting ancient Buddhist practices into secular, evidence-based interventions for chronic pain and stress.

Relationship to the Term: Kabat-Zinn’s philosophy directly mirrors ‘leaning into the moment’. In MBSR, he teaches ‘leaning into’ sensations of pain or anxiety without resistance, using phrases like ‘being with’ or ‘allowing’ the experience fully. His seminal book Full Catastrophe Living (1990) instructs participants to ‘lean into the sharp point’ of discomfort, fostering presence and responsiveness. This approach has influenced corporate strategy, leadership training, and resilience-building, where executives ‘lean into’ uncertainty much like Kabat-Zinn’s patients embrace challenging moments. His work underpins global mindfulness initiatives, with over 700 MBSR clinics worldwide by the 2020s.

Kabat-Zinn’s integration of mindfulness into strategy emphasises observable benefits: reduced reactivity, enhanced focus, and adaptive decision-making in volatile environments.

References

1. https://www.webclique.net/lean-into-it/

2. https://idioms.thefreedictionary.com/lean+into+(someone+or+something)

3. https://www.merriam-webster.com/dictionary/lean%20in

4. https://grammarphobia.com/blog/2024/08/lean-into.html

"To 'lean into the moment' means to engage fully with the present experience, situation, or task, rather than avoiding it or being distracted. It implies a willingness to be present, observant and responsive, especially when the situation might be uncomfortable or challenging." - Term: Lean in to the moment


Term: Thought experiment

“A thought experiment (also known by the German term Gedankenexperiment) is a hypothetical scenario imagined to explore the consequences of a theory, principle, or idea when a real-world physical experiment is impossible, unethical, or impractical.” – Thought experiment

A thought experiment, known in German as Gedankenexperiment, is a hypothetical scenario imagined to explore the consequences of a theory, principle, or idea when conducting a real-world physical experiment is impossible, unethical, or impractical1,7. It involves using hypotheticals to logically reason out solutions to difficult questions, often simulating experimental processes through imagination alone1. These mental exercises are employed across disciplines, particularly philosophy and theoretical sciences, for purposes such as education, conceptual analysis, exploration, hypothesising, theory selection, and implementation2,7.

Thought experiments challenge beliefs, offer fresh perspectives, and examine abstract concepts imaginatively without real-world repercussions3. They construct extreme situations to reveal insights unavailable through formal logic or abstract reasoning, by generating mental models of scenarios and manipulating them via simulation2. Though sometimes circular or rhetorical to emphasise a point, they provide epistemic access to features of representations beyond propositional logic1,2.

Famous Examples

  • Mary’s Room (Frank Jackson, 1982): A scientist, Mary, knows everything about colour physically from a black-and-white room but learns something new upon seeing red, questioning qualia and physicalism2,3,5.
  • Chinese Room (John Searle, 1980s): A person follows rules to manipulate Chinese symbols without understanding them, arguing computers simulate but do not comprehend meaning2,4.
  • Drowning Child (Peter Singer, 2009): Would you save a drowning child if it ruined your shoes? This highlights obligations to aid distant strangers2,3.
  • Trolley Problem: Divert a trolley to kill one instead of five? Variations probe ethics of action vs. inaction6.
  • Brain in a Vat: Your brain in a vat fed simulated experiences questions reality and knowledge4.

Best Related Strategy Theorist: Erwin Schrödinger

Among theorists linked to thought experiments, Erwin Schrödinger stands out for his iconic contribution in quantum mechanics, with a profound backstory tying his work to strategic scientific reasoning.

Born in 1887 in Vienna, Austria, Schrödinger was a physicist whose diverse interests spanned philosophy, biology, and Eastern mysticism. He studied at the University of Vienna, served in World War I, and held professorships in Zurich, Berlin (succeeding Planck), Oxford, Graz, and Dublin. Awarded the 1933 Nobel Prize in Physics (shared with Paul Dirac) for wave mechanics, he fled Nazi Germany in 1933 due to his opposition to antisemitism, despite his own complex personal life7. Schrödinger’s polymath nature influenced his interdisciplinary approach, later extending to genetics via his 1944 book What is Life?, inspiring DNA discoverers Watson and Crick.

His relationship to the thought experiment is epitomised by Schrödinger’s Cat (1935), devised to critique the Copenhagen interpretation of quantum mechanics. Imagine a cat in a sealed box with a radioactive atom: if it decays (50% chance), poison releases, killing the cat. Quantum superposition implies the cat is simultaneously alive and dead until observed – a paradoxical Gedankenexperiment highlighting measurement problems and the absurdity of applying quantum rules macroscopically1,7. This strategic tool exposed flaws in prevailing theories, spurring debates on wave function collapse, many-worlds interpretation, and quantum reality. Schrödinger used it not to endorse but to provoke clearer strategies for quantum theory, cementing thought experiments’ role in scientific strategy7.
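In modern textbook notation the paradox is often written as an entangled superposition of the atom and the cat; the LaTeX rendering below is the standard formulation rather than Schrödinger’s own 1935 wording:

```latex
\[
\lvert \Psi \rangle \;=\; \tfrac{1}{\sqrt{2}}
\big( \lvert \text{no decay} \rangle \otimes \lvert \text{alive} \rangle
\;+\; \lvert \text{decay} \rangle \otimes \lvert \text{dead} \rangle \big)
\]
```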

References

1. https://thedecisionlab.com/reference-guide/neuroscience/thought-experiments

2. https://www.missiontolearn.com/thought-experiments/

3. https://bigthink.com/personal-growth/seven-thought-experiments-thatll-make-you-question-everything/

4. https://www.toptenz.net/top-10-most-famous-thought-experiments.php

5. https://adarshbadri.me/philosophy/philosophical-thought-experiments/

6. https://guides.gccaz.edu/philosophy-guide/experiments

7. https://plato.stanford.edu/entries/thought-experiment/

8. https://miamioh.edu/howe-center/hwac/disciplinary-writing-guides/philosophy/thought-experiments.html

"A thought experiment (also known by the German term Gedankenexperiment) is a hypothetical scenario imagined to explore the consequences of a theory, principle, or idea when a real-world physical experiment is impossible, unethical, or impractical." - Term: Thought experiment


Quote: Bill Gurley – GP at Benchmark

“AI is leverage because it can scale cognition. It can scale certain kinds of thinking and writing and analysis. And that means individuals can do more. Small teams can do more. It changes the power dynamics.” – Bill Gurley – GP at Benchmark

Bill Gurley: The Visionary Venture Capitalist

Bill Gurley serves as a General Partner at Benchmark, one of Silicon Valley’s most prestigious venture capital firms. Renowned for his prescient investments in transformative companies such as Uber, GrubHub, and Zillow, Gurley has a track record of identifying technologies that reshape industries and power structures1,4,7. His perspective on artificial intelligence (AI) stems from deep engagement with the sector, including discussions on scaling laws, model sizes, and inference costs in podcasts like BG2 with Brad Gerstner1,2. In the quoted interview with Tim Ferriss, Gurley articulates how AI acts as a force multiplier, enabling individuals and small teams to achieve outsized impact by scaling cognitive tasks traditionally limited by human capacity7.

Context of the Quote

The quote originates from a conversation hosted by Tim Ferriss, where Gurley explores AI’s role in the modern economy. He emphasises that AI scales cognition – encompassing thinking, writing, and analysis – thereby democratising high-level intellectual work. This shift empowers solo entrepreneurs and lean teams, disrupting traditional power dynamics dominated by large organisations with vast resources7. Gurley’s views align with his broader commentary on AI’s rapid evolution, including the implications of massive compute clusters by leaders like Elon Musk, OpenAI, and Meta, and the surprising efficiency of smaller models trained beyond conventional limits1. He highlights real-world applications, such as inference costs outweighing training in products like Amazon’s Alexa, underscoring AI’s scalability for practical deployment1.

Backstory on Leading Theorists in AI Scaling and Leverage

Gurley’s idea of AI as leverage builds on foundational theories in AI scaling laws and cognitive amplification. Key figures include:

  • Sam Altman (OpenAI CEO): Altman has championed scaling massive models, predicting that AI will handle every cognitive task humans perform within 3-4 years, unlocking trillions in value from replaced human labour2. Discussions with Gurley reference OpenAI’s ongoing training of 405 billion parameter models1.
  • Elon Musk: Musk forecasts AI surpassing human cognition across all tasks imminently, driving investments in enormous compute clusters for training and inference scaling by factors of a million or billion1,2.
  • Mark Zuckerberg (Meta): Zuckerberg revealed Meta’s Llama models, including an 8 billion and 70 billion parameter version, trained past the ‘Chinchilla point’ – a theoretical diminishing returns threshold from a Google paper – to pack superior intelligence into smaller sizes with fixed datasets1. This supports Gurley’s thesis on efficient scaling for broader access.
  • Chinchilla Scaling Law Authors (Google DeepMind): Their seminal paper defined optimal data-to-model size ratios for pre-training, challenging earlier assumptions and influencing debates on whether bigger always means better1. Meta’s results from training past this point suggest continued gains from extended training; a back-of-envelope sketch of the compute-optimal rule follows this list.
  • Satya Nadella and Jensen Huang: Microsoft and Nvidia leaders emphasise inference scaling, with Nadella noting compute demands exploding as models handle complex reasoning chains, aligning with Gurley’s power shift to agile users2.
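As an illustration of the scaling trade-off referenced in the Chinchilla bullet above, the sketch below applies two commonly cited rules of thumb from that line of work: training compute C ≈ 6ND FLOPs and a compute-optimal ratio of roughly 20 training tokens per parameter. These are published approximations, not a reconstruction of any lab’s internal planning.

```python
import math

def chinchilla_optimal(compute_flops: float, tokens_per_param: float = 20.0):
    """Approximate compute-optimal model size N (parameters) and dataset size D
    (tokens) for a training budget, using C ~ 6*N*D and D ~ 20*N."""
    n_params = math.sqrt(compute_flops / (6.0 * tokens_per_param))
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

if __name__ == "__main__":
    # Budgets spanning roughly GPT-3-class to frontier-class training runs.
    for budget in (1e21, 1e23, 1e25):
        n, d = chinchilla_optimal(budget)
        print(f"C={budget:.0e} FLOPs -> ~{n/1e9:.1f}B params, ~{d/1e12:.2f}T tokens")
```

On this arithmetic, an 8-billion-parameter model is ‘compute-optimal’ at only about 160 billion training tokens, which is why training such a model on trillions of tokens counts as going well past the Chinchilla point.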

These theorists collectively underpin Gurley’s observation: AI’s ability to scale cognition via compute, data, and innovative training redefines leverage, favouring nimble players over bureaucratic giants1,2,3. Gurley’s real-world examples, like a 28-year-old entrepreneur superpowered by AI for site selection, illustrate this in action across regions including China3.

Implications for Power Dynamics

Gurley’s quote signals a paradigm shift akin to an ‘Industrial Revolution for intelligence production’, where inference compute scales exponentially, enabling small entities to rival incumbents1,2. Venture trends, such as mega-funds writing huge cheques to AI startups, reflect this frenzy, blurring early and late-stage investing5. Yet Gurley cautions staying ‘far from the edge’, advocating focus on core innovations amid hype4.

References

1. https://www.youtube.com/watch?v=iTwZzUApGkA

2. https://www.youtube.com/watch?v=yPD1qEbeyac

3. https://www.podchemy.com/notes/840-bill-gurley-investing-in-the-ai-era-10-days-in-china-and-important-life-lessons-from-bob-dylan-jerry-seinfeld-mrbeast-and-more-06a5cd0f-d113-5200-bbc0-e9f57705fc2c

4. https://www.youtube.com/watch?v=D0230eZsRFw

5. https://orbanalytics.substack.com/p/the-new-normal-bill-gurley-breaks

6. https://podcasts.apple.com/ca/podcast/ep20-ai-scaling-laws-doge-fsd-13-trump-markets-bg2/id1727278168?i=1000677811828

7. https://tim.blog/2025/12/17/bill-gurley-running-down-a-dream/

"AI is leverage because it can scale cognition. It can scale certain kinds of thinking and writing and analysis. And that means individuals can do more. Small teams can do more. It changes the power dynamics." - Quote: Bill Gurley


Quote: Johan van Jaarsveld – BHP Chief Technical Officer

“AI is no longer a future concept for BHP. It is increasingly part of how we run our operations. Our focus is on applying it in practical, governed ways that support our teams in achieving safer, more productive and more reliable outcomes.” – Johan van Jaarsveld – BHP Chief Technical Officer

In a landmark statement on 30 January 2026, Johan van Jaarsveld, BHP’s Chief Technical Officer, encapsulated the company’s bold shift towards embedding artificial intelligence into its core operations. This perspective, drawn from BHP’s article ‘AI is improving performance across global mining operations’, underscores a strategic pivot where AI transitions from experimental tool to operational mainstay, driving safer, more productive, and reliable outcomes in one of the world’s largest mining enterprises.1,5

Who is Johan van Jaarsveld?

Johan van Jaarsveld assumed the role of Chief Technical Officer at BHP effective 1 March 2024, bringing over 25 years of expertise spanning resources, finance, and technology across continents including Asia, Canada, Australia, and South Africa.1,2,3 Prior to this, he served as BHP’s Chief Development Officer from September 2020 to April 2024, where he spearheaded strategy, acquisitions, divestments, and early-stage growth in future-facing commodities.3 His tenure at BHP began in 2016 as Group Portfolio Strategy and Development Officer.

Before joining BHP, van Jaarsveld held senior executive positions at global giants: Senior Vice President of Business Development at Barrick Gold Corporation in Toronto (2015-2016), Managing Director at Goldman Sachs in Hong Kong (2011-2014), Managing Director at The Blackstone Group in Hong Kong (2008-2011), and Vice President at Lehman Brothers (2007).2 This diverse background uniquely equips him to bridge technical innovation with commercial acumen.

Academically, van Jaarsveld holds a PhD in Engineering (Extractive Metallurgy) from the University of Melbourne (2001), a Master of Commerce in Applied Finance from Melbourne Business School (2002), and a Bachelor of Engineering (Chemical) from Stellenbosch University, South Africa.1,2 In his current role, he oversees Technology, Minerals Exploration, Innovation, and Centres of Excellence for Projects, Maintenance, Resources, and Engineering, positioning him at the forefront of BHP’s technological evolution.1

The Context of the Quote: AI at BHP

Van Jaarsveld’s remarks reflect BHP’s accelerating adoption of AI, as detailed in early 2026 publications. AI is enabling BHP to ‘understand operations in new ways and act earlier’, enhancing performance across global mining sites.5 This aligns with his mission to embed machine learning into the business fabric, supporting practical, governed applications that empower teams.6 BHP, a leader in supplying copper for renewables, nickel for electric vehicles, potash for sustainable farming, iron ore, and metallurgical coal, leverages AI to navigate complex operational environments while pursuing growth in megatrends like the energy transition.2,3

The quote builds on BHP’s leadership refresh announced in December 2023, when van Jaarsveld’s appointment was hailed by CEO Mike Henry as bolstering capacity for safe, reliable performance and stakeholder engagement.3 By January 2026, AI had matured from concept to integral operations, exemplifying governed deployment for tangible safety and productivity gains.1,5

Leading Theorists and Evolution of AI in Mining

The integration of AI in mining draws from foundational theories in artificial intelligence, machine learning, and operational optimisation, pioneered by key figures whose work underpins industrial applications.

  • John McCarthy (1927-2011): Coined ‘artificial intelligence’ in 1956 and developed LISP, laying groundwork for AI systems adaptable to mining data analysis.
  • Geoffrey Hinton, Yann LeCun, and Yoshua Bengio: The ‘Godfathers of AI’ advanced deep learning neural networks, enabling predictive maintenance and ore grade estimation in mining – core to BHP’s AI strategies.
  • Reinforcement Learning Pioneers like Richard Sutton and Andrew Barto: Their frameworks optimise autonomous equipment and resource allocation, directly relevant to safer mining operations; the canonical update rule is shown after this list.
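For orientation, the temporal-difference update at the heart of Sutton and Barto’s framework is reproduced below in standard notation; how, or whether, any particular miner applies it is not detailed in the cited material.

```latex
\[
Q(s_t, a_t) \leftarrow Q(s_t, a_t)
+ \alpha \Big[ r_{t+1} + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \Big]
\]
```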

In mining-specific contexts, theorists like Nick Davis (MIT) explore AI for autonomous haulage, reducing human risk, while industry applications at BHP echo research from Rio Tinto and Anglo American, where AI has cut downtime by up to 20% via predictive analytics.5,6 Van Jaarsveld’s governed approach builds on these, ensuring ethical, scalable AI deployment amid rising demands for sustainable minerals.

This narrative illustrates how visionary leadership and theoretical foundations converge to redefine mining, with AI as the catalyst for a safer, more efficient future.

References

1. https://www.bhp.com/about/board-and-management/johan-van-jaarsveld

2. https://cio-sa.co.za/profiles/johan-van-jaarsveld/

3. https://www.bhp.com/es/news/media-centre/releases/2023/12/executive-leadership-team-update

4. https://www.marketscreener.com/insider/JOHAN-VAN-JAARSVELD-A1Y5XA/

5. https://im-mining.com/2026/01/30/ai-helping-bhp-understand-operations-in-new-ways-and-act-earlier-van-jaarsveld-says/

6. https://www.miningmagazine.com/technology/news-analysis/4414802/bhp-faith-ai

7. https://www.bhp.com/about/board-and-management

"“AI is no longer a future concept for BHP. It is increasingly part of how we run our operations. Our focus is on applying it in practical, governed ways that support our teams in achieving safer, more productive and more reliable outcomes.” - Quote: Johan van Jaarsveld - BHP Chief Technical Officer


Term: Abundance

“Abundance is defined as a state where essential resources – such as housing, energy, healthcare, and transportation – are made flourishing, affordable, and universally accessible through an intentional focus on increasing supply.” – Abundance

Abundance is defined as a state where essential resources – such as housing, energy, healthcare, and transportation – are made flourishing, affordable, and universally accessible through an intentional focus on increasing supply.1,2

Comprehensive Definition and Context

The concept of abundance represents a paradigm shift in political and economic thinking, advocating a ‘politics of plenty’ that prioritises building and innovation over scarcity-driven approaches. Coined prominently in the 2025 book Abundance by Ezra Klein and Derek Thompson, it critiques how past regulations – intended to solve 1970s problems – now hinder progress in the 2020s by blocking urban density, green energy, and infrastructure projects.2,4

At its core, abundance calls for liberalism that not only protects but actively builds. It argues that modern crises stem from insufficient supply rather than mere distribution failures. Solutions involve streamlining regulations, boosting innovation in areas like clean energy, housing, and biotechnology, and fostering high-density economic hubs to enhance idea generation and mobility.1,2 This contrasts with traditional scarcity mindsets, where progressives fear growth and conservatives resist government intervention, trapping societies in unaffordability.4

Key pillars include:

  • Housing: Permitting high-rise developments in vital cities without undue barriers to increase supply and affordability.1
  • Energy and Infrastructure: Accelerating clean energy and transport projects to meet demands sustainably.2
  • Healthcare and Innovation: Expanding medical residencies, drug approvals, and R&D while balancing equity with supply growth – a ‘floor without a ceiling’ model, as seen in France.1
  • Governance Reform: Reducing legalistic processes that prioritise procedure over outcomes.7

Critics note it de-emphasises redistribution in favour of supply-side innovation, potentially overlooking power dynamics, though proponents see it as a path beyond socialist left and populist right extremes.3,4,5

Key Theorist: Ezra Klein

Ezra Klein is the pre-eminent theorist behind the abundance agenda, co-authoring the seminal book Abundance with Derek Thompson. A leading liberal thinker, Klein shifted focus from political polarisation to economic abundance, arguing it offers a unifying path forward.1,2

Born in 1984 in Irvine, California, Klein rose through blogging on Wonkblog at The Washington Post, analysing policy with data-driven rigour. He co-founded Vox in 2014 as editor-in-chief, building it into a platform for explanatory journalism. In 2021, he launched The Ezra Klein Show podcast and joined The New York Times as a columnist, influencing discourse on liberalism’s failures.1,2

Klein’s relationship to abundance stems from observing how liberal governance stagnated: over-regulation stifles building, exacerbating shortages in housing and energy. In conversations, such as his exchange with Tyler Cowen, he defends scaling elite institutions (e.g., doubling Harvard’s size) and critiques demand-side fixes without supply increases.1 His classically liberal view of power – checking arbitrary domination – underpins abundance as a corrective to equity-obsessed policies that neglect production.3 Klein positions it as reclaiming progressivism’s building ethos, countering both left-wing caution and right-wing anti-statism.2,4

Through Abundance, Klein provides intellectual firepower for a ‘liberalism that builds’, impacting policymakers and coalitions seeking tangible solutions.6,7

References

1. https://conversationswithtyler.com/episodes/ezra-klein-3/

2. https://www.simonandschuster.com/books/Abundance/Ezra-Klein/9781668023488

3. https://www.peoplespolicyproject.org/2025/06/09/abundance-has-a-theory-of-power/

4. https://en.wikipedia.org/wiki/Abundance_(Klein_and_Thompson_book)

5. https://www.bostonreview.net/articles/the-real-path-to-abundance/

6. https://www.inclusiveabundance.org/abundance-in-action/published-work/abundance-a-primer

7. https://www.eesi.org/articles/view/abundance-and-its-insights-for-policymakers

"Abundance is defined as a state where essential resources - such as housing, energy, healthcare, and transportation - are made flourishing, affordable, and universally accessible through an intentional focus on increasing supply." - Term: Abundance


Quote: Max Planck – Nobel laureate

“I regard consciousness as fundamental. I regard matter as derivative from consciousness. We cannot get behind consciousness. Everything that we talk about, everything that we regard as existing, postulates consciousness.” – Max Planck – Nobel laureate

This striking statement, made by Max Planck in a 1931 interview with The Observer, encapsulates a radical departure from the materialist worldview dominant in physics at the time. Planck, the father of quantum theory, challenges the notion that matter is the foundation of existence, proposing instead that consciousness underpins all reality. Spoken amid the revolutionary upheavals of early quantum mechanics, the quote reflects his lifelong reconciliation of empirical science with metaphysical inquiry.1,2,3

Max Planck: Life, Legacy, and Philosophical Evolution

Born in 1858 in Kiel, Germany, Max Karl Ernst Ludwig Planck rose from a family of scholars to become one of the 20th century’s most influential physicists. He studied at the universities of Munich and Berlin, earning his doctorate in 1879. Initially drawn to thermodynamics, Planck’s pivotal moment came in 1900 when he introduced the concept of energy quanta to resolve the ‘ultraviolet catastrophe’ in black-body radiation – a breakthrough that birthed quantum theory. For this, he received the Nobel Prize in Physics in 1918.3
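For reference, the quantisation hypothesis behind that breakthrough can be stated compactly: each oscillator of frequency ν exchanges energy only in multiples of hν, which leads to Planck’s black-body spectrum. The LaTeX below gives the standard modern statement rather than Planck’s original 1900 notation:

```latex
\[
E = n h \nu, \quad n = 0, 1, 2, \ldots
\qquad\Longrightarrow\qquad
B_{\nu}(\nu, T) = \frac{2 h \nu^{3}}{c^{2}}\,\frac{1}{e^{h\nu / k_{B} T} - 1}
\]
```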

Planck’s career spanned turbulent times: he served as president of the Kaiser Wilhelm Society (later the Max Planck Society) and navigated the intellectual and political storms of two world wars. A devout Lutheran, he grappled with the implications of his discoveries, often emphasising the limits of scientific materialism. In works like Where Is Science Going? (1932), he argued that science presupposes an external world known only through consciousness, echoing themes in his famous quote.3,5

By 1931, at age 72, Planck was reflecting on quantum mechanics’ philosophical ramifications. The interview in The Observer captured his mature view: matter derives from consciousness, not vice versa. This idealist stance contrasted with contemporaries like Einstein, who favoured a deterministic universe, yet aligned with Planck’s belief in a ‘conscious and intelligent Mind’ as the force binding atomic particles.3,5

The Context of the Quote: Quantum Revolution and Metaphysical Stirrings

The quote emerged during a period of crisis in physics. Quantum mechanics, propelled by Planck’s quanta, Heisenberg’s uncertainty principle, and Schrödinger’s wave equation, shattered classical determinism. Reality at the subatomic level appeared probabilistic and observer-dependent – raising profound questions about observation’s role. Planck, who reluctantly accepted these implications, saw consciousness not as a quantum byproduct but as fundamental.4,5

In the interview, Planck addressed the ‘reality crisis’: if physical laws are mental constructs, what grounds existence? His response prioritised consciousness as the irreducible starting point, influencing later debates in quantum interpretation, such as the Copenhagen interpretation where measurement (tied to observation) collapses the wave function.3

Leading Theorists on Consciousness and Matter

Planck’s views resonate with a lineage of thinkers bridging physics, philosophy, and metaphysics. Here are key figures whose ideas shaped or paralleled his:

  • Immanuel Kant (1724-1804): The German philosopher posited that space, time, and causality are a priori structures of the mind, not properties of things-in-themselves. Planck echoed this by insisting we cannot ‘get behind consciousness’ to access unmediated reality.3
  • Ernst Mach (1838-1916): Planck’s early influence, Mach advocated ‘economical descriptions’ of phenomena, rejecting absolute space and atoms as metaphysical. His positivism nudged Planck towards quantum ideas but clashed with Planck’s later spiritual realism.5
  • Arthur Eddington (1882-1944): The British astrophysicist, like Planck, argued in The Nature of the Physical World (1928) that the mind constructs physical laws. He quipped, ‘We have found a strange footprint on the shores of the unknown,’ mirroring Planck’s consciousness primacy.5
  • Werner Heisenberg (1901-1976): Heisenberg’s uncertainty principle highlighted the observer’s role in quantum measurement. Though more agnostic than Planck, he noted in Physics and Philosophy (1958) that quantum theory demands a ‘sharper formulation of the concept of reality’, aligning with Planck’s critique.3
  • David Bohm (1917-1992): Later, Bohm developed implicate order theory, positing a holistic reality where consciousness and matter interpenetrate – directly inspired by Planck’s ‘matrix of all matter’ as a conscious mind.5

These theorists, from Kantian idealism to quantum pioneers, form the intellectual backdrop. Planck stands out for wedding rigorous physics with unapologetic metaphysics, suggesting science’s foundations rest on conscious postulate.1,3,5

Enduring Relevance

Planck’s declaration prefigures modern discussions in philosophy of mind, panpsychism, and quantum consciousness theories (e.g., by Roger Penrose and Stuart Hameroff). It invites reflection: if consciousness is fundamental, how does this reshape our understanding of the universe, free will, and even artificial intelligence? As Planck implied, all inquiry begins – and ends – with the mind.4,5

References

1. https://libquotes.com/max-planck/quote/lbm8d8r

2. https://www.quotescosmos.com/quotes/Max-Planck-quote-1.html

3. https://en.wikiquote.org/wiki/Max_Planck

4. https://bigthink.com/words-of-wisdom/max-planck-i-regard-consciousness-as-fundamental/

5. https://www.informationphilosopher.com/solutions/scientists/planck/

6. https://todayinsci.com/P/Planck_Max/PlanckMax-Quotations.htm

"I regard consciousness as fundamental. I regard matter as derivative from consciousness. We cannot get behind consciousness. Everything that we talk about, everything that we regard as existing, postulates consciousness." - Quote: Max Planck - Nobel laureate


Term: Tokenisation

“Tokenisation is the process of converting sensitive data or real-world assets into non-sensitive, unique digital identifiers (tokens) for secure use, commonly seen in data security (replacing credit card numbers with tokens) or blockchain (representing assets like real estate as digital tokens).” – Tokenisation

Tokenisation is the process of replacing sensitive data or real-world assets with non-sensitive, unique digital identifiers called tokens. These tokens have no intrinsic value or meaning outside their specific context, ensuring security in data handling or asset representation on blockchain networks.

In data security, tokenisation substitutes sensitive information like credit card numbers with tokens stored in secure vaults, allowing safe processing without exposing originals. This meets standards such as PCI DSS, GDPR, and HIPAA, reducing breach risks as stolen tokens are useless without vault access.

In blockchain and crypto, it converts assets like real estate, artwork, or shares into digital tokens on a blockchain, enabling fractional ownership, trading, and custody while linking to the physical asset in secure facilities.

How Tokenisation Works

Tokenisation typically involves three parties: the data or asset owner, an intermediary (e.g., a merchant), and a secure vault provider. Sensitive data is sent to the vault, replaced by a unique token, and the original is discarded or stored securely. Tokens preserve data format and length for system compatibility, unlike encryption, which alters them. A minimal illustrative sketch of the vaulted approach follows the list below.

  • Vaulted Tokenisation: Original data stays in a central vault; tokens are de-tokenised only when needed within the vault.
  • Format-Preserving: Tokens match original data structure for seamless integration.
  • Blockchain Tokenisation: Assets are represented by tokens on networks like Ethereum, with compliance and custody mechanisms.
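The following Python sketch illustrates the vaulted, format-preserving idea described above. It is purely illustrative: the in-memory dictionary stands in for a hardened vault service, and the `TokenVault`, `tokenise_pan` and `detokenise` names are assumptions for this example, not any vendor’s API.

```python
import secrets

class TokenVault:
    """Toy vault: maps tokens back to original card numbers (PANs).
    A real vault would be a hardened, access-controlled service."""

    def __init__(self) -> None:
        self._store = {}   # token -> original PAN

    def tokenise_pan(self, pan: str) -> str:
        # Format-preserving: same length, digits only, so downstream systems
        # that expect a 16-digit card field keep working unchanged.
        token = "".join(secrets.choice("0123456789") for _ in pan)
        self._store[token] = pan
        return token

    def detokenise(self, token: str) -> str:
        # Only code with vault access can recover the original value;
        # a stolen token on its own is useless.
        return self._store[token]

if __name__ == "__main__":
    vault = TokenVault()
    token = vault.tokenise_pan("4111111111111111")
    print("Merchant stores only:", token)
    print("Vault recovers:", vault.detokenise(token))
```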

Benefits of Tokenisation

  • Enhanced security against breaches and insider threats.
  • Regulatory compliance with reduced audit scope.
  • Improved performance via smaller token sizes.
  • Data anonymisation for analytics and AI/ML.
  • Flexibility across cloud, on-premises, and hybrid setups.

Key Theorist: Don Tapscott

Don Tapscott, a pioneering strategist in digital economics and blockchain, is closely linked to asset tokenisation through his co-authorship of Blockchain Revolution (2016). With Alex Tapscott, he popularised the concept of tokenising real-world assets, arguing it democratises finance by enabling fractional ownership and liquidity for illiquid assets like property.

Born in 1947 in Canada, Tapscott began as a management consultant, authoring bestsellers like The Digital Economy (1995), which foresaw internet-driven business shifts. He founded the Tapscott Group and New Paradigm, advising firms and governments. His blockchain work critiques centralised finance, promoting decentralised ledgers for transparency. As Chair of the Blockchain Research Institute, he influences policy, with tokenisation central to his vision of a ‘token economy’ transforming global markets.

References

1. https://brave.com/glossary/tokenization/

2. https://entro.security/glossary/tokenization/

3. https://www.fortra.com/blog/what-data-tokenization-key-concepts-and-benefits

4. https://www.fortanix.com/faq/tokenization/data-tokenization

5. https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-tokenization

6. https://www.ibm.com/think/topics/tokenization

7. https://www.keyivr.com/us/knowledge/guides/guide-what-is-tokenization/

8. https://chain.link/education-hub/tokenization

"Tokenisation is the process of converting sensitive data or real-world assets into non-sensitive, unique digital identifiers (tokens) for secure use, commonly seen in data security (replacing credit card numbers with tokens) or blockchain (representing assets like real estate as digital tokens)." - Term: Tokenisation


Quote: Nate B Jones

“The pleasant surprise is how much you can accomplish when you properly harness your agents, and how big companies are leaning in and able to actually get volume done on that basis.” – Nate B Jones – AI News & Strategy Daily

Context of the Quote

This quote from Nate B Jones captures a pivotal moment in the evolution of AI agents within enterprise settings. Delivered in his AI News & Strategy Daily series, it highlights the unexpected productivity gains when organisations implement AI agents correctly. Jones emphasises that major firms like JP Morgan and Walmart are already deploying these systems at scale, achieving high-volume outputs that traditional software cycles could not match1,2. The core insight is that proper orchestration – combining AI with human oversight – unlocks disproportionate value, countering the hype-driven delays many companies face.

Backstory on Nate B Jones

Nate B Jones is a leading voice in enterprise AI strategy, known for his pragmatic frameworks that guide businesses from AI hype to production deployment. Through his platform natebjones.com and Substack newsletter Nate’s Newsletter, he distils complex AI developments into actionable insights for executives1,2,7. Jones produces daily video briefings like AI News & Strategy Daily, where he analyses real-world use cases, warns against common pitfalls such as over-reliance on unproven models, and provides custom prompts for rapid agent prototyping2,4.

His work focuses on bridging the gap between AI potential and enterprise reality. For instance, he critiques the ‘human throttle’ – where hesitation and risk aversion limit agent autonomy – and advocates for decision infrastructure like audit logs and reversible processes to build trust3. Jones has documented production AI agents at scale, urging leaders to act swiftly as competitors gain ‘durable advantage’ through accumulated institutional intelligence2. His library of use cases spans finance (e.g., JP Morgan’s choreographed workflows) to operations, emphasising that agents excel in ‘level four’ tasks: AI drafts, humans review, then AI proceeds1. By October 2025, his briefings were already forecasting 2026 as a year of job-by-job AI transformation5.
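As an illustration of the ‘level four’ pattern and the audit-log idea mentioned above, here is a minimal Python sketch. The `draft_with_llm` helper is a stand-in for a real model call, and the workflow shape is a simplification for this example, not Jones’s published code.

```python
from datetime import datetime, timezone

audit_log = []   # append-only record that makes the agent's actions observable

def log(step: str, detail: str) -> None:
    audit_log.append({"ts": datetime.now(timezone.utc).isoformat(),
                      "step": step, "detail": detail})

def draft_with_llm(task: str) -> str:
    # Placeholder for a real model call (hypothetical helper name).
    log("draft", f"agent drafted output for: {task}")
    return f"DRAFT response to '{task}'"

def human_review(draft: str) -> bool:
    # 'Level four': a human approves before the agent proceeds.
    log("review", f"human reviewed: {draft[:40]}")
    return True   # stubbed approval; in practice a genuine review gate

def run_task(task: str) -> None:
    draft = draft_with_llm(task)
    if human_review(draft):
        log("proceed", "approved output dispatched via a reversible step")
    else:
        log("halt", "draft rejected; agent does not proceed")

if __name__ == "__main__":
    run_task("summarise supplier contracts for renewal risk")
    print(*audit_log, sep="\n")
```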

Leading Theorists and the Subject of AI Agents

AI agents – autonomous systems that perceive, reason, act, and learn to achieve goals – represent a shift from passive tools to proactive workflows. Nate B Jones builds on foundational work by key theorists:

  • Stuart Russell and Peter Norvig: Pioneers of modern AI, their textbook Artificial Intelligence: A Modern Approach defines rational agents as entities maximising expected utility in dynamic environments (the standard formulation is sketched after this list). This underpins Jones’s emphasis on structured autonomy over raw intelligence1,3.
  • Andrew Ng: Dubbed the ‘Godfather of AI,’ Ng popularised agentic workflows at Stanford and through Landing AI. He advocates ‘agentic reasoning,’ where AI chains tools and decisions, aligning with Jones’s production playbooks for enterprises like Walmart2.
  • Yohei Nakajima: Creator of BabyAGI (2023), an early open-source agent framework that demonstrated recursive task decomposition. This inspired Jones’s warnings against hype, stressing expert-designed workflows for complex problems1,4.
  • Anthropic Researchers: Their work on Constitutional AI and agent patterns (e.g., long-running memory) informs Jones’s analyses of scalable agents, as seen in his breakdowns of reliable architectures6.
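The ‘rational agent’ idea in the first bullet above is conventionally written as choosing the action with maximal expected utility; the expression below follows the standard textbook treatment rather than anything specific to Jones’s material.

```latex
\[
a^{*} \;=\; \arg\max_{a \in A} \; \sum_{s'} P(s' \mid s, a)\, U(s')
\]
```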

Jones synthesises these ideas into enterprise strategy, arguing that agents are not future tech but ‘production infrastructure now.’ He counters delays by outlining six principles for quick builds (days or weeks), including context-aware prompts and risk-mitigated deployment2. This positions him as a practitioner-theorist, translating academic foundations into C-suite playbooks amid the 2025-2026 agent revolution.

Broader Implications for Workflows

Jones’s quote underscores a paradigm shift: AI agents amplify top human talent, making them ‘more fingertippy’ rather than replacing them1. Big companies succeed by ‘leaning in’ – auditing processes, building observability, and iterating fast – yielding volume at scale. For leaders, the message is clear: harness agents properly, or risk irreversible competitive lag2,3.

References

1. https://www.youtube.com/watch?v=obqjIoKaqdM

2. https://natesnewsletter.substack.com/p/executive-briefing-your-2025-ai-agent

3. https://www.youtube.com/watch?v=7NjtPH8VMAU

4. https://www.youtube.com/watch?v=1FKxyPAJ2Ok

5. https://natesnewsletter.substack.com/p/2026-sneak-peek-the-first-job-by-9ac

6. https://www.youtube.com/watch?v=xNcEgqzlPqs

7. https://www.natebjones.com

"The pleasant surprise is how much you can accomplish when you properly harness your agents, and how big companies are leaning in and able to actually get volume done on that basis." - Quote: Nate B Jones


Term: Stablecoin

“A stablecoin is a type of cryptocurrency designed to maintain a stable value, unlike volatile assets like Bitcoin, by pegging its price to a stable reserve asset, usually a fiat currency (like the USD) or a commodity (like gold).” – Stablecoin

What is a Stablecoin?

A stablecoin is a type of cryptocurrency engineered to preserve a consistent value relative to a specified asset, such as a fiat currency (e.g., the US dollar), a commodity (e.g., gold), or a basket of assets, in stark contrast to the high volatility of assets like Bitcoin.

Unlike traditional cryptocurrencies, stablecoins employ stabilisation mechanisms including reserve assets held by custodians or algorithmic protocols that adjust supply and demand to sustain the peg. Fiat-backed stablecoins, the most common variant, mirror money market funds by holding reserves in short-term assets like treasury bonds, commercial paper, or bank deposits. Commodity-backed stablecoins peg to physical assets like gold, while cryptocurrency-backed ones, such as DAI or Wrapped Bitcoin (WBTC), use overcollateralised crypto reserves managed via smart contracts on decentralised networks.

Types of Stablecoins

  • Fiat-backed: Centralised issuers hold equivalent fiat reserves (e.g., USD) to support 1:1 redeemability.
  • Commodity-backed: Pegged to commodities, with issuers maintaining physical reserves.
  • Cryptocurrency-backed: Collateralised by other cryptocurrencies, often overcollateralised to buffer volatility.
  • Algorithmic: Rely on smart contracts to dynamically adjust supply without full reserves, though prone to failure.

Despite the name, stablecoins are not immune to depegging, as evidenced by historical failures amid market stress or redemption pressures, potentially triggering systemic risks akin to fire-sale contagions in traditional finance. They facilitate rapid, low-cost blockchain transactions, serving as a bridge between fiat and crypto ecosystems for payments, settlements, and trading.
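To make the overcollateralisation mechanism concrete, here is a small, illustrative Python sketch of a health check for a crypto-backed position. The 150% minimum ratio and the function names are assumptions chosen for this example, not the parameters of any specific protocol.

```python
def collateral_ratio(collateral_units: float, collateral_price: float,
                     stablecoins_issued: float) -> float:
    """Value of locked collateral divided by stablecoin debt (each coin pegged to $1)."""
    return (collateral_units * collateral_price) / stablecoins_issued

def position_status(ratio: float, minimum: float = 1.5) -> str:
    # Overcollateralised designs liquidate positions that fall below the
    # minimum ratio; that buffer is what defends the peg against volatility.
    return "healthy" if ratio >= minimum else "eligible for liquidation"

if __name__ == "__main__":
    # 10 ETH locked at $2,000 backing 10,000 stablecoins -> 200% ratio.
    r = collateral_ratio(10, 2_000, 10_000)
    print(f"ratio={r:.0%}:", position_status(r))

    # A 40% collateral price drop pushes the same position under the 150% floor.
    r = collateral_ratio(10, 1_200, 10_000)
    print(f"ratio={r:.0%}:", position_status(r))
```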

Regulatory Landscape

Governments worldwide are intensifying oversight due to stablecoins’ growing role in transactions. For instance, Nebraska’s Financial Innovation Act (2021, updated 2024) permits digital asset depositories to issue stablecoins backed by reserves in FDIC-insured institutions.

Key Theorist: Robert Shiller and the Conceptual Foundations

The most relevant strategy theorist linked to stablecoins is Robert Shiller, a Nobel Prize-winning economist whose pioneering work on financial stability, behavioural finance, and asset pricing underpins the economic rationale for pegged digital assets. Shiller’s theories address the volatility that stablecoins explicitly counter, positioning them as practical applications of stabilising speculative markets.

Born in 1946 in Detroit, Michigan, Shiller earned his PhD in economics from MIT in 1972 under advisor Franco Modigliani. He joined Yale University in 1982, where he remains the Sterling Professor of Economics. Shiller gained prominence for developing the Case-Shiller Home Price Index, a leading US housing market benchmark. His seminal book, Irrational Exuberance (2000), presciently warned of the dot-com bubble and later the 2008 financial crisis, critiquing how narratives drive asset bubbles.

Shiller’s relationship to stablecoins stems from his advocacy for financial innovations that mitigate volatility. In works like Finance and the Good Society (2012), he explores stabilising mechanisms such as index funds and derivatives, which parallel stablecoin pegs by tethering values to underlying assets. He has discussed cryptocurrencies in interviews and writings, noting their potential to enhance financial inclusion if stabilised – echoing stablecoins’ design to combine crypto’s efficiency with fiat-like reliability. Shiller’s CAPE (Cyclically Adjusted Price-to-Earnings) ratio exemplifies pegging metrics to long-term fundamentals, a concept mirrored in stablecoin reserves. While not a crypto native, his behavioural insights explain depegging risks from herd mentality, making him the foremost theorist for stablecoin strategy in volatile markets.

References

1. https://en.wikipedia.org/wiki/Stablecoin

2. https://csrc.nist.gov/glossary/term/stablecoin

3. https://www.fidelity.com/learning-center/trading-investing/what-is-a-stablecoin

4. https://www.imf.org/en/publications/fandd/issues/2022/09/basics-crypto-conservative-coins-bains-singh

5. https://klrd.gov/2024/11/15/stablecoin-overview/

6. https://am.jpmorgan.com/us/en/asset-management/adv/insights/market-insights/market-updates/on-the-minds-of-investors/what-is-a-stablecoin/

7. https://www.bankofengland.co.uk/explainers/what-are-stablecoins-and-how-do-they-work

8. https://bvnk.com/blog/stablecoins-vs-bitcoin

9. https://business.cornell.edu/article/2025/08/stablecoins/

"A stablecoin is a type of cryptocurrency designed to maintain a stable value, unlike volatile assets like Bitcoin, by pegging its price to a stable reserve asset, usually a fiat currency (like the USD) or a commodity (like gold)." - Term: Stablecoin


Quote: Jim Simons – Renaissance Technologies founder

“In this business it’s easy to confuse luck with brains.” – Jim Simons – Renaissance Technologies founder

Jim Simons: A Mathematical Outsider Who Conquered Markets

James Harris Simons (1938-2024), founder of Renaissance Technologies, encapsulated the perils of financial overconfidence with his incisive observation: “In this business it’s easy to confuse luck with brains.” This quote underscores a core tenet of quantitative investing: distinguishing genuine predictive signals from random noise in market data1,2,4.

Simons’ Extraordinary Backstory

Born in Brookline, Massachusetts, to a father who worked as a film-industry salesman before running the family’s shoe factory, Simons displayed early mathematical brilliance. He earned a bachelor’s degree from MIT at 20 and a PhD from UC Berkeley by 23, specialising in topology and geometry. His seminal work on the Chern-Simons theory earned him the American Mathematical Society’s Oswald Veblen Prize1,2,3.

Simons taught at MIT and Harvard but felt like an outsider in academia, pursuing side interests in trading soybean futures and launching a Colombian manufacturing venture1. At the Institute for Defense Analyses (IDA), he cracked Soviet codes during the Cold War, honing skills in pattern recognition and data analysis that later fuelled his financial models. Fired for opposing the Vietnam War, he chaired Stony Brook University’s mathematics department, building it into a world-class institution1,2,4.

By his forties, disillusioned with academic constraints and driven by a desire for control after financial setbacks, Simons entered finance. In 1978, he founded Monemetrics (renamed Renaissance Technologies in 1982) in a modest strip mall near Stony Brook. Rejecting Wall Street conventions, he hired mathematicians, physicists, and code-breakers – not MBAs – to exploit market inefficiencies via algorithms2,3,4.

Renaissance Technologies: The Quant Revolution

Renaissance pioneered quantitative trading, using statistical models to predict short-term price movements in stocks, commodities, and currencies. Key hires like Leonard E. Baum (co-creator of the Baum-Welch algorithm for hidden Markov models) and James Ax developed early systems. The Medallion Fund, launched in 1988, became legendary, averaging 66% annual returns before fees over three decades – vastly outperforming benchmarks2,4.
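To give a flavour of the hidden Markov machinery mentioned above, the sketch below implements the standard forward algorithm over two hypothetical market ‘regimes’. The transition and emission probabilities are invented for illustration; Renaissance’s actual models remain private.

```python
# Forward algorithm for a two-state HMM (e.g. 'calm' vs 'volatile' regimes).
# All probabilities here are illustrative, not estimated from real data.
states = ["calm", "volatile"]
start_p = {"calm": 0.8, "volatile": 0.2}
trans_p = {"calm": {"calm": 0.9, "volatile": 0.1},
           "volatile": {"calm": 0.3, "volatile": 0.7}}
emit_p = {"calm": {"up": 0.55, "down": 0.45},
          "volatile": {"up": 0.45, "down": 0.55}}

def forward(observations):
    """Return P(current hidden regime | observed daily moves), normalised."""
    alpha = {s: start_p[s] * emit_p[s][observations[0]] for s in states}
    for obs in observations[1:]:
        alpha = {s: emit_p[s][obs] * sum(alpha[prev] * trans_p[prev][s]
                                         for prev in states)
                 for s in states}
    total = sum(alpha.values())
    return {s: a / total for s, a in alpha.items()}

if __name__ == "__main__":
    posterior = forward(["down", "down", "up", "down", "down"])
    print(posterior)   # probability mass shifts towards the 'volatile' regime
```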

Simons capped Medallion at $10 billion, expelling outsiders by 2005 to preserve edge, while public funds lagged dramatically (e.g., Medallion gained 76% in 2020 amid public fund losses)4. His firm amassed terabytes of data, analysing factors from weather to sunspots, embodying machine learning precursors like pattern-matching across historical market environments4,5. Dubbed the “Quant King,” Simons ranked among the world’s richest at $31.8 billion, yet emphasised collaboration: “My management style has always been to find outstanding people and let them run with the ball”3. He retired as CEO in 2010, with Peter Brown and Robert Mercer succeeding him4.

Context of the Quote

The quote reflects Simons’ philosophy amid Renaissance’s secrecy and success. In an industry rife with survivorship bias – where winners attribute gains to genius while ignoring luck – Simons stressed rigorous statistical validation. His models sought non-random patterns, acknowledging markets’ inherent unpredictability. This humility contrasted with boastful peers, aligning with his outsider ethos and code-breaking rigour1,4.
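The point about confusing luck with brains can be made quantitative with a small simulation: among many managers trading with zero skill, a few will post impressive multi-year records by chance alone. The Python sketch below is a toy illustration of that survivorship effect, with every parameter chosen arbitrarily.

```python
import random
import statistics

def zero_skill_record(n_years: int = 10) -> list:
    # A 'manager' with no edge: each year's return is noise around 0%
    # with 15% annual volatility.
    return [random.gauss(0.0, 0.15) for _ in range(n_years)]

if __name__ == "__main__":
    random.seed(42)
    managers = [zero_skill_record() for _ in range(10_000)]
    means = [statistics.mean(r) for r in managers]
    lucky = sum(1 for m in means if m > 0.10)
    print(f"best zero-skill manager averaged {max(means):.1%} a year over a decade")
    print(f"{lucky} of 10,000 zero-skill managers averaged more than 10% a year")
```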

Leading Theorists in Quantitative Finance and Prediction

  • Leonard E. Baum: Simons’ IDA colleague and Renaissance pioneer. Baum’s hidden Markov models, vital for speech recognition and early machine learning, were adapted to forecast currency trades by modelling sequential market states2,4.
  • James Ax: Stony Brook mathematician who oversaw Baum’s work at Renaissance, advancing algebraic geometry applications to financial signals2,4.
  • Edward Thorp: Precursor quant who applied probability theory to blackjack and options pricing, influencing beat-the-market strategies (though not directly tied to Simons)4.
  • Harry Markowitz: Modern portfolio theory founder (1952), emphasising diversification and risk via mean-variance optimisation-foundational to quant risk models4.
  • Eugene Fama: Efficient Market Hypothesis (EMH) proponent, arguing prices reflect all information, challenging pure prediction but spurring anomaly hunts like Renaissance’s4.

Simons’ legacy endures through the Simons Foundation, funding maths and basic science, and Renaissance’s proof that data-driven science trumps intuition in finance3. His quote remains a sobering reminder in prediction’s high-stakes arena.

References

1. https://www.jermainebrown.org/posts/why-jim-simons-founded-renaissance-technologies

2. https://en.wikipedia.org/wiki/Jim_Simons

3. https://www.simonsfoundation.org/2024/05/10/remembering-the-life-and-careers-of-jim-simons/

4. https://fortune.com/2024/05/10/jim-simons-obituary-renaissance-technologies-quant-king/

5. https://www.youtube.com/watch?v=xkbdZb0UPac

6. https://stockcircle.com/portfolio/jim-simons

7. https://mitsloan.mit.edu/ideas-made-to-matter/quant-pioneer-james-simons-math-money-and-philanthropy

"In this business it’s easy to confuse luck with brains." - Quote: Jim Simons

read more
Quote: Luis Flavio Nunes – Investing.com

Quote: Luis Flavio Nunes – Investing.com

“The crash wasn’t caused by manipulation or panic. It revealed something more troubling: Bitcoin had already become the very thing it promised to destroy.” – Luis Flavio Nunes – Investing.com

The recent Bitcoin crashes of 2025 and early 2026 were not random market events driven by panic or coordinated manipulation. Rather, they exposed a fundamental paradox that has quietly developed as Bitcoin matured from a fringe asset into an institutional investment vehicle. What began as a rebellion against centralised financial systems has, through the mechanisms of modern finance, recreated many of the same structural vulnerabilities that plagued traditional markets.

The Institutional Transformation

Bitcoin’s journey from obscurity to mainstream acceptance represents one of the most remarkable financial transformations of the past decade. When Satoshi Nakamoto released the Bitcoin whitepaper in 2008, the explicit goal was to create “a purely peer-to-peer electronic cash system” that would operate without intermediaries or central authorities. The cryptocurrency was designed as a direct response to the 2008 financial crisis, offering an alternative to institutions that had proven themselves untrustworthy stewards of capital.

Yet by 2025, Bitcoin had become something quite different. Institutional investors, corporations, and even governments began treating it as a store of value and portfolio diversifier. This shift accelerated dramatically following the approval of Bitcoin spot exchange-traded funds (ETFs) in major markets, which legitimised cryptocurrency as an institutional asset class. What followed was an influx of capital that transformed Bitcoin from a peer-to-peer system into something resembling a leveraged financial instrument.

The irony is profound: the very institutions that Bitcoin was designed to circumvent became its largest holders and most active traders. Corporate treasury departments, hedge funds, and financial firms accumulated Bitcoin positions worth tens of billions of dollars. But they did so using the same tools that had destabilised traditional markets-leverage, derivatives, and interconnected financial relationships.

The Digital Asset Treasury Paradox

The clearest manifestation of this contradiction emerged through Digital Asset Treasury Companies (DATCos). These firms, which hold Bitcoin and other cryptocurrencies as corporate treasury assets, accumulated approximately $42 billion in positions by late 2025.1 The appeal was straightforward: Bitcoin offered superior returns compared to traditional treasury instruments, and companies could diversify their cash reserves whilst potentially generating alpha.

However, these positions were not held in isolation. Many DATCos financed their Bitcoin purchases through debt arrangements, creating leverage ratios that would have been familiar to any traditional hedge fund manager. When Bitcoin’s price declined sharply in November 2025, falling to $91,500 and erasing most of the year’s gains, these overleveraged positions went underwater.1 The result was a cascade of forced selling that had nothing to do with Bitcoin’s utility or technology-it was pure financial mechanics.

By mid-November 2025, DATCo losses had reached $1.4 billion, representing a 40% decline in their aggregate positions.1 More troublingly, analysts estimated that if even 10-15% of these positions faced forced liquidation due to debt covenants or modified Net Asset Value (mNAV) pressures, it could trigger $4.3 to $6.4 billion in selling pressure over subsequent weeks.1 For context, this represented roughly double the selling pressure from Bitcoin ETF outflows that had dominated market headlines.
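
A back-of-envelope reconstruction of that estimate, using only the figures quoted above, illustrates the arithmetic (the cited $4.3-6.4 billion range presumably reflects a slightly different or more precise base):

```python
# Rough reconstruction of the forced-selling estimate described above.
# Figures are the article's; the arithmetic itself is purely illustrative.
datco_positions = 42e9                # aggregate DATCo positions, late 2025 (USD)
liquidation_share = (0.10, 0.15)      # share assumed to face forced liquidation

low, high = (datco_positions * share for share in liquidation_share)
print(f"Implied selling pressure: ${low / 1e9:.1f}bn to ${high / 1e9:.1f}bn")
```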

Market Structure and Liquidity Collapse

What made this forced selling particularly destructive was the simultaneous collapse in market liquidity. Bitcoin’s order book depth at the 1% price band-a key measure of market resilience-fell from approximately $20 million in early October to just $14 million by mid-November, a decline of roughly a third that never recovered.1 Analysts described this as a “deliberate reduction in market-making commitment,” suggesting that professional market makers had withdrawn support precisely when it was most needed.

This combination of forced selling and vanishing liquidity created a toxic feedback loop. Relatively small sell orders produced disproportionately large price movements. When prices fell sharply, leveraged positions across the entire crypto ecosystem faced liquidation. On 29 January 2026, Bitcoin crashed from above $88,000 to below $85,000 in minutes, triggering $1.68 billion in forced selling across cryptocurrency markets.5 The speed and violence of these moves bore no relationship to any fundamental change in Bitcoin’s technology or adoption-they were purely mechanical consequences of leverage unwinding in illiquid markets.

The Retail Psychology Amplifier

Institutional forced selling might have been manageable if retail investors had provided offsetting demand. Instead, retail psychology amplified the downward pressure. Many retail investors, armed with historical price charts and belief in Bitcoin’s four-year halving cycle, began selling preemptively to avoid what they anticipated would be a 70-80% drawdown similar to previous market cycles.1

This created a self-fulfilling prophecy. Retail investors, convinced that a crash was coming based on historical patterns, exited their positions voluntarily. This removed the “conviction-based spot demand” that might have absorbed institutional forced selling.1 Instead of a market where buyers stepped in during weakness, there was only a queue of sellers waiting for lower prices. The belief in the cycle became the mechanism that perpetuated it.

The psychological dimension was particularly striking. Reddit communities filled with discussions of Bitcoin falling to $30,000 or lower, with investors citing historical precedent rather than fundamental analysis.1 The narrative had shifted from “Bitcoin is digital gold” to “Bitcoin is a leveraged Nasdaq ETF.” When Bitcoin gained only 4% year-to-date whilst gold rose 29%, and when AI stocks like C3.ai dropped 54% and Bitcoin crashed in sympathy, the pretence of Bitcoin as an independent asset class evaporated.1

The Macro Backdrop and Data Vacuum

These structural vulnerabilities were exacerbated by macroeconomic uncertainty. In October 2025, a U.S. government shutdown resulted in missing economic data, leaving the Federal Reserve, as the White House stated, “flying blind at a critical period.”1 Without Consumer Price Index and employment reports, Fed rate-cut expectations collapsed from 67% to 43% probability.1

Bitcoin, with its 0.85 correlation to dollar liquidity, sold off sharply as investors struggled to price risk in a data vacuum.1 This revealed another uncomfortable truth: Bitcoin’s price movements had become increasingly correlated with traditional financial markets and macroeconomic conditions. The asset that was supposed to be uncorrelated with fiat currency systems now moved in lockstep with Fed policy expectations and dollar liquidity conditions.

Theoretical Foundations: Understanding the Contradiction

To understand how Bitcoin arrived at this paradoxical state, it is useful to examine the theoretical frameworks that shaped both cryptocurrency’s design and its subsequent institutional adoption.

Hayek’s Denationalisation of Money

Friedrich Hayek’s 1976 work “Denationalisation of Money” profoundly influenced Bitcoin’s philosophical foundations. Hayek argued that government monopolies on currency creation were inherently inflationary and economically destructive. He proposed that competition between private currencies would discipline monetary policy and prevent the kind of currency debasement that had plagued the 20th century. Bitcoin’s fixed supply of 21 million coins was a direct implementation of Hayekian principles-a currency that could not be debased through monetary expansion because its supply was mathematically constrained.

However, Hayek’s framework assumed that competing currencies would be held and used by individuals making rational economic decisions. He did not anticipate a world in which Bitcoin would be held primarily by leveraged financial institutions using it as a speculative asset rather than a medium of exchange. When Bitcoin became a vehicle for institutional leverage rather than a tool for individual monetary sovereignty, it violated the core assumption of Hayek’s theory.

Minsky’s Financial Instability Hypothesis

Hyman Minsky’s Financial Instability Hypothesis provides a more prescient framework for understanding Bitcoin’s recent crashes. Minsky argued that capitalist economies are inherently unstable because of the way financial systems evolve. In periods of stability, investors become increasingly confident and willing to take on leverage. This leverage finances investment and consumption, which generates profits that validate the initial optimism. But this very success breeds complacency. Investors begin to underestimate risk, financial institutions relax lending standards, and leverage ratios climb to unsustainable levels.

Eventually, some shock-often minor in itself-triggers a reassessment of risk. Leveraged investors are forced to sell assets to meet margin calls. These sales drive prices down, which triggers further margin calls, creating a cascade of forced selling. Minsky called this the “Minsky Moment,” and it describes precisely what occurred in Bitcoin markets in late 2025 and early 2026.

The tragedy is that Bitcoin’s design was explicitly intended to prevent Minskyan instability. By removing the ability of central banks to expand money supply and by making the currency supply mathematically fixed, Bitcoin was supposed to eliminate the credit cycles that Minsky identified as the source of financial instability. Yet by allowing itself to be financialised through leverage and derivatives, Bitcoin recreated the exact dynamics it was designed to escape.

Kindleberger’s Manias, Panics, and Crashes

Charles Kindleberger’s historical analysis of financial crises identifies a recurring pattern: displacement (a new investment opportunity emerges), euphoria (prices rise as investors become convinced of unlimited upside), financial distress (early investors begin to exit), and finally panic (a rush for the exits as leverage unwinds). Bitcoin’s trajectory from 2020 to 2026 followed this pattern almost precisely.

The displacement occurred with the approval of Bitcoin ETFs and corporate treasury adoption. The euphoria phase saw Bitcoin reach nearly $100,000 as institutions poured capital into the asset. Financial distress emerged when DATCo positions became underwater and forced selling began. The panic phase manifested in the sharp crashes of late 2025 and early 2026, where $1.68 billion in liquidations could occur in minutes.

What Kindleberger’s framework reveals is that these crises are not failures of individual decision-makers but rather inevitable consequences of how financial systems evolve. Once leverage enters the system, instability becomes structural rather than accidental.

The Centralisation of Bitcoin Ownership

Perhaps the most damning aspect of Bitcoin’s institutional transformation is the concentration of ownership. Whilst Bitcoin was designed as a decentralised system where no single entity could control the network, the distribution of Bitcoin wealth has become increasingly concentrated. Large institutional holders, including corporations, hedge funds, and DATCos, now control a substantial portion of all Bitcoin in existence.

This concentration creates a new form of centralisation-not of the protocol itself, but of the economic incentives that drive price discovery. When a small number of large holders face forced selling, their actions dominate price movements. The market becomes less like a peer-to-peer system of millions of independent participants and more like a traditional financial market where large institutions set prices through their trading activity.

The irony is complete: Bitcoin was created to escape the centralised financial system, yet it has become a vehicle through which that same centralised system operates. The institutions that Bitcoin was designed to circumvent are now its largest holders and most influential participants.

What the Crashes Revealed

The crashes of 2025 and early 2026 were not anomalies or temporary setbacks. They were revelations of structural truths about how Bitcoin had evolved. The asset had retained the volatility and speculative characteristics of an emerging technology whilst acquiring the leverage and interconnectedness of traditional financial markets. It had none of the stability of fiat currency systems (which are backed by government power and tax revenue) and none of the decentralisation of its original design (which had been compromised by institutional concentration).

Bitcoin had become, in the words attributed to Luis Flavio Nunes, “the very thing it promised to destroy.” It had recreated the leverage-driven instability of traditional finance, the concentration of economic power in large institutions, and the vulnerability to forced selling that characterises modern financial markets. The only difference was that these dynamics operated at higher speeds and with greater violence due to the 24/7 nature of cryptocurrency markets and the absence of circuit breakers or trading halts.

The question that emerged from these crashes was whether Bitcoin could evolve beyond this contradictory state. Could it return to its original purpose as a peer-to-peer currency system? Could it shed its role as a leveraged speculative asset? Or would it remain trapped in this paradoxical identity-a decentralised system controlled by centralised institutions, a hedge against financial instability that had become a vehicle for financial instability?

These questions remain unresolved as of early 2026, but the crashes have made clear that Bitcoin’s identity crisis is not merely philosophical. It has material consequences for millions of investors and reveals uncomfortable truths about how financial innovation can be absorbed and repurposed by the very systems it was designed to challenge.

References

1. https://uk.investing.com/analysis/bitcoin-encounters-a-hidden-wave-of-selling-from-overleveraged-treasury-firms-200620267

2. https://www.investing.com/analysis/bitcoin-prices-could-stabilize-as-market-searches-for-new-support-levels-200668467

3. https://ca.investing.com/members/contributors/272097941/opinion/2

4. https://www.investing.com/analysis/crypto-bulls-lost-the-wheel-as-bitcoin-and-ethereum-roll-over-200673726

5. https://investing.com/analysis/golds-12-crash-how-17-billion-in-crypto-liquidations-tanked-precious-metals-200674247?ampMode=1

6. https://www.investing.com/members/contributors/272097941/opinion

7. https://www.investing.com/members/contributors/272097941

8. https://www.investing.com/analysis/cryptocurrency

9. https://au.investing.com/analysis/bitcoin-holds-the-line-near-90k-as-macro-pressure-caps-upside-momentum-200611192

10. https://www.investing.com/crypto/bitcoin/bitcoin-futures

“The crash wasn't caused by manipulation or panic. It revealed something more troubling: Bitcoin had already become the very thing it promised to destroy.” - Quote: Luis Flavio Nunes - Investing.com

read more
Term: AI slop

Term: AI slop

“AI slop refers to low-quality, mass-produced digital content (text, images, video, audio, workflows, agents, outputs) generated by artificial intelligence, often with little effort or meaning, designed to pass as social media or pass off cognitive load in the workplace.” – AI slop

AI slop refers to low-quality, mass-produced digital content created using generative artificial intelligence that prioritises speed and volume over substance and quality.1 The term encompasses text, images, video, audio, and workplace outputs designed to exploit attention economics on social media platforms or to offload cognitive effort onto recipients in professional environments through minimal-effort automation.2,3 Coined in the 2020s, AI slop has become synonymous with digital clutter-content that lacks originality, depth, and meaningful insight whilst flooding online spaces with generic, unhelpful material.1

Key Characteristics

AI slop exhibits several defining features that distinguish it from intentionally created content:

  • Vague and generalised information: Content remains surface-level, offering perspectives and insights already widely available without adding novel value or depth.2
  • Repetitive structuring and phrasing: AI-generated material follows predictable patterns-rhythmic structures, uniform sentence lengths, and formulaic organisation that create a distinctly robotic quality.2
  • Lack of original insight: The content regurgitates existing information from training data rather than generating new perspectives, opinions, or analysis that differentiate it from competing material.2
  • Neutral corporate tone: AI slop typically employs bland, impersonal language devoid of distinctive brand voice, personality, or strong viewpoints.2
  • Unearned profundity: Serious narrative transitions and rhetorical devices appear without substantive foundation, creating an illusion of depth.6

Origins and Evolution

The term emerged in the early 2020s as large language models and image diffusion models accelerated the creation of high-volume, low-quality content.1 Early discussions on platforms including 4chan, Hacker News, and YouTube employed “slop” as in-group slang to describe AI-generated material, with alternative terms such as “AI garbage,” “AI pollution,” and “AI-generated dross” proposed by journalists and commentators.1 The 2025 Word of the Year designation by both Merriam-Webster and the American Dialect Society formalised the term’s cultural significance.1

Manifestations Across Contexts

Social Media and Content Creation: Creators exploit attention economics by flooding platforms with low-effort content-clickbait articles with misleading titles, shallow blog posts stuffed with keywords for search engine manipulation, and bizarre imagery designed for engagement rather than authenticity.1,4 Examples range from surreal visual combinations (Jesus made of spaghetti, golden retrievers performing surgery) to manipulative videos created during crises to push particular narratives.1,5

Workplace “Workslop”: A Harvard Business Review study conducted with Stanford University and BetterUp found that 40% of participating employees received AI-generated content that appeared substantive but lacked genuine value, with each incident requiring an average of two hours to resolve.1 This workplace variant demonstrates how AI slop extends beyond public-facing content into professional productivity systems.
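
To see how such incidents translate into cost, a hedged back-of-envelope calculation might look like the following; the salary and frequency figures below are invented assumptions, not the study’s own model.

```python
# Illustrative "workslop" cost sketch using the study's headline figures
# plus invented assumptions for frequency and fully loaded hourly cost.
share_receiving = 0.40        # employees receiving workslop (per the study)
hours_per_incident = 2.0      # average time to resolve each incident (per the study)
incidents_per_month = 1       # assumed frequency (invented)
hourly_cost = 40.0            # assumed fully loaded hourly cost in USD (invented)

monthly_drag = share_receiving * incidents_per_month * hours_per_incident * hourly_cost
print(f"Implied drag: ~${monthly_drag:.0f} per employee per month")
```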

Societal Impact

AI slop creates several interconnected problems. It displaces higher-quality material that could provide genuine utility, making it harder for original creators to earn citations and audience attention.2 The homogenised nature of mass-produced AI content-where competitors’ material sounds identical-eliminates differentiation and creates forgettable experiences that fail to connect authentically with audiences.2 Search engines increasingly struggle with content quality degradation, whilst platforms face challenges distinguishing intentional human creativity from synthetic filler.3

Mitigation Strategies

Organisations seeking to avoid creating AI slop should employ several practices: develop extremely specific prompts grounded in detailed brand voice guidelines and examples; structure reusable prompts with clear goals and constraints; and maintain rigorous human oversight for fact-checking and accuracy verification.2 The fundamental antidote remains cultivating specificity rooted in particular knowledge, tangible experience, and distinctive perspective.6
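
A minimal sketch of the ‘reusable prompt with clear goals and constraints’ idea follows; every field name and example value is invented for illustration, and the human fact-checking step is represented only as an instruction in the rendered prompt.

```python
from dataclasses import dataclass

@dataclass
class PromptTemplate:
    goal: str
    audience: str
    voice: str
    constraints: list[str]

    def render(self, topic: str) -> str:
        rules = "\n".join(f"- {c}" for c in self.constraints)
        return (
            f"Goal: {self.goal}\n"
            f"Audience: {self.audience}\n"
            f"Voice: {self.voice}\n"
            f"Topic: {topic}\n"
            f"Constraints:\n{rules}\n"
            "Flag any claim you cannot source, for human fact-checking."
        )

briefing = PromptTemplate(
    goal="Draft a 300-word briefing containing at least one original insight",
    audience="Operations directors evaluating automation tools",
    voice="Plain, specific, first person plural; no filler phrases",
    constraints=[
        "Cite a named source for every statistic",
        "No generic openings such as 'In today's fast-paced world'",
    ],
)
print(briefing.render("AI-generated reports in internal workflows"))
```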

Related Theorist: Jonathan Gilmore

Jonathan Gilmore, a philosophy professor at the City University of New York, has emerged as a key intellectual voice in analysing AI slop’s cultural and epistemological implications. Gilmore characterises AI-generated material as possessing an “incredibly banal, realistic style” that is deceptively easy for viewers to process, masking its fundamental lack of substance.1

Gilmore’s contribution to understanding AI slop extends beyond mere description into philosophical territory. His work examines how AI-generated content exploits cognitive biases-our tendency to accept information that appears professionally formatted and realistic, even when it lacks genuine insight or originality. This observation proves particularly significant in an era where visual and textual authenticity no longer correlates reliably with truthfulness or value.

By framing AI slop through a philosophical lens, Gilmore highlights a deeper cultural problem: the erosion of epistemic standards in digital spaces. His analysis suggests that AI slop represents not merely a technical problem requiring better filters, but a fundamental challenge to how societies evaluate knowledge, authenticity, and meaningful communication. Gilmore’s work encourages critical examination of the systems and incentive structures that reward volume and speed over depth and truth-a perspective essential for understanding why AI slop proliferates despite its obvious deficiencies.

References

1. https://en.wikipedia.org/wiki/AI_slop

2. https://www.seo.com/blog/ai-slop/

3. https://www.livescience.com/technology/artificial-intelligence/ai-slop-is-on-the-rise-what-does-it-mean-for-how-we-use-the-internet

4. https://edrm.net/2024/07/the-new-term-slop-joins-spam-in-our-vocabulary/

5. https://www.theringer.com/2025/12/17/pop-culture/ai-slop-meaning-meme-examples-images-word-of-the-year

6. https://www.ignorance.ai/p/the-field-guide-to-ai-slop

"AI slop refers to low-quality, mass-produced digital content (text, images, video, audio, workflows, agents, outputs) generated by artificial intelligence, often with little effort or meaning, designed to pass as social media or pass off cognitive load in the workplace." - Term: AI slop

read more
Quote: Jim Simons

Quote: Jim Simons

“One can predict the course of a comet more easily than one can predict the course of Citigroup’s stock. The attractiveness, of course, is that you can make more money successfully predicting a stock than you can a comet.” – Jim Simons – Renaissance Technologies founder

Jim Simons’ observation that “one can predict the course of a comet more easily than one can predict the course of Citigroup’s stock” encapsulates a profound paradox at the heart of modern finance. Yet Simons himself spent a lifetime proving that this apparent unpredictability could be systematically exploited through mathematical rigour. The quote reflects both the genuine complexity of financial markets and the tantalising opportunity they present to those equipped with the right intellectual tools.

Simons made this observation as the founder of Renaissance Technologies, the quantitative hedge fund that would become one of the most successful investment firms in history. The statement reveals his pragmatic philosophy: whilst comets follow the deterministic laws of celestial mechanics, stock prices are influenced by countless human decisions, emotions, and unforeseen events. Yet this very complexity-this apparent chaos-creates inefficiencies that a sufficiently sophisticated mathematical model can exploit for profit.

Jim Simons: The Mathematician Who Decoded Markets

James Harris Simons (1938-2024) was born in Newton, Massachusetts, and demonstrated an early affinity for mathematics that would define his extraordinary career. He earned his Ph.D. in mathematics from the University of California, Berkeley at the remarkably young age of 23, establishing himself as a prodigy in pure mathematics before his unconventional path led him toward finance.

Simons’ early career trajectory was marked by intellectual distinction across multiple domains. He taught mathematics at the Massachusetts Institute of Technology and Harvard University, where he worked alongside some of the finest minds in academia. Between 1964 and 1968, he served on the research staff of the Communications Research Division of the Institute for Defense Analyses, where he contributed to classified cryptographic work, including efforts to break Soviet codes. In 1973, IBM enlisted his expertise to attack Lucifer, an early precursor to the Data Encryption Standard-work that demonstrated his ability to apply mathematical thinking to real-world security challenges.

From 1968 to 1978, Simons chaired the mathematics department at Stony Brook University, building it from scratch into a respected institution. He received the American Mathematical Society’s Oswald Veblen Prize in Geometry, one of the highest honours in his field. By conventional measures, he had achieved the pinnacle of academic success.

Yet Simons harboured interests that set him apart from his peers. He traded stocks and dabbled in soybean futures whilst at Berkeley, and he maintained a fascination with business and finance that his academic colleagues did not share. In interviews, he reflected on feeling like “something of an outsider” throughout his career-immersed in mathematics but never quite feeling like a full member of the academic community. This sense of not fitting into conventional boxes would prove formative.

The Catalyst: Control, Ambition, and the Vietnam War

Simons’ transition from academia to finance was precipitated by both personal circumstances and philosophical conviction. In 1966, he published an article in Newsweek opposing the Vietnam War, a public stance that led to his dismissal from the Institute for Defense Analyses. With three young children and significant debts-he had borrowed money to invest in a manufacturing venture in Colombia-this abrupt termination shook him profoundly. The experience crystallised his realisation that he lacked control over his own destiny when working within established institutions.

This episode proved transformative. Simons came to understand that financial independence equated to autonomy and power. He needed an environment where he could pursue his diverse interests-entrepreneurship, markets, and mathematics-simultaneously. No such environment existed within academia or traditional finance. Therefore, he would create one.

The Birth of Renaissance Technologies: 1978

In 1978, Simons left Stony Brook University to found Monemetrics (renamed Renaissance Technologies in 1982) in a modest strip mall near Stony Brook. The venture began with false starts, but Simons possessed a crucial insight: it should be possible to construct mathematical models of market data to identify profitable trading patterns.

This represented a radical departure from Wall Street convention. Rather than hiring experienced traders and financial professionals, Simons recruited mathematicians, physicists, and computer scientists-individuals of exceptional intellectual calibre who had never worked in finance. As he explained to California magazine: “We didn’t hire anyone who had worked on Wall Street before. We hired people who were very good scientists but who wanted to try something different. And make more money if it worked out.”

This hiring philosophy became Renaissance’s “secret sauce.” Simons assembled a team that included Leonard E. Baum and James Ax, mathematicians of the highest order. These scientists approached markets not as traders seeking intuitive edge, but as researchers seeking to identify statistical patterns and anomalies in vast datasets. They applied techniques from information theory, signal processing, and statistical analysis to construct algorithms that could identify and exploit market inefficiencies.

The Medallion Fund: Unprecedented Success

In 1988, Renaissance established the Medallion Fund, a closed investment vehicle that would become the most profitable hedge fund in history. Between its inception in 1988 and 2018, the Medallion Fund generated over $100 billion in trading profits, achieving a 66.1% average gross annual return (or 39.1% net of fees). These figures are without parallel in investment history. For context, Warren Buffett’s Berkshire Hathaway-widely regarded as the gold standard of long-term investing-has achieved approximately 20% annualised returns over decades.
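
Purely as arithmetic, compounding the quoted averages shows why those figures are described as without parallel. This is a simplification: Medallion’s profits were regularly distributed rather than left to compound, and the fund’s capacity was deliberately capped.

```python
# Illustrative compounding comparison using the average-return figures quoted
# above; ignores distributions, capacity limits, and volatility drag.
years = 30
for label, annual in [("Medallion gross (66.1%)", 0.661),
                      ("Medallion net (39.1%)", 0.391),
                      ("~20% benchmark", 0.20)]:
    growth = (1 + annual) ** years
    print(f"{label:24s}: $1 grows to ${growth:,.0f} over {years} years")
```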

The Medallion Fund’s success vindicated Simons’ core thesis: whilst individual stock movements may appear random and unpredictable, patterns exist within the noise. By applying sophisticated mathematical models to vast quantities of market data, these patterns could be identified and exploited systematically. The fund’s returns were not the product of luck or market timing, but of rigorous scientific methodology applied to financial data.

Renaissance Technologies also managed three additional funds open to outside investors-the Renaissance Institutional Equities Fund, Renaissance Institutional Diversified Alpha, and Renaissance Institutional Diversified Global Equity Fund-which collectively managed approximately $55 billion in assets as of 2019.

The Theoretical Foundations: Quantitative Finance and Market Microstructure

Simons’ success emerged from a convergence of theoretical advances and technological capability. The intellectual foundations for quantitative finance had been developing throughout the twentieth century, though Simons and Renaissance were among the first to apply these theories systematically at scale.

Eugene Fama and the Efficient Market Hypothesis

Eugene Fama’s Efficient Market Hypothesis (EMH), developed in the 1960s, posited that asset prices fully reflect all available information, making it impossible to consistently outperform the market through analysis. If markets were truly efficient, Simons’ entire enterprise would be theoretically impossible. Yet Simons’ empirical results demonstrated that markets contained exploitable inefficiencies-what economists would later term “market anomalies.” Rather than accepting EMH as gospel, Simons treated it as a hypothesis to be tested against data. His success suggested that whilst markets were broadly efficient, they were not perfectly so, and the gaps could be identified through rigorous statistical analysis.

Harry Markowitz and Modern Portfolio Theory

Harry Markowitz’s pioneering work on portfolio optimisation in the 1950s established the mathematical framework for understanding risk and return. Markowitz demonstrated that investors could construct optimal portfolios by balancing expected returns against volatility, measured as standard deviation. Renaissance built upon this foundation, but extended it dramatically. Whilst Markowitz’s approach was largely static, Renaissance employed dynamic models that continuously adjusted positions based on evolving market conditions and statistical signals.
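
A minimal mean-variance sketch, with invented assets and figures, shows the basic Markowitz calculation that such dynamic models extend.

```python
import numpy as np

# Illustrative three-asset mean-variance optimisation (all inputs invented):
# the unconstrained optimum of  max_w  w.mu - (lambda/2) w.Sigma.w  is
# w* = (1/lambda) Sigma^-1 mu, rescaled here to a fully invested portfolio.
mu = np.array([0.08, 0.05, 0.03])                 # expected annual returns
Sigma = np.array([[0.040, 0.010, 0.002],
                  [0.010, 0.020, 0.001],
                  [0.002, 0.001, 0.010]])         # return covariance matrix
risk_aversion = 4.0

w = np.linalg.solve(risk_aversion * Sigma, mu)    # (1/lambda) * Sigma^-1 * mu
w = w / w.sum()                                   # normalise weights to sum to 1

print("weights:", np.round(w, 3))
print("expected return:", round(float(w @ mu), 4))
print("volatility:", round(float(np.sqrt(w @ Sigma @ w)), 4))
```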

Statistical Arbitrage and Market Microstructure

Renaissance’s core methodology centred on statistical arbitrage-identifying pairs or groups of securities whose prices had deviated from their historical relationships, then betting that these relationships would revert to equilibrium. This required deep understanding of market microstructure: the mechanics of how prices form, how information propagates through markets, and how trading activity itself influences prices. Simons and his team studied these phenomena with the rigour of physicists studying natural systems.
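
A toy version of the pairs-trading logic described above, run on simulated prices, flags a trade only when the spread between two related securities drifts well beyond its historical norm. It is a sketch of the general statistical-arbitrage idea, not Renaissance’s methodology.

```python
import numpy as np

# Toy statistical-arbitrage signal (all prices simulated): two securities that
# share a common driver, with a trade flagged when their spread sits more than
# two standard deviations from its historical mean.
rng = np.random.default_rng(1)
common = np.cumsum(rng.normal(0, 1, 500))          # shared price driver
price_a = 100 + common + rng.normal(0, 0.5, 500)
price_b = 100 + common + rng.normal(0, 0.5, 500)

spread = price_a - price_b
z = (spread[-1] - spread.mean()) / spread.std()

if z > 2:
    signal = "short A / long B, expecting the spread to revert"
elif z < -2:
    signal = "long A / short B, expecting the spread to revert"
else:
    signal = "no trade"
print(f"spread z-score {z:+.2f}: {signal}")
```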

Information Theory and Signal Processing

Simons’ background in cryptography and information theory proved invaluable. Just as cryptographers extract meaningful signals from noise, Renaissance’s algorithms extracted trading signals from the apparent randomness of price movements. The team applied techniques from signal processing-originally developed for telecommunications and radar-to identify patterns in financial data that others overlooked.

The Philosophical Implications of Simons’ Quote

Simons’ observation about comets versus stocks reflects a deeper philosophical position about the nature of complexity and predictability. Comets follow deterministic equations derived from Newton’s laws of motion and gravitation. Their trajectories are, in principle, perfectly predictable given sufficient initial conditions. Yet they are also distant, their behaviour unaffected by human activity.

Stock prices, by contrast, emerge from the aggregated decisions of millions of participants acting on incomplete information, subject to psychological biases, and influenced by unpredictable events. This apparent chaos seems to defy prediction. Yet Simons recognised that this very complexity creates opportunity. The inefficiencies that arise from human psychology, information asymmetries, and market structure are precisely what quantitative models can exploit.

The quote also embodies Simons’ pragmatism. He was not interested in predicting stocks with perfect accuracy-an impossible task. Rather, he sought to identify statistical edges: situations where the probability distribution of future returns was sufficiently favourable to generate consistent profits over time. This is fundamentally different from prediction in the deterministic sense. It is prediction in the probabilistic sense-identifying where odds favour the investor.
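
The difference between deterministic prediction and a statistical edge can be made concrete with a toy expected-value calculation; the odds below are invented, and real edges are smaller, costlier to capture, and harder to sustain.

```python
# Toy expected-value sketch of a "statistical edge": a strategy that wins 52%
# of the time with symmetric 1% gains and losses, ignoring costs and drawdowns.
p_win, gain, loss = 0.52, 0.01, 0.01
edge = p_win * gain - (1 - p_win) * loss           # roughly +0.04% per trade
print(f"edge per trade: {edge:.4%}")
print(f"naive compounding over 10,000 trades: {(1 + edge) ** 10_000:.0f}x")
```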

Legacy and Impact on Finance

Simons’ success catalysed a revolution in finance. The quantitative approach that Renaissance pioneered has become increasingly dominant. Today, algorithmic and quantitative trading account for a substantial portion of market activity. Universities have established entire programmes in financial engineering and computational finance. The intellectual framework that Simons helped develop-treating markets as complex systems amenable to mathematical analysis-has become orthodoxy.

In 2006, Simons was named Financial Engineer of the Year by the International Association of Financial Engineers, recognition of his transformative impact on the field. His personal wealth accumulated accordingly: in 2020, he was estimated to have earned $2.6 billion, making him one of the highest-earning individuals in finance.

Yet Simons’ later life demonstrated that his intellectual curiosity extended far beyond finance. After retiring as chief executive officer of Renaissance Technologies in 2010, he devoted himself increasingly to the Simons Foundation, which he and his wife Marilyn had established. The foundation has become one of the world’s leading supporters of fundamental scientific research, funding work in mathematics, theoretical physics, computer science, and biology. In 2012, Simons convened a seminar bringing together leading scientists from diverse fields, which led to the creation of Simons Collaborations-programmes supporting interdisciplinary research on fundamental questions about the nature of reality and life itself.

In 2004, Simons founded Math for America, a nonprofit organisation dedicated to improving mathematics education in American public schools by recruiting and supporting highly qualified teachers. This initiative reflected his conviction that mathematical literacy is foundational to scientific progress and economic competitiveness.

Conclusion: The Outsider Who Built a New World

Jim Simons’ career exemplifies the power of intellectual courage and the willingness to challenge established paradigms. He was, by his own admission, an outsider-never quite fitting into the boxes that academia and conventional finance offered. Rather than accepting these constraints, he created an entirely new environment where his diverse talents could flourish: a place where pure mathematics, empirical data analysis, and financial markets intersected.

His observation about comets and stocks captures this perfectly. Whilst others accepted that stock markets were fundamentally unpredictable, Simons saw opportunity in complexity. He assembled a team of the world’s finest scientists and tasked them with finding patterns in apparent chaos. The result was not merely financial success, but a transformation of how finance itself is understood and practised.

Simons passed away on 10 May 2024, at the age of 86, leaving behind a legacy that extends far beyond Renaissance Technologies. He demonstrated that intellectual rigour, scientific methodology, and collaborative excellence can generate both extraordinary financial returns and profound contributions to human knowledge. His life stands as a testament to the proposition that the greatest opportunities often lie at the intersection of disciplines, and that those willing to think differently can reshape entire fields.

References

1. https://www.jermainebrown.org/posts/why-jim-simons-founded-renaissance-technologies

2. https://en.wikipedia.org/wiki/Jim_Simons

3. https://inspire.berkeley.edu/p/promise-spring-2016/jim-simons-life-left-turns/

4. https://www.simonsfoundation.org/2024/05/10/remembering-the-life-and-careers-of-jim-simons/

5. https://today.ucsd.edu/story/jim-simons

6. https://news.stonybrook.edu/university/jim-simons-a-life-of-scholarship-leadership-and-philanthropy/

"One can predict the course of a comet more easily than one can predict the course of Citigroup’s stock. The attractiveness, of course, is that you can make more money successfully predicting a stock than you can a comet." - Quote: Jim Simons

read more
Quote: Andrew Ng – AI guru. Coursera founder

Quote: Andrew Ng – AI guru. Coursera founder

“I find that we’ve done this “let a thousand flowers bloom” bottom-up [AI] innovation thing, and for the most part, it’s led to a lot of nice little things but nothing transformative for businesses.” – Andrew Ng – AI guru, Coursera founder

In a candid reflection at the World Economic Forum 2026 session titled ‘Corporate Ladders, AI Reshuffled,’ Andrew Ng critiques the prevailing ‘let a thousand flowers bloom’ approach to AI innovation. He argues that while this bottom-up strategy has produced numerous incremental tools, it falls short of delivering the profound business transformations required in today’s competitive landscape1,3,4. This perspective emerges from Ng’s deep immersion in AI’s evolution, where he observes a landscape brimming with potential yet hampered by fragmented efforts.

Andrew Ng: The Architect of Modern AI Education and Research

Andrew Ng stands as one of the foremost figures in artificial intelligence, often dubbed an ‘AI guru’ for his pioneering contributions. A British-born computer scientist, Ng co-founded Coursera in 2012, revolutionising online education by making high-quality courses accessible worldwide, with a focus on machine learning and AI1,4. Prior to that, he led the Google Brain project from 2011 to 2012, establishing one of the first large-scale deep learning initiatives that laid foundational work for advancements now powering Google DeepMind1.

Today, Ng heads DeepLearning.AI, offering practical AI training programmes, and serves as managing general partner at AI Fund, investing in transformative AI startups. His career also includes professorships at Stanford University and Baidu’s chief scientist role, where he scaled AI applications in China. At Davos 2026, Ng highlighted Google’s resurgence with Gemini 3 while emphasising the ‘white hot’ AI ecosystem’s opportunities for players like Anthropic and OpenAI1. He consistently advocates for upskilling, noting that ‘a person that uses AI will be so much more productive, they will replace someone that doesn’t,’ countering fears of mass job losses with a vision of augmented human capabilities3.

Context of the Quote: Davos 2026 and the Shift from Experimentation to Enterprise Impact

Delivered in January 2026 during a YouTube live session on how AI is reshaping jobs, skills, careers, and workflows, Ng’s remark underscores a pivotal moment in AI adoption. Amid Davos discussions, he addressed the tension between hype and reality: bottom-up innovation has yielded ‘nice little things’ like chatbots and coding assistants, but businesses crave systemic overhauls in areas such as travel, retail, and domain-specific automation1. Ng points to underinvestment in the application layer, urging a pivot towards targeted, top-down strategies to unlock transformative value-echoing themes of agentic AI, task automation, and workflow integration.

This aligns with his broader Davos narrative, including calls for open-source AI to foster sovereignty (as for India) and pragmatic workforce reskilling, where AI handles 30-40% of tasks, leaving humans to manage the rest2,3. The session, part of WEF’s exploration of AI’s role in corporate structures, signals a maturing field moving beyond foundational models to enterprise-grade deployment.

Leading Theorists on AI Innovation Paradigms: From Bottom-Up Bloom to Structured Transformation

Ng’s critique builds on foundational theories of innovation in AI, drawing from pioneers who shaped the debate between decentralised experimentation and directed progress.

  • Yann LeCun, Yoshua Bengio, and Geoffrey Hinton (The Godfathers of Deep Learning): These Turing Award winners ignited the deep learning revolution in the 2010s. Their bottom-up approach-exemplified by convolutional neural networks and backpropagation-mirrored the ‘let a thousand flowers bloom’ ethos (a popular paraphrase of Mao Zedong’s Hundred Flowers slogan), encouraging diverse neural architectures. Yet, as Ng notes, this has led to proliferation without proportional business disruption, prompting calls for vertical integration.
  • Jensen Huang (NVIDIA CEO): Huang’s five-layer AI stack-energy, silicon, cloud, foundational models, applications-provides the theoretical backbone for Ng’s views. He emphasises that true transformation demands investment atop the stack, not just base layers, aligning with Ng’s push beyond ‘nice little things’ to workflow automation5.
  • Fei-Fei Li (Stanford Vision Lab): Ng’s collaborator and ‘Godmother of AI,’ Li advocates human-centred AI, stressing application-layer innovations for real-world impact, such as in healthcare imaging-reinforcing the need for focused enterprise adoption.
  • Demis Hassabis (Google DeepMind): From Ng’s Google Brain era, Hassabis champions unified labs for scalable AI, critiquing siloed efforts in favour of top-down orchestration, much like Ng’s prescription for business transformation.

These theorists collectively highlight a consensus: while bottom-up innovation democratised AI tools, the next phase requires deliberate, top-down engineering to embed AI into core business processes, driving productivity and competitive edges.

Implications for Businesses and the AI Ecosystem

Ng’s insight challenges leaders to reassess AI strategies, prioritising agentic systems that automate tasks and elevate human judgement. As the AI landscape heats up-with models like Gemini 3, Llama-4, and Qwen-2-opportunities abound for those bridging the application gap1,2. This perspective not only contextualises current hype but guides towards sustainable, transformative deployment.

References

1. https://www.moneycontrol.com/news/business/davos-summit/davos-2026-google-s-having-a-moment-but-ai-landscape-is-white-hot-says-andrew-ng-13779205.html

2. https://www.aicerts.ai/news/andrew-ng-open-source-ai-india-call-resonates-at-davos/

3. https://www.storyboard18.com/brand-makers/davos-2026-andrew-ng-says-fears-of-ai-driven-job-losses-are-exaggerated-87874.htm

4. https://www.youtube.com/watch?v=oQ9DTjyfIq8

5. https://globaladvisors.biz/2026/01/23/the-ai-signal-from-the-world-economic-forum-2026-at-davos/

"I find that we've done this "let a thousand flowers bloom" bottom-up [AI] innovation thing, and for the most part, it's led to a lot of nice little things but nothing transformative for businesses." - Quote: Andrew Ng - AI guru. Coursera founder

read more
Quote: Bill Gurley

Quote: Bill Gurley

“There are people in this world who view everything as a zero sum game and they will elbow you out the first chance they can get. And so those shouldn’t be your peers.” – Bill Gurley – GP at Benchmark

This incisive observation comes from Bill Gurley, a General Partner at Benchmark Capital, shared during his appearance on Tim Ferriss’s podcast in late 2025. In the discussion titled ‘Bill Gurley – Investing in the AI Era, 10 Days in China, and Important Life Lessons,’ Gurley outlines two key tests for selecting peers and collaborators: trust and a shared interest in learning. He warns against those with a zero-sum mentality-individuals who see success as limited, leading them to undermine others for personal gain. Instead, he advocates pushing such people aside to foster environments of mutual support and growth.3,6

The quote resonates deeply in careers, entrepreneurship, and high-stakes fields like venture capital, where collaboration can amplify success. Gurley, drawing from decades in tech investing, emphasises that true progress thrives in positive-sum dynamics, where celebrating peers’ wins benefits all.1,3

Bill Gurley’s Backstory

Bill Gurley is a towering figure in Silicon Valley, renowned for his prescient investments and analytical rigour. A General Partner at Benchmark Capital since 1999, he has backed transformative companies including Uber, Airbnb, Zillow, and Grubhub, generating billions in returns. His early career included roles at Morgan Stanley and Compaq Computers; he holds an undergraduate degree from the University of Florida and an MBA from the University of Texas.1,2

Gurley’s philosophy rejects rigid rules in favour of asymmetric upside-focusing on ‘what could go right’ rather than minimising losses. He famously critiques macroeconomics as a ‘silly waste of time’ for investors and champions products that are ‘bought, not sold,’ with high-quality, recurring revenue.1,2 An avid sports fan and athlete, he weaves analogies like ‘muscle memory’ into his insights, reminding entrepreneurs of past downturns like 1999 to build resilience.2 Beyond investing, Gurley blogs prolifically on ‘Above the Crowd,’ dissecting marketplaces, network effects, and economic myths, such as the fallacy of zero-sum thinking in microeconomics.5

Context of Zero-Sum Thinking in Careers and Investing

Gurley’s advice counters the pervasive zero-sum worldview, where one person’s gain is another’s loss. He argues life and business are not zero-sum: ‘Don’t worry about proprietary advantage. It is not a zero-sum game.’1 Celebrate peers’ accomplishments to build collaborative networks that propel collective success.1 This mindset aligns with his investment strategy, prioritising demand aggregation and true network effects over cut-throat competition.1,2

In the Tim Ferriss interview, Gurley ties this to team-building, invoking sports leaders like Sam Hinkie for disciplined, curiosity-driven cultures. He contrasts this with zero-sum actors who erode trust, essential for long-term performance across domains.3

Leading Theorists on Zero-Sum vs Positive-Sum Games

John Nash (1928-2015), the Nobel-winning mathematician behind the Nash equilibrium, revolutionised game theory. His analysis of non-zero-sum games showed that strategic interactions need not produce a winner and a loser: stable equilibria exist in which all players benefit from coordinated choices, influencing economics, evolutionary biology, and AI strategy.
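
A toy payoff table makes the contrast concrete. In the coordination game below (numbers invented), the mutually cooperative outcome is both a stable equilibrium and the highest joint payoff, something a zero-sum game cannot offer.

```python
# Toy coordination game: unlike a zero-sum game, the joint payoff depends on
# what both players choose, and mutual cooperation leaves everyone better off.
payoffs = {                      # (row player's payoff, column player's payoff)
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 2),
    ("defect",    "cooperate"): (2, 0),
    ("defect",    "defect"):    (1, 1),
}
joint = {moves: sum(p) for moves, p in payoffs.items()}
best = max(joint, key=joint.get)
print(f"highest joint payoff: {best} with total {joint[best]}")
```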

Robert Wright, in Nonzero: The Logic of Human Destiny (2000), posits history evolves towards positive-sum complexity. Trade, technology, and information sharing create interdependence, countering zero-sum tribalism-echoing Gurley’s peer advice.

Yuval Noah Harari, author of Sapiens, explores how shared myths enable large-scale cooperation, turning potential zero-sum conflicts into positive-sum societies through trust and collective fictions.

Elinor Ostrom (1933-2012), Nobel economist, demonstrated via empirical studies that communities self-govern common resources without zero-sum tragedy, through trust-based rules-validating Gurley’s emphasis on reliable peers.

These theorists underpin Gurley’s practical wisdom: reject zero-sum peers to unlock positive-sum opportunities in careers and ventures.1,3,5

Related Insights from Bill Gurley

  • “It’s called asymmetric returns. If you invest in something that doesn’t work, you lose one times your money. If you miss Google, you lose 10,000 times your money.”1,2
  • “Everybody has the will to win. People don’t have the will to practice.” (Favourite from Bobby Knight)1
  • “Truly great products are bought, not sold.”1
  • “Life is a use or lose it proposition.” (From partner Kevin Harvey)1

References

1. https://www.antoinebuteau.com/lessons-from-bill-gurley/

2. https://25iq.com/2016/10/14/a-half-dozen-more-things-ive-learned-from-bill-gurley-about-investing/

3. https://tim.blog/2025/12/17/bill-gurley-running-down-a-dream/

4. https://macroops.substack.com/p/the-bill-gurley-chronicles-part-i

5. https://macro-ops.com/the-bill-gurley-chronicles-an-above-the-crowd-mba-on-vcs-marketplaces-and-early-stage-investing/

6. https://www.podchemy.com/notes/840-bill-gurley-investing-in-the-ai-era-10-days-in-china-and-important-life-lessons-from-bob-dylan-jerry-seinfeld-mrbeast-and-more-06a5cd0f-d113-5200-bbc0-e9f57705fc2c

"There are people in this world who view everything as a zero sum game and they will elbow you out the first chance they can get. And so those shouldn't be your peers." - Quote: Bill Gurley

read more
Quote: Andrew Ng – AI guru, Coursera founder

Quote: Andrew Ng – AI guru, Coursera founder

“My most productive developers are actually not fresh college grads; they have 10, 20 years of experience in coding and are on top of AI… one tier down… is the fresh college grads that really know how to use AI… one tier down from that is the people with 10 years of experience… the least productive that I would never hire are the fresh college grads that… do not know AI.” – Andrew Ng – AI guru, Coursera founder

In a candid discussion at the World Economic Forum 2026 in Davos, Andrew Ng unveiled a provocative hierarchy of developer productivity, prioritising AI fluency over traditional experience. Delivered during the session ‘Corporate Ladders, AI Reshuffled,’ this perspective challenges conventional hiring norms amid AI’s rapid evolution. Ng’s remarks, captured in a live YouTube panel on 19 January 2026, underscore how artificial intelligence is redefining competence in software engineering.

Andrew Ng: The Architect of Modern AI Education

Andrew Ng stands as one of the foremost pioneers in artificial intelligence, blending academic rigour with entrepreneurial vision. A British-born computer scientist, he earned his PhD from the University of California, Berkeley, and later joined Stanford University, where he went on to direct the Stanford Artificial Intelligence Laboratory (SAIL). Ng’s breakthrough came with his development of one of the first large-scale online courses on machine learning in 2011, which attracted over 100,000 students and laid the groundwork for massive open online courses (MOOCs).

In 2012, alongside Daphne Koller, he co-founded Coursera, transforming global access to education by partnering with top universities to offer courses in AI, data science, and beyond. The platform now serves millions, democratising skills essential for the AI age. Ng also led Baidu’s AI Group as Chief Scientist from 2014 to 2017, scaling deep learning applications at an industrial level. Today, as founder of DeepLearning.AI and managing general partner at AI Fund, he invests in and educates on practical AI deployment. His influence extends to Google Brain, which he co-founded in 2011, pioneering advancements in deep learning that power today’s generative models.

Ng’s Davos appearances, including 2026 interviews with Moneycontrol and others, consistently advocate for AI optimism tempered by pragmatism. He dismisses fears of an AI bubble in applications while cautioning on model training costs, and stresses upskilling: ‘A person that uses AI will be so much more productive, they will replace someone that doesn’t use AI.’1,3

Context of the Quote: AI’s Disruption of Corporate Ladders

The quote emerged from WEF 2026’s exploration of how AI reshuffles organisational hierarchies and talent pipelines. Ng argued that AI tools amplify human capabilities unevenly, creating a new productivity spectrum. Seasoned coders who master AI-such as large language models for code generation-outpace novices, while AI-illiterate veterans lag. This aligns with his broader Davos narrative: AI handles 30-40% of many jobs’ tasks, leaving humans to focus on the rest, but only if they adapt.3

Ng highlighted real-world shifts in Silicon Valley, where AI inference demand surges, throttling teams due to capacity limits. He urged infrastructure build-out and open-source adoption, particularly for nations like India, warning against vendor lock-in: ‘If it’s open, no one can mess with it.’2 Fears of mass job losses? Overhyped, per Ng-layoffs stem more from post-pandemic corrections than automation.3

Leading Theorists on AI, Skills, and Future Work

Ng’s views echo and extend seminal theories on technological unemployment and skill augmentation.

  • David Autor: MIT economist whose ‘skill-biased technological change’ framework (1990s onwards) posits automation displaces routine tasks but boosts demand for non-routine cognitive skills. Ng’s hierarchy mirrors this: AI supercharges experienced workers’ judgement while sidelining routine coders.3
  • Erik Brynjolfsson and Andrew McAfee: In ‘The Second Machine Age’ (2014), they describe how digital technologies widen productivity gaps, favouring ‘superstars’ who leverage tools. Ng’s top tier-AI-savvy veterans-embodies this ‘winner-takes-more’ dynamic in coding.1
  • Daron Acemoglu and Pascual Restrepo: Their ‘task-based’ model (2010s) quantifies automation’s impact: AI automates coding subtasks, but complements human oversight. Ng’s 30-40% task automation estimate directly invokes this, predicting productivity booms for adapters.3
  • Fei-Fei Li: Ng’s Stanford colleague and ‘Godmother of AI Vision,’ she emphasises human-AI collaboration. Her work on multimodal AI reinforces Ng’s call for developers to integrate AI into workflows, not replace manual toil.
  • Yann LeCun, Geoffrey Hinton, and Yoshua Bengio: The ‘Godfathers of Deep Learning’ (Turing Award 2018) enabled tools like those Ng champions. Their foundational neural network advances underpin modern code assistants, validating Ng’s tiers where AI fluency trumps raw experience.

These theorists collectively frame AI as an amplifier, not annihilator, of labour-resonating with Ng’s prescription for careers: master AI or risk obsolescence. As workflows agenticise, coding evolves from syntax drudgery to strategic orchestration.

Implications for Careers and Skills

Ng’s ladder demands immediate action: prioritise AI literacy via platforms like Coursera, fine-tune open models like Llama-4 or Qwen-2, and rebuild talent pipelines around meta-skills like prompt engineering and bias auditing.2,5 For IT powerhouses like India’s $280 billion services sector, upskilling velocity is non-negotiable.6 In this reshuffled landscape, productivity hinges not on years coded, but on AI mastery.

References

1. https://www.moneycontrol.com/news/business/davos-summit/davos-2026-are-we-in-an-ai-bubble-andrew-ng-says-it-depends-on-where-you-look-13779435.html

2. https://www.aicerts.ai/news/andrew-ng-open-source-ai-india-call-resonates-at-davos/

3. https://www.storyboard18.com/brand-makers/davos-2026-andrew-ng-says-fears-of-ai-driven-job-losses-are-exaggerated-87874.htm

4. https://www.youtube.com/watch?v=oQ9DTjyfIq8

5. https://globaladvisors.biz/2026/01/23/the-ai-signal-from-the-world-economic-forum-2026-at-davos/

6. https://economictimes.com/tech/artificial-intelligence/india-must-speed-up-ai-upskilling-coursera-cofounder-andrew-ng/articleshow/126703083.cms

"My most productive developers are actually not fresh college grads; they have 10, 20 years of experience in coding and are on top of AI... one tier down... is the fresh college grads that really know how to use AI... one tier down from that is the people with 10 years of experience... the least productive that I would never hire are the fresh college grads that... do not know AI." - Quote: Andrew Ng - AI guru, Coursera founder

Term: Read the room

Term: Read the room

“To read the room means to assess and understand the collective mood, attitudes, or dynamics of a group of people and adjust your behavior or communication accordingly.” – Read the room

“To read the room” means to assess and understand the collective mood, attitudes, or dynamics of a group of people in a particular setting, and to adjust one’s behaviour or communication accordingly1,3. This idiom emphasises emotional intelligence, enabling individuals to gauge the emotions, thoughts, and reactions of others through nonverbal cues, body language, and the overall atmosphere2,4.

Originating from informal English usage, the phrase is commonly applied in social, professional, and online contexts. For instance, a dinner party host might “read the room” to determine if guests are enjoying themselves or tiring, deciding whether to open another bottle of wine1. In meetings or video calls, it involves analysing general mood to adapt presentations, as visibility of only shoulders and faces can make this challenging1. Sales professionals use it to pick up nonverbal cues during pitches3,4, while social media users are advised to “read the room” before posting to avoid backlash, as seen in Kylie Jenner’s 2021 GoFundMe post that appeared tone-deaf amid economic hardship2.

Key Contexts and Applications

  • Workplace and Meetings: Essential for effective communication; teachers “read the room” to avoid boring students, salespeople adjust pitches if the audience seems worried4.
  • Social Settings: Prevents missteps like telling jokes in a serious atmosphere, which is a classic “failure to read the room”4.
  • Online and Public Communication: Involves anticipating audience reactions to posts or statements for maximum engagement and minimal controversy2.

The skill relies on observing body language-such as foot direction or shoulder positioning-and intuition to interpret the prevailing vibe4. It sharpens interpersonal awareness and is crucial for authentic, context-sensitive interactions2.

Best Related Strategy Theorist: Daniel Goleman

Daniel Goleman, a pioneering psychologist and science journalist, is the foremost theorist linked to “read the room” through his development of emotional intelligence (EI), the core ability underpinning this idiom. Goleman popularised EI in his seminal 1995 book Emotional Intelligence: Why It Can Matter More Than IQ, arguing that EI-encompassing self-awareness, self-regulation, motivation, empathy, and social skills-often predicts success more than traditional IQ.

Born in 1946 in Stockton, California, Goleman earned a PhD in psychology from Harvard University in 1971, specialising in meditation and brain science. He covered the behavioural and brain sciences as a New York Times science reporter from 1984 to 1996, a period that produced books like Vital Lies, Simple Truths (1985). Goleman’s relationship to “read the room” stems directly from EI’s social awareness component, particularly empathy and organisational awareness-skills for reading group emotions and dynamics to influence effectively. He describes this as “reading the room” in leadership contexts, applying it to executives who attune to team moods for better decision-making.

Goleman’s work with the Hay Group (now Korn Ferry) developed EI assessments used in corporate training, reinforcing practical strategies for communication and behaviour adjustment. His biography reflects a blend of research and application: influenced by mindfulness studies in India during the 1970s, he bridged Eastern practices with Western psychology. Later books like Primal Leadership (2002, co-authored) apply EI to leadership, explicitly linking it to sensing group climates-a direct parallel to the term. Goleman’s theories provide the scientific foundation for “reading the room” as a strategic tool in business, education, and personal interactions.

References

1. https://plainenglish.com/lingo/read-the-room/

2. https://1832communications.com/blog/read-room/

3. https://dictionary.cambridge.org/us/dictionary/english/read-the-room

4. https://www.youtube.com/watch?v=cRRlG39TKEA

"To read the room means to assess and understand the collective mood, attitudes, or dynamics of a group of people and adjust your behavior or communication accordingly." - Term: Read the room

Quote: Microsoft

Quote: Microsoft

“DeepSeek’s success reflects growing Chinese momentum across Africa, a trend that may continue to accelerate in 2026.” – Microsoft – January 2026

The quote originates from Microsoft’s Global AI Adoption in 2025 report, published by the company’s AI Economy Institute and detailed in a January 2026 blog post on ‘On the Issues’. It highlights the rapid ascent of DeepSeek, a Chinese open-source AI platform, in African markets. Microsoft notes that DeepSeek’s free access and strategic partnerships have driven adoption rates 2 to 4 times higher in Africa than in other regions, positioning it as a key factor in China’s expanding technological influence.4,5

Backstory on the Source: Microsoft’s Perspective

Microsoft, a global technology leader with deep investments in AI through partnerships like OpenAI, tracks worldwide AI diffusion to inform its strategy. The 2025 report analyses user data across countries, revealing how accessibility shapes adoption. While Microsoft acknowledges its stake in broader AI proliferation, the analysis remains data-driven, emphasising DeepSeek’s role in underserved markets without endorsing geopolitical shifts.1,2,4

DeepSeek holds significant market shares in Africa: 16-20% in Ethiopia, Tunisia, Malawi, Zimbabwe, and Madagascar; 11-14% in Uganda and Niger. This contrasts with low uptake in North America and Europe, where Western models dominate.1,2,3

DeepSeek: The Chinese AI Challenger

Founded in 2023, DeepSeek is a Hangzhou-based startup rivalling OpenAI’s ChatGPT with cost-effective, open-source models released under an MIT licence. Its free chatbot eliminates barriers like subscription fees or credit cards, appealing to price-sensitive regions. Its R1 model, released in January 2025, demonstrated advanced reasoning for maths and coding at lower cost; the accompanying research, co-authored by founder Liang Wenfeng, was later praised in Nature as a ‘landmark paper’.2,4

Strategic distribution-shipping as the default chatbot on Huawei phones-plus partnerships and telecom integrations propelled its growth. Adoption peaks in China (89%), Russia (43%), Belarus (56%), Cuba (49%), Iran (25%), and Syria (23%). Microsoft warns this could serve as a ‘geopolitical instrument’ for Chinese influence where US services face restrictions.2,3,4

Broader Implications for Africa and the Global South

Africa’s AI uptake is accelerating via free platforms like DeepSeek, potentially onboarding the ‘next billion users’ from the global South. Contributing factors include Huawei’s infrastructure push and awareness campaigns. However, concerns remain over biases-such as political content restricted in line with Chinese internet controls-and security risks that have prompted bans in the US, Australia, Germany, and even within Microsoft itself.1,2

Leading Theorists on AI Geopolitics and Global Adoption

  • Juan Lavista Ferres (Microsoft AI researcher): Leads the lab behind the report; observes DeepSeek’s technical strengths but notes political divergences, predicting influence on global discourse.2
  • Liang Wenfeng (DeepSeek founder): Drives open-source innovation, authoring peer-reviewed work on efficient AI models that challenge US dominance.2
  • Walid Kéfi (AI commentator): Analyses Africa’s generative AI surge, crediting free platforms for scaling adoption amid infrastructure challenges.1

These insights underscore a pivotal shift: AI’s future hinges on openness and accessibility, reshaping power dynamics between US and Chinese ecosystems.4

References

1. https://www.ecofinagency.com/news/1301-51867-microsoft-study-maps-africa-s-generative-ai-uptake-as-free-platforms-drive-adoption

2. https://abcnews.go.com/Technology/wireStory/deepseeks-ai-gains-traction-developing-nations-microsoft-report-129021507

3. https://www.euronews.com/next/2026/01/09/deepseeks-ai-gains-traction-in-developing-nations-microsoft-report-says

4. https://www.microsoft.com/en-us/corporate-responsibility/topics/ai-economy-institute/reports/global-ai-adoption-2025/

5. https://blogs.microsoft.com/on-the-issues/2026/01/08/global-ai-adoption-in-2025/

6. https://www.cryptopolitan.com/microsoft-says-china-beating-america-in-ai/

“DeepSeek’s success reflects growing Chinese momentum across Africa, a trend that may continue to accelerate in 2026.” - Quote: Microsoft

Quote: Andrew Ng – AI guru, Coursera founder

Quote: Andrew Ng – AI guru, Coursera founder

“I think one of the challenges is, because AI technology is still evolving rapidly, the skills that are going to be needed in the future are not yet clear today. It depends on lifelong learning.” – Andrew Ng – AI guru, Coursera founder

Delivered during a session on ‘Corporate Ladders, AI Reshuffled’ at the World Economic Forum in Davos in January 2026, this insight from Andrew Ng captures the essence of navigating an era where artificial intelligence advances at breakneck speed. Ng’s words underscore a pivotal shift: as AI reshapes jobs and workflows, the uncertainty around future skills demands a commitment to continuous adaptation1,2.

Andrew Ng: The Architect of Modern AI Education

Andrew Ng stands as one of the foremost figures in artificial intelligence, often dubbed an AI guru for his pioneering contributions to machine learning and online education. A British-born computer scientist, Ng co-founded Coursera in 2012, revolutionising access to higher education by partnering with top universities to offer massive open online courses (MOOCs). His platforms, including DeepLearning.AI and Landing AI, have democratised AI skills, training millions worldwide2,3.

Ng’s career trajectory is marked by landmark roles: he led the Google Brain project, which advanced deep learning at scale, and served as chief scientist at Baidu, applying AI to real-world applications in search and autonomous driving. As managing general partner at AI Fund, he invests in startups bridging AI with practical domains. At Davos 2026, Ng addressed fears of AI-driven job losses, arguing they are overstated. He broke jobs into tasks, noting AI handles only 30-40% currently, boosting productivity for those who adapt: ‘A person that uses AI will be so much more productive, they will replace someone that doesn’t use AI’2,3. His emphasis on coding as a ‘durable skill’-not for becoming engineers, but for building personalised software to automate workflows-aligns directly with the quoted challenge of unclear future skills1.
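
Ng frames coding as a durable skill for building small, personalised tools rather than production systems. The sketch below is one hedged illustration of that idea: a short Python script that turns rough meeting notes into a status update by calling a hosted chat model through the OpenAI Python client. The model name, prompt, and task are illustrative assumptions, not examples given by Ng or the cited sources.

```python
# Illustrative sketch of 'personalised software to automate a workflow':
# summarise rough notes into a short status update via a hosted chat model.
# Assumptions: the openai client library is installed, OPENAI_API_KEY is set,
# and the model name and prompt are placeholders chosen for this example.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarise_notes(notes: str) -> str:
    """Turn rough meeting notes into a three-bullet status update."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat-completion model would do
        messages=[
            {"role": "system",
             "content": "Summarise the user's meeting notes as three concise bullet points."},
            {"role": "user", "content": notes},
        ],
        temperature=0.2,
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(summarise_notes("Discussed Q3 pipeline; hiring paused; demo slipped to Friday."))
```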

The Broader Context: AI’s Impact on Jobs and Skills at Davos 2026

The quote emerged amid Davos discussions on agentic AI systems-autonomous agents managing end-to-end workflows-pushing humans towards oversight, judgement, and accountability. Ng highlighted meta-cognitive agility: shifting from perishable technical skills to ‘learning to learn’1. This resonates with global concerns; the IMF’s Kristalina Georgieva noted that one in ten jobs in advanced economies already needs new skills, with labour markets unprepared1. Ng urged upskilling, especially for regions like India, warning that its IT services sector risks disruption without rapid AI literacy3,5.

Corporate strategies are evolving: the T-shaped model promotes AI literacy across functions (breadth) paired with irreplaceable domain expertise (depth). Firms are rebuilding talent ladders, replacing grunt work with AI-supported apprenticeships that foster early decision-making1. Ng’s optimism tempers the hype: AI improves incrementally, not in dramatic leaps, yet demands proactive reskilling3.

Leading Theorists Shaping AI, Skills, and Lifelong Learning

Ng’s views build on foundational theorists in AI and labour economics:

  • Geoffrey Hinton, Yann LeCun, and Yoshua Bengio (the ‘Godfathers of AI’): Pioneered deep learning, enabling today’s breakthroughs. Hinton, Ng’s early collaborator at Google Brain, warns of AI risks but affirms its transformative potential for productivity2. Their work underpins Ng’s task-based job analysis.
  • Erik Brynjolfsson and Andrew McAfee (MIT): In ‘The Second Machine Age’, they theorise how digital technologies complement human skills, amplifying ‘non-routine’ cognitive tasks. This mirrors Ng’s productivity shift, where AI augments rather than replaces1,2.
  • Carl Benedikt Frey and Michael Osborne (Oxford): Their 2013 study quantified automation risks for 702 occupations, sparking debates on reskilling. Ng extends this by focusing on partial automation (30-40%) and lifelong learning imperatives2.
  • Daron Acemoglu (MIT): Critiques automation’s wage-polarising effects, warning against ‘so-so technologies’ that automate mid-skill tasks without delivering broad productivity gains. Ng counters with optimism for human-AI collaboration via upskilling3.

These theorists converge on a consensus: AI disrupts routines but elevates human judgement, creativity, and adaptability-skills honed through lifelong learning, as Ng advocates.

Ng’s prescience positions this quote as a clarion call for individuals and organisations to embrace uncertainty through perpetual growth in an AI-driven world.

References

1. https://globaladvisors.biz/2026/01/23/the-ai-signal-from-the-world-economic-forum-2026-at-davos/

2. https://www.storyboard18.com/brand-makers/davos-2026-andrew-ng-says-fears-of-ai-driven-job-losses-are-exaggerated-87874.htm

3. https://www.moneycontrol.com/news/business/davos-summit/davos-2026-ai-is-continuously-improving-despite-perception-that-excitement-has-faded-says-andrew-ng-13780763.html

4. https://www.aicerts.ai/news/andrew-ng-open-source-ai-india-call-resonates-at-davos/

5. https://economictimes.com/tech/artificial-intelligence/india-must-speed-up-ai-upskilling-coursera-cofounder-andrew-ng/articleshow/126703083.cms

"I think one of the challenges is, because AI technology is still evolving rapidly, the skills that are going to be needed in the future are not yet clear today. It depends on lifelong learning." - Quote: Andrew Ng - AI guru. Coursera founder

