Global Advisors

Our selection of the top business news sources on the web.

AM edition. Issue number 1216


Quote: Bill Gurley

"There are people in this world who view everything as a zero sum game and they will elbow you out the first chance they can get. And so those shouldn't be your peers." - Bill Gurley - GP at Benchmark

This incisive observation comes from Bill Gurley, a General Partner at Benchmark Capital, shared during his appearance on Tim Ferriss's podcast in late 2025. In the discussion titled 'Bill Gurley - Investing in the AI Era, 10 Days in China, and Important Life Lessons', Gurley outlines two key tests for selecting peers and collaborators: trust and a shared interest in learning. He warns against those with a zero-sum mentality, individuals who see success as finite and will undermine others for personal gain. Instead, he advises keeping such people out of your circle in order to foster environments of mutual support and growth.3,6

The quote resonates deeply in careers, entrepreneurship, and high-stakes fields like venture capital, where collaboration can amplify success. Gurley, drawing from decades in tech investing, emphasises that true progress thrives in positive-sum dynamics, where celebrating peers' wins benefits all.1,3

Bill Gurley's Backstory

Bill Gurley is a towering figure in Silicon Valley, renowned for his prescient investments and analytical rigour. A General Partner at Benchmark Capital since 1999, he has backed transformative companies including Uber, Airbnb, Zillow, and Grubhub, generating billions in returns. His early career included roles on Wall Street and as an engineer at Compaq Computer, followed by an MBA from the University of Texas at Austin; he earned his undergraduate degree from the University of Florida.1,2

Gurley's philosophy rejects rigid rules in favour of asymmetric upside, focusing on 'what could go right' rather than minimising losses. He famously critiques macroeconomics as a 'silly waste of time' for investors and champions products that are 'bought, not sold', with high-quality, recurring revenue.1,2 An avid sports fan and athlete, he weaves analogies like 'muscle memory' into his insights, reminding entrepreneurs of past downturns like 1999 to build resilience.2 Beyond investing, Gurley blogs prolifically on 'Above the Crowd', dissecting marketplaces, network effects, and economic myths, such as the fallacy of zero-sum thinking in microeconomics.5

Context of Zero-Sum Thinking in Careers and Investing

Gurley's advice counters the pervasive zero-sum worldview, where one person's gain is another's loss. He argues life and business are not zero-sum: 'Don't worry about proprietary advantage. It is not a zero-sum game.'1 Celebrate peers' accomplishments to build collaborative networks that propel collective success.1 This mindset aligns with his investment strategy, prioritising demand aggregation and true network effects over cut-throat competition.1,2

In the Tim Ferriss interview, Gurley ties this to team-building, invoking sports leaders like Sam Hinkie for disciplined, curiosity-driven cultures. He contrasts this with zero-sum actors who erode the trust essential for long-term performance across domains.3

Leading Theorists on Zero-Sum vs Positive-Sum Games

John Nash (1928-2015), the Nobel-winning mathematician behind Nash Equilibrium, revolutionised game theory. His work shows scenarios need not be zero-sum; equilibria emerge where players cooperate for mutual benefit, influencing economics, evolution, and AI strategy.
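The distinction can be made concrete with a small, hypothetical sketch (not from the article) using two textbook games: in a zero-sum game such as Matching Pennies, the players' payoffs in every outcome sum to zero, so one side's gain is exactly the other's loss; in a positive-sum game such as the Stag Hunt, cooperation creates surplus that both players share.

```python
# Contrasting a zero-sum game with a positive-sum game using 2x2 payoff
# matrices. Each entry maps a pair of strategies to (row player, column
# player) payoffs. The games and numbers are standard textbook examples.

# Matching Pennies: zero-sum - the payoffs in every cell sum to 0.
zero_sum = {
    ("Heads", "Heads"): (1, -1),
    ("Heads", "Tails"): (-1, 1),
    ("Tails", "Heads"): (-1, 1),
    ("Tails", "Tails"): (1, -1),
}

# Stag Hunt: positive-sum - hunting the stag together beats going it alone.
positive_sum = {
    ("Stag", "Stag"): (4, 4),
    ("Stag", "Hare"): (0, 3),
    ("Hare", "Stag"): (3, 0),
    ("Hare", "Hare"): (3, 3),
}

def total_surplus(game):
    """Sum of both players' payoffs for each outcome of the game."""
    return {moves: a + b for moves, (a, b) in game.items()}

print(total_surplus(zero_sum))      # every outcome sums to 0
print(total_surplus(positive_sum))  # mutual cooperation creates the most value
```

In the zero-sum game no amount of strategy grows the pie; in the Stag Hunt, the mutually cooperative outcome is both a Nash equilibrium and the one that maximises total surplus, which is the dynamic Gurley is pointing at.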

Robert Wright, in Nonzero: The Logic of Human Destiny (2000), posits that history evolves towards positive-sum complexity. Trade, technology, and information sharing create interdependence, countering zero-sum tribalism, echoing Gurley's peer advice.

Yuval Noah Harari, author of Sapiens, explores how shared myths enable large-scale cooperation, turning potential zero-sum conflicts into positive-sum societies through trust and collective fictions.

Elinor Ostrom (1933-2012), Nobel economist, demonstrated via empirical studies that communities can self-govern common resources through trust-based rules, avoiding a supposedly inevitable zero-sum tragedy of the commons, validating Gurley's emphasis on reliable peers.

These theorists underpin Gurley's practical wisdom: reject zero-sum peers to unlock positive-sum opportunities in careers and ventures.1,3,5

Related Insights from Bill Gurley

  • "It's called asymmetric returns. If you invest in something that doesn't work, you lose one times your money. If you miss Google, you lose 10,000 times your money."1,2
  • "Everybody has the will to win. People don't have the will to practice." (Favourite from Bobby Knight)1
  • "Truly great products are bought, not sold."1
  • "Life is a use or lose it proposition." (From partner Kevin Harvey)1

References

1. https://www.antoinebuteau.com/lessons-from-bill-gurley/

2. https://25iq.com/2016/10/14/a-half-dozen-more-things-ive-learned-from-bill-gurley-about-investing/

3. https://tim.blog/2025/12/17/bill-gurley-running-down-a-dream/

4. https://macroops.substack.com/p/the-bill-gurley-chronicles-part-i

5. https://macro-ops.com/the-bill-gurley-chronicles-an-above-the-crowd-mba-on-vcs-marketplaces-and-early-stage-investing/

6. https://www.podchemy.com/notes/840-bill-gurley-investing-in-the-ai-era-10-days-in-china-and-important-life-lessons-from-bob-dylan-jerry-seinfeld-mrbeast-and-more-06a5cd0f-d113-5200-bbc0-e9f57705fc2c

"There are people in this world who view everything as a zero sum game and they will elbow you out the first chance they can get. And so those shouldn't be your peers." - Quote: Bill Gurley


Quote: Andrew Ng - AI guru, Coursera founder

"My most productive developers are actually not fresh college grads; they have 10, 20 years of experience in coding and are on top of AI... one tier down... is the fresh college grads that really know how to use AI... one tier down from that is the people with 10 years of experience... the least productive that I would never hire are the fresh college grads that... do not know AI." - Andrew Ng - AI guru, Coursera founder

In a candid discussion at the World Economic Forum 2026 in Davos, Andrew Ng unveiled a provocative hierarchy of developer productivity, prioritising AI fluency over traditional experience. Delivered during the session 'Corporate Ladders, AI Reshuffled,' this perspective challenges conventional hiring norms amid AI's rapid evolution. Ng's remarks, captured in a live YouTube panel on 19 January 2026, underscore how artificial intelligence is redefining competence in software engineering.

Andrew Ng: The Architect of Modern AI Education

Andrew Ng stands as one of the foremost pioneers in artificial intelligence, blending academic rigour with entrepreneurial vision. A British-born computer scientist, he earned his PhD from the University of California, Berkeley, and later joined Stanford University, where he directed the Stanford AI Lab (SAIL). Ng's breakthrough came with his development of one of the first large-scale online courses on machine learning in 2011, which attracted over 100,000 students and laid the groundwork for massive open online courses (MOOCs).

In 2012, alongside Daphne Koller, he co-founded Coursera, transforming global access to education by partnering with top universities to offer courses in AI, data science, and beyond. The platform now serves millions, democratising skills essential for the AI age. Ng also led Baidu's AI Group as Chief Scientist from 2014 to 2017, scaling deep learning applications at an industrial level. Today, as founder of DeepLearning.AI and managing general partner at AI Fund, he invests in and educates on practical AI deployment. His influence extends to Google Brain, which he co-founded in 2011, pioneering advancements in deep learning that power today's generative models.

Ng's Davos appearances, including 2026 interviews with Moneycontrol and others, consistently advocate for AI optimism tempered by pragmatism. He dismisses fears of an AI bubble in applications while cautioning on model training costs, and stresses upskilling: 'A person that uses AI will be so much more productive, they will replace someone that doesn't use AI.'1,3

Context of the Quote: AI's Disruption of Corporate Ladders

The quote emerged from WEF 2026's exploration of how AI reshuffles organisational hierarchies and talent pipelines. Ng argued that AI tools amplify human capabilities unevenly, creating a new productivity spectrum. Seasoned coders who master AI, such as large language models for code generation, outpace novices, while AI-illiterate veterans lag. This aligns with his broader Davos narrative: AI handles 30-40% of many jobs' tasks, leaving humans to focus on the rest, but only if they adapt.3

Ng highlighted real-world shifts in Silicon Valley, where surging demand for AI inference is throttling teams because of capacity limits. He urged infrastructure build-out and open-source adoption, particularly for nations like India, warning against vendor lock-in: 'If it's open, no one can mess with it.'2 Fears of mass job losses? Overhyped, per Ng: layoffs stem more from post-pandemic corrections than from automation.3

Leading Theorists on AI, Skills, and Future Work

Ng's views echo and extend seminal theories on technological unemployment and skill augmentation.

  • David Autor: MIT economist whose 'skill-biased technological change' framework (1990s onwards) posits automation displaces routine tasks but boosts demand for non-routine cognitive skills. Ng's hierarchy mirrors this: AI supercharges experienced workers' judgement while sidelining routine coders.3
  • Erik Brynjolfsson and Andrew McAfee: In 'The Second Machine Age' (2014), they describe how digital technologies widen productivity gaps, favouring 'superstars' who leverage tools. Ng's top tier, AI-savvy veterans, embodies this 'winner-takes-more' dynamic in coding.1
  • Daron Acemoglu and Pascual Restrepo: Their 'task-based' model (2010s) quantifies automation's impact: AI automates coding subtasks, but complements human oversight. Ng's 30-40% task automation estimate directly invokes this, predicting productivity booms for adapters.3
  • Fei-Fei Li: Ng's Stanford colleague and 'Godmother of AI Vision,' she emphasises human-AI collaboration. Her work on multimodal AI reinforces Ng's call for developers to integrate AI into workflows, not replace manual toil.
  • Yann LeCun, Geoffrey Hinton, and Yoshua Bengio: The 'Godfathers of Deep Learning' (Turing Award 2018) enabled tools like those Ng champions. Their foundational neural network advances underpin modern code assistants, validating Ng's tiers where AI fluency trumps raw experience.

These theorists collectively frame AI as an amplifier, not an annihilator, of labour, resonating with Ng's prescription for careers: master AI or risk obsolescence. As workflows become more agentic, coding evolves from syntax drudgery to strategic orchestration.

Implications for Careers and Skills

Ng's ladder demands immediate action: prioritise AI literacy via platforms like Coursera, fine-tune open models like Llama-4 or Qwen-2, and rebuild talent pipelines around meta-skills like prompt engineering and bias auditing.2,5 For IT powerhouses like India's $280 billion services sector, upskilling velocity is non-negotiable.6 In this reshuffled landscape, productivity hinges not on years coded, but on AI mastery.

References

1. https://www.moneycontrol.com/news/business/davos-summit/davos-2026-are-we-in-an-ai-bubble-andrew-ng-says-it-depends-on-where-you-look-13779435.html

2. https://www.aicerts.ai/news/andrew-ng-open-source-ai-india-call-resonates-at-davos/

3. https://www.storyboard18.com/brand-makers/davos-2026-andrew-ng-says-fears-of-ai-driven-job-losses-are-exaggerated-87874.htm

4. https://www.youtube.com/watch?v=oQ9DTjyfIq8

5. https://globaladvisors.biz/2026/01/23/the-ai-signal-from-the-world-economic-forum-2026-at-davos/

6. https://economictimes.com/tech/artificial-intelligence/india-must-speed-up-ai-upskilling-coursera-cofounder-andrew-ng/articleshow/126703083.cms

"My most productive developers are actually not fresh college grads; they have 10, 20 years of experience in coding and are on top of AI... one tier down... is the fresh college grads that really know how to use AI... one tier down from that is the people with 10 years of experience... the least productive that I would never hire are the fresh college grads that... do not know AI." - Quote: Andrew Ng - AI guru, Coursera founder


Term: Read the room

"To read the room means to assess and understand the collective mood, attitudes, or dynamics of a group of people and adjust your behavior or communication accordingly." - Read the room

"To read the room" means to assess and understand the collective mood, attitudes, or dynamics of a group of people in a particular setting, and to adjust one's behaviour or communication accordingly1,3. This idiom emphasises emotional intelligence, enabling individuals to gauge the emotions, thoughts, and reactions of others through nonverbal cues, body language, and the overall atmosphere2,4.

Originating from informal English usage, the phrase is commonly applied in social, professional, and online contexts. For instance, a dinner party host might "read the room" to determine if guests are enjoying themselves or tiring, deciding whether to open another bottle of wine1. In meetings or video calls, it involves analysing general mood to adapt presentations, as visibility of only shoulders and faces can make this challenging1. Sales professionals use it to pick up nonverbal cues during pitches3,4, while social media users are advised to "read the room" before posting to avoid backlash, as seen in Kylie Jenner's 2021 GoFundMe post that appeared tone-deaf amid economic hardship2.

Key Contexts and Applications

  • Workplace and Meetings: Essential for effective communication; teachers "read the room" to avoid boring students, salespeople adjust pitches if the audience seems worried4.
  • Social Settings: Prevents missteps like telling jokes in a serious atmosphere, which is a classic "failure to read the room"4.
  • Online and Public Communication: Involves anticipating audience reactions to posts or statements for maximum engagement and minimal controversy2.

The skill relies on observing body language, such as foot direction or shoulder positioning, and on intuition to interpret the prevailing vibe4. It enhances interpersonal interactions and is crucial for authentic, context-sensitive communication2.

Best Related Strategy Theorist: Daniel Goleman

Daniel Goleman, a pioneering psychologist and science journalist, is the foremost theorist linked to "read the room" through his development of emotional intelligence (EI), the core ability underpinning this idiom. Goleman popularised EI in his seminal 1995 book Emotional Intelligence: Why It Can Matter More Than IQ, arguing that EI, encompassing self-awareness, self-regulation, motivation, empathy, and social skills, often predicts success more than traditional IQ.

Born in 1946 in Stockton, California, Goleman earned a PhD in psychology from Harvard University in 1971, specialising in meditation and brain science. His early career as a New York Times science reporter (1972-1996) covered behavioural and brain sciences, leading to books like Vital Lies, Simple Truths (1985). Goleman's relationship to "read the room" stems directly from EI's social awareness component, particularly empathy and organisational awareness, skills for reading group emotions and dynamics in order to influence effectively. He describes this as "reading the room" in leadership contexts, applying it to executives who attune to team moods for better decision-making.

Goleman's work with the Hay Group (now Korn Ferry) developed EI assessments used in corporate training, reinforcing practical strategies for communication and behaviour adjustment. His biography reflects a blend of research and application: influenced by mindfulness studies in India during the 1970s, he bridged Eastern practices with Western psychology. Later books like Primal Leadership (2002, co-authored) apply EI to leadership, explicitly linking it to sensing group climates, a direct parallel to the term. Goleman's theories provide the scientific foundation for "reading the room" as a strategic tool in business, education, and personal interactions.

References

1. https://plainenglish.com/lingo/read-the-room/

2. https://1832communications.com/blog/read-room/

3. https://dictionary.cambridge.org/us/dictionary/english/read-the-room

4. https://www.youtube.com/watch?v=cRRlG39TKEA

"To read the room means to assess and understand the collective mood, attitudes, or dynamics of a group of people and adjust your behavior or communication accordingly." - Term: Read the room


Quote: Microsoft

"DeepSeek's success reflects growing Chinese momentum across Africa, a trend that may continue to accelerate in 2026." - Microsoft - January 2026

The quote originates from Microsoft's Global AI Adoption in 2025 report, published by the company's AI Economy Institute and detailed in a January 2026 blog post on 'On the Issues'. It highlights the rapid ascent of DeepSeek, a Chinese open-source AI platform, in African markets. Microsoft notes that DeepSeek's free access and strategic partnerships have driven adoption rates 2 to 4 times higher in Africa than in other regions, positioning it as a key factor in China's expanding technological influence.4,5

Backstory on the Source: Microsoft's Perspective

Microsoft, a global technology leader with deep investments in AI through partnerships like OpenAI, tracks worldwide AI diffusion to inform its strategy. The 2025 report analyses user data across countries, revealing how accessibility shapes adoption. While Microsoft acknowledges its stake in broader AI proliferation, the analysis remains data-driven, emphasising DeepSeek's role in underserved markets without endorsing geopolitical shifts.1,2,4

DeepSeek holds significant market shares in Africa: 16-20% in Ethiopia, Tunisia, Malawi, Zimbabwe, and Madagascar; 11-14% in Uganda and Niger. This contrasts with low uptake in North America and Europe, where Western models dominate.1,2,3

DeepSeek: The Chinese AI Challenger

Founded in 2023, DeepSeek is a Hangzhou-based startup rivalling OpenAI's ChatGPT with cost-effective, open-source models under an MIT licence. Its free chatbot eliminates barriers like subscription fees or credit cards, appealing to price-sensitive regions. The January 2025 release of its R1 model, praised in Nature as a 'landmark paper' co-authored by founder Liang Wenfeng, demonstrated advanced reasoning for math and coding at lower costs.2,4

Strategic distribution via Huawei phones as default chatbots, plus partnerships and telecom integrations, propelled its growth. Adoption peaks in China (89%), Russia (43%), Belarus (56%), Cuba (49%), Iran (25%), and Syria (23%). Microsoft warns this could serve as a 'geopolitical instrument' for Chinese influence where US services face restrictions.2,3,4

Broader Implications for Africa and the Global South

Africa's AI uptake accelerates via free platforms like DeepSeek, potentially onboarding the 'next billion users' from the global South. Factors include Huawei's infrastructure push and awareness campaigns. However, concerns arise over biases, such as restricted political content aligned with Chinese internet access, and security risks prompting bans in the US, Australia, Germany, and even Microsoft internally.1,2

Leading Theorists on AI Geopolitics and Global Adoption

  • Juan Lavista Ferres (Microsoft AI for Good Lab): Leads the lab behind the report. Observes DeepSeek's technical strengths but notes political divergences, predicting influence on global discourse.2
  • Liang Wenfeng (DeepSeek founder): Drives open-source innovation, authoring peer-reviewed work on efficient AI models that challenge US dominance.2
  • Walid Kéfi (AI commentator): Analyses Africa's generative AI surge, crediting free platforms for scaling adoption amid infrastructure challenges.1

These insights underscore a pivotal shift: AI's future hinges on openness and accessibility, reshaping power dynamics between US and Chinese ecosystems.4

References

1. https://www.ecofinagency.com/news/1301-51867-microsoft-study-maps-africa-s-generative-ai-uptake-as-free-platforms-drive-adoption

2. https://abcnews.go.com/Technology/wireStory/deepseeks-ai-gains-traction-developing-nations-microsoft-report-129021507

3. https://www.euronews.com/next/2026/01/09/deepseeks-ai-gains-traction-in-developing-nations-microsoft-report-says

4. https://www.microsoft.com/en-us/corporate-responsibility/topics/ai-economy-institute/reports/global-ai-adoption-2025/

5. https://blogs.microsoft.com/on-the-issues/2026/01/08/global-ai-adoption-in-2025/

6. https://www.cryptopolitan.com/microsoft-says-china-beating-america-in-ai/

“DeepSeek’s success reflects growing Chinese momentum across Africa, a trend that may continue to accelerate in 2026.” - Quote: Microsoft


Quote: Andrew Ng - AI guru, Coursera founder

"I think one of the challenges is, because AI technology is still evolving rapidly, the skills that are going to be needed in the future are not yet clear today. It depends on lifelong learning." - Andrew Ng - AI guru, Coursera founder

Delivered during a session on Corporate Ladders, AI Reshuffled at the World Economic Forum in Davos in January 2026, this insight from Andrew Ng captures the essence of navigating an era where artificial intelligence advances at breakneck speed. Ng's words underscore a pivotal shift: as AI reshapes jobs and workflows, the uncertainty of future skills demands a commitment to continuous adaptation1,2.

Andrew Ng: The Architect of Modern AI Education

Andrew Ng stands as one of the foremost figures in artificial intelligence, often dubbed an AI guru for his pioneering contributions to machine learning and online education. A British-born computer scientist, Ng co-founded Coursera in 2012, revolutionising access to higher education by partnering with top universities to offer massive open online courses (MOOCs). His platforms, including DeepLearning.AI and Landing AI, have democratised AI skills, training millions worldwide2,3.

Ng's career trajectory is marked by landmark roles: he led the Google Brain project, which advanced deep learning at scale, and served as chief scientist at Baidu, applying AI to real-world applications in search and autonomous driving. As managing general partner at AI Fund, he invests in startups bridging AI with practical domains. At Davos 2026, Ng addressed fears of AI-driven job losses, arguing they are overstated. He broke jobs into tasks, noting AI handles only 30-40% currently, boosting productivity for those who adapt: 'A person that uses AI will be so much more productive, they will replace someone that doesn't use AI'2,3. His emphasis on coding as a 'durable skill', not for becoming engineers but for building personalised software to automate workflows, aligns directly with the quoted challenge of unclear future skills1.

The Broader Context: AI's Impact on Jobs and Skills at Davos 2026

The quote emerged amid Davos discussions on agentic AI systems, autonomous agents managing end-to-end workflows, which push humans towards oversight, judgement, and accountability. Ng highlighted meta-cognitive agility: shifting from perishable technical skills to 'learning to learn'1. This resonates with global concerns; the IMF's Kristalina Georgieva noted that one in ten jobs in advanced economies already needs new skills, with labour markets unprepared1. Ng urged upskilling, especially for regions like India, warning that its IT services sector risks disruption without rapid AI literacy3,5.

Corporate strategies are evolving: the T-shaped model promotes AI literacy across functions (breadth) paired with irreplaceable domain expertise (depth). Firms rebuild talent ladders, replacing grunt work with AI-supported apprenticeships fostering early decision-making1. Ng's optimism tempers hype; AI improves incrementally, not in dramatic leaps, yet demands proactive reskilling3.

Leading Theorists Shaping AI, Skills, and Lifelong Learning

Ng's views build on foundational theorists in AI and labour economics:

  • Geoffrey Hinton, Yann LeCun, and Yoshua Bengio (the 'Godfathers of AI'): Pioneered deep learning, enabling today's breakthroughs. Hinton, Ng's early collaborator at Google Brain, warns of AI risks but affirms its transformative potential for productivity2. Their work underpins Ng's task-based job analysis.
  • Erik Brynjolfsson and Andrew McAfee (MIT): In 'The Second Machine Age', they theorise how digital technologies complement human skills, amplifying 'non-routine' cognitive tasks. This mirrors Ng's productivity shift, where AI augments rather than replaces1,2.
  • Carl Benedikt Frey and Michael Osborne (Oxford): Their 2013 study quantified automation risks for 702 occupations, sparking debates on reskilling. Ng extends this by focusing on partial automation (30-40%) and lifelong learning imperatives2.
  • Daron Acemoglu (MIT): Critiques automation's wage-polarising effects, advocating 'so-so technologies' that automate mid-skill tasks. Ng counters with optimism for human-AI collaboration via upskilling3.

These theorists converge on a consensus: AI disrupts routines but elevates human judgement, creativity, and adaptability, skills honed through lifelong learning, as Ng advocates.

Ng's prescience positions this quote as a clarion call for individuals and organisations to embrace uncertainty through perpetual growth in an AI-driven world.

References

1. https://globaladvisors.biz/2026/01/23/the-ai-signal-from-the-world-economic-forum-2026-at-davos/

2. https://www.storyboard18.com/brand-makers/davos-2026-andrew-ng-says-fears-of-ai-driven-job-losses-are-exaggerated-87874.htm

3. https://www.moneycontrol.com/news/business/davos-summit/davos-2026-ai-is-continuously-improving-despite-perception-that-excitement-has-faded-says-andrew-ng-13780763.html

4. https://www.aicerts.ai/news/andrew-ng-open-source-ai-india-call-resonates-at-davos/

5. https://economictimes.com/tech/artificial-intelligence/india-must-speed-up-ai-upskilling-coursera-cofounder-andrew-ng/articleshow/126703083.cms

"I think one of the challenges is, because AI technology is still evolving rapidly, the skills that are going to be needed in the future are not yet clear today. It depends on lifelong learning." - Quote: Andrew Ng - AI guru, Coursera founder


Term: Steelman argument

"A steelman argument is a dialectical technique where you restate an opponent's position in its strongest, most charitable, and most convincing form, even better than they presented it, before you offer your counterargument, aiming to understand the truth and engage." - Steelman argument

The purpose is not to score rhetorical points, but to understand the underlying truth of the issue, test your own beliefs, and engage respectfully and productively with those who disagree.

In a steelman argument, a participant in a discussion:

  • Listens carefully to the other side's position, reasons, evidence, and concerns.
  • Reconstructs that position as logically, factually, and rhetorically strong as possible, eliminating obvious errors, clarifying ambiguities, and adding reasonable supporting considerations.
  • Checks this reconstruction with the opponent to ensure it is both accurate and recognisable - ideally something they would endorse or even prefer to their original wording.
  • Only then advances their own critique, counterarguments, or alternative proposals, addressing this improved version rather than a weaker one.

This makes steelmanning the conceptual opposite of the straw man fallacy, where a position is caricatured or simplified to make it easier to attack. Where a straw man trades on distortion to make disagreement easier, a steelman trades on fairness and intellectual generosity to make understanding deeper.

Core principles of steelmanning

Four principles underpin effective steelman arguments:

  • Charity - You interpret your counterpart's words in the most reasonable light, attributing to them the most coherent and defensible version of their view, rather than assuming confusion, bad faith, or ignorance.
  • Accuracy - You preserve the core commitments, values, and intended meaning of their position; you do not quietly change what is at stake, even while you improve its structure and support.
  • Strengthening - You explicitly look for the best reasons, analogies, and evidence that could support their view, including arguments they have not yet articulated but would plausibly accept.
  • Verification - You invite your interlocutor to confirm or refine your restatement, aiming for the moment when they can honestly say, "Yes, that is what I mean - and that is an even better version of my view than I initially gave."

Steelman vs. straw man vs. related techniques

  • Steelman argument - What it does: strengthens and clarifies the opposing view before critiquing it. Typical intention: seek truth, understand deeply, and persuade through fairness.
  • Straw man fallacy - What it does: misrepresents or oversimplifies a view to make it easier to refute. Typical intention: win a debate, create rhetorical advantage, or avoid hard questions.
  • Devil's advocate - What it does: adopts a contrary position (not necessarily sincerely held) to expose weaknesses or overlooked risks. Typical intention: stress-test prevailing assumptions, foster critical thinking.
  • Thought experiment / counterfactual - What it does: explores hypothetical scenarios to test principles or intuitions. Typical intention: clarify implications, reveal hidden assumptions, probe edge cases.

Steelman arguments often incorporate elements of counterfactuals and thought experiments. For example, to strengthen a policy criticism, you might ask: "Suppose this policy were applied in a more extreme case - would the same concerns still hold?" You then build the best version of the concern across such scenarios before responding.

Why steelmanning matters in strategy and decision-making

In strategic analysis, investing, policy design, and complex organisational decisions, steelman arguments help to:

  • Reduce confirmation bias by forcing you to internalise the strongest objections to your preferred view.
  • Improve risk management by properly articulating downside scenarios and adverse stakeholder perspectives before discarding them.
  • Enhance credibility with boards, clients, and teams, who see that arguments have been tested against serious, not superficial, opposition.
  • Strengthen strategy by making sure that chosen options have survived comparison with the most powerful alternatives, not just weakly framed ones.

When used rigorously, the steelman discipline often turns a confrontational debate into a form of collaborative problem-solving, where each side helps the other refine their views and the final outcome is more robust than either starting position.

Practical steps to construct a steelman argument

A practical steelmanning process in a meeting, negotiation, or analytical setting might look like this:

  • 1. Elicit and clarify
    Ask the other party to explain their view fully. Use probing but neutral questions: "What is the central concern?", "What outcomes are you trying to avoid?", "What evidence most strongly supports your view?"
  • 2. Map and organise
    Identify their main claims, supporting reasons, implicit assumptions, and key examples. Group these into a coherent structure, ranking the arguments from strongest to weakest.
  • 3. Strengthen
    Add reasonable premises they may have missed, improve their examples, and fill gaps with the best available data or analogies that genuinely support their position.
  • 4. Restate back
    Present your reconstructed version, starting with a phrase such as, "Let me try to state your view as strongly as I can." Invite correction until they endorse it.
  • 5. Engage and test
    Only once agreement on the steelman is reached do you introduce counterarguments, alternative hypotheses, or different scenarios - always addressing the strong version rather than retreating to weaker caricatures.

Best related strategy theorist: John Stuart Mill

Although the term "steelman" is modern, the deepest intellectual justification for the practice in strategy, policy, and public reasoning comes from the nineteenth-century philosopher and political economist John Stuart Mill. His work provides a powerful conceptual foundation for steelmanning, especially in high-stakes decision contexts.

Mill's connection to steelmanning

Mill argued that you cannot truly know your own position unless you also understand, in its most persuasive form, the best arguments for the opposing side. He insisted that anyone who only hears or articulates one side of a case holds their opinion as a "prejudice" rather than a reasoned view. In modern terms, he is effectively demanding that responsible thinkers and decision-makers steelman their opponents before settling on a conclusion.

In his work on liberty, representative government, and political economy, Mill repeatedly:

  • Reconstructed opposing positions in detail, often giving them more systematic support than their own advocates had provided.
  • Explored counterfactual scenarios and hypotheticals to see where each argument would succeed or fail.
  • Treated thoughtful critics as partners in the search for truth rather than as enemies to be defeated.

This method aligns closely with the steelman ethos in modern strategy work: before committing to a policy, investment, or organisational move, you owe it to yourself and your stakeholders to understand the most credible case against your intended path - not a caricature of it.

Biography and intellectual context

John Stuart Mill (1806 - 1873) was an English philosopher, economist, and civil servant, widely regarded as one of the most influential thinkers in the liberal tradition. Educated intensively from a very young age by his father, James Mill, under the influence of Jeremy Bentham, he mastered classical languages, logic, and political economy in his childhood, but suffered a mental crisis in his early twenties that led him to broaden his outlook beyond strict utilitarianism.

Mill's major works include:

  • System of Logic, where he analysed how we form and test hypotheses, including the role of competing explanations.
  • On Liberty, which defended freedom of thought, speech, and experimentation in ways that presuppose an active culture of hearing and strengthening opposing views.
  • Principles of Political Economy, a major text that carefully considers economic arguments from multiple sides before reaching policy conclusions.

As a senior official in the East India Company and later a Member of Parliament, Mill moved between theory and practice, applying his analytical methods to real-world questions of governance, representation, and reform. His insistence that truth and sound policy emerge only from confronting the strongest counter-arguments is a direct ancestor of the modern steelman method in strategic reasoning, board-level debate, and public policy design.

Mill's legacy for modern strategic steelmanning

For contemporary strategists, investors, and leaders, Mill's legacy can be summarised as a disciplined demand: before acting, ensure that you could state the best good-faith case against your intention more clearly and powerfully than its own advocates. Only then is your subsequent decision genuinely informed rather than insulated by bias.

In this way, John Stuart Mill stands as the key historical theorist behind the steelman argument - not for coining the term, but for articulating the intellectual and ethical duty to engage with opponents at their strongest, in pursuit of truth and resilient strategy.

References

1. https://aliabdaal.com/newsletter/the-steelman-argument/

2. https://themindcollection.com/steelmanning-how-to-discover-the-truth-by-helping-your-opponent/

3. https://ratiochristi.org/the-anatomy-of-persuasion-the-steel-man/

4. https://www.youtube.com/watch?v=veeGKTzbYjc

5. https://simplicable.com/en/steel-man

6. https://umbrex.com/resources/tools-for-thinking/what-is-steelmanning/

"A steelman argument is a dialectical technique where you restate an opponent's position in its strongest, most charitable, and most convincing form, even better than they presented it, before you offer your counterargument, aiming to understand the truth and engage." - Term: Steelman argument

‌

‌

Quote: Professor Hannah Fry - University of Cambridge

"Humans are not very good at exponentials. And right now, at this moment, we are standing right on the bend of the curve. AGI is not a distant thought experiment anymore." - Professor Hannah Fry - University of Cambridge

The quote comes at the end of a wide-ranging conversation between applied mathematician and broadcaster Professor Hannah Fry and DeepMind co-founder Shane Legg, recorded for the “Google DeepMind, the podcast” series in late 2025. Fry is reflecting on Legg’s decades-long insistence that artificial general intelligence would arrive much sooner than most experts expected, and on his argument that its impact will be structurally comparable to the Industrial Revolution: a technology that reshapes work, wealth, and the basic organisation of society rather than just adding another digital tool. Her remark that “humans are not very good at exponentials” is a pointed reminder of how easily people misread compounding processes, from pandemics to technological progress, and therefore underestimate how quickly “next decade” scenarios can become “this quarter” realities.

Context of the quote

Fry’s line follows a discussion in which Legg lays out a stepwise picture of AI progress: from today’s uneven but impressive systems, through “minimal AGI” that can reliably perform the full range of ordinary human cognitive tasks, to “full AGI” capable of the most exceptional creative and scientific feats, and then on to artificial superintelligence that eclipses human capability in most domains. Throughout, Legg stresses that current models already exceed humans in language coverage, encyclopaedic knowledge and some kinds of problem solving, while still failing at basic visual reasoning, continual learning, and robust common sense. The trajectory he sketches is not a gentle slope but a sharpening curve, driven by scaling laws, data, architectures and hardware; Fry’s “bend of the curve” image captures the moment when such a curve stops looking linear to human intuition and starts to feel suddenly, uncomfortably steep.

That curve is not just about raw capability but about diffusion into the economy. Legg argues that over the next few years, AI will move from being a helpful assistant to doing a growing share of economically valuable work - starting with software engineering and other high-paid cognitive roles that can be done entirely through a laptop. He anticipates that tasks once requiring a hundred engineers might soon be done by a small team amplified by advanced AI tools, with similarly uneven but profound effects across law, finance, research, and other knowledge professions. By the time Fry delivers her closing reflection, the conversation has moved from technical definitions to questions of social contract: how to design a post-AGI economy, how to distribute the gains from machine intelligence, and how to manage the transition period in which disruption and opportunity coexist.

Hannah Fry: person and perspective

Hannah Fry is a professor in the mathematics of cities who has built a public career explaining complex systems - epidemics, finance, urban dynamics and now AI - to broad audiences. Her training in applied mathematics and complexity science has made her acutely aware of how exponential processes play out in the real world, from contagion curves during COVID-19 to the compounding effect of small percentage gains in algorithmic performance and hardware efficiency. She has repeatedly highlighted the cognitive bias that leads people to underreact when growth is slow and overreact when it becomes visibly explosive, a theme she explicitly connects in this podcast to the early days of the pandemic, when warnings about exponential infection growth were largely ignored while life carried on as normal.

In the AGI conversation, Fry positions herself as an interpreter between technical insiders and a lay audience that is already experiencing AI in everyday tools but may not yet grasp the systemic implications. Her remark that the general public may, in some sense, “get it” better than domain specialists echoes Legg’s observation that non-experts sometimes see current systems as already effectively “intelligent,” while many professionals in affected fields downplay the relevance of AI to their own work. When she says “AGI is not a distant thought experiment anymore,” she is distilling Legg’s timelines - his long-standing 50/50 prediction of minimal AGI by 2028, followed by full AGI within a decade - into a single, accessible warning that the window for slow institutional adaptation is closing.

Meaning of “not very good at exponentials”

The specific phrase “humans are not very good at exponentials” draws on a familiar insight from behavioural economics and cognitive psychology: people routinely misjudge exponential growth, treating it as if it were linear. During the COVID-19 pandemic, this manifested in the gap between early warnings about exponential case growth and the public’s continued attendance at large events right up until visible crisis hit, an analogy Fry explicitly invokes in the episode. In technology, the same bias leads organisations to plan as if next year will look like this year plus a small increment, even when underlying drivers - compute, algorithmic innovation, investment, data availability - are compounding at rates that double capabilities over very short horizons.
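The gap between linear intuition and a compounding process can be made concrete with a short sketch. The doubling rate and numbers below are illustrative assumptions, not figures from the episode:

```python
# Illustration: a quantity that doubles each year versus a naive linear
# forecast that simply repeats the first year's increment.

def exponential(start, doublings):
    """Value after `doublings` periods of doubling."""
    return start * 2 ** doublings

def linear_forecast(start, periods):
    """Naive forecast: extrapolate the first period's increment linearly."""
    first_increment = exponential(start, 1) - start
    return start + first_increment * periods

start = 1.0
for year in range(1, 11):
    actual = exponential(start, year)
    guessed = linear_forecast(start, year)
    print(f"year {year:2d}: linear guess {guessed:6.0f}  actual {actual:6.0f}")

# After 10 doublings the actual value is 1024x the start, while the
# linear guess is only 11x - an underestimate of roughly 93-fold.
```

The early years look almost identical under both forecasts, which is exactly why the "bend of the curve" surprises linear planners.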

Fry’s “bend of the curve” language marks the moment when incremental improvements accumulate until qualitative change becomes hard to ignore: AI systems not only answering questions but autonomously writing production code, conducting literature reviews, proposing experiments, or acting as agents in the world. At that bend, the lag between capability and governance becomes a central concern; Legg emphasises that there will not be enough time for leisurely consensus-building once AGI is fully realised, hence his call for every academic discipline and sector - law, education, medicine, city planning, economics - to begin serious scenario work now. Fry’s closing comment translates that call into a general admonition: exponential technologies demand anticipatory thinking, not reactive crisis management.

Leading theorists behind the ideas

The intellectual backdrop to Fry’s quote and Legg’s perspectives on AGI blends several strands of work in AI theory, safety and the study of technological revolutions.

  • Shane Legg and Ben Goertzel helped revive and popularise the term “artificial general intelligence” in the early 2000s to distinguish systems aimed at broad, human-like cognitive competence from “narrow AI” optimised for specific tasks. Legg’s own academic work, influenced by his supervisor Marcus Hutter, explores formal definitions of universal intelligence and the conditions under which machine systems could match or exceed human problem-solving across many domains.

  • I. J. Good introduced the “intelligence explosion” hypothesis in 1965, arguing that a sufficiently advanced machine intelligence capable of improving its own design could trigger a runaway feedback loop of ever-greater capability. This notion of recursive self-improvement underpins much of the contemporary discourse about AI timelines and the risks associated with crossing particular capability thresholds.

  • Eliezer Yudkowsky developed thought experiments and early arguments about AGI’s existential risks, emphasising that misaligned superintelligence could be catastrophically dangerous even if human developers never intended harm. His writing helped seed the modern AI safety movement and influenced researchers and entrepreneurs who later entered mainstream organisations.

  • Nick Bostrom synthesised and formalised many of these ideas in “Superintelligence: Paths, Dangers, Strategies,” providing widely cited scenarios in which AGI rapidly transitions into systems whose goals and optimisation power outstrip human control. Bostrom’s work is central to Legg’s concern with how to steer AGI safely once it surpasses human intelligence, especially around questions of alignment, control and long-term societal impact.

  • Geoffrey Hinton, Stuart Russell and other AI pioneers have added their own warnings in recent years: Hinton has drawn parallels between AI and other technologies whose potential harms were recognised only after wide deployment, while Russell has argued for a re-founding of AI as the science of beneficial machines explicitly designed to be uncertain about human preferences. Their perspectives reinforce Legg’s view that questions of ethics, interpretability and “System 2 safety” - ensuring that advanced systems can reason transparently about moral trade-offs - are not peripheral but central to responsible AGI development.

Together, these theorists frame AGI as both a continuation of a long scientific project to build thinking machines and as a discontinuity in human history whose effects will compound faster than our default intuitions allow. In that context, Fry’s quote reads less as a rhetorical flourish and more as a condensed thesis: exponential dynamics in intelligence technologies are colliding with human cognitive biases and institutional inertia, and the moment to treat AGI as a practical, near-term design problem rather than a speculative future is now.

References

1. https://eeg.cl.cam.ac.uk

2. https://en.wikipedia.org/wiki/Shane_Legg

3. https://www.youtube.com/watch?v=kMUdrUP-QCs

4. https://www.ibm.com/think/topics/artificial-general-intelligence

5. https://kingy.ai/blog/exploring-the-concept-of-artificial-general-intelligence-agi/

6. https://jetpress.org/v25.2/goertzel.pdf

7. https://www.dce.va/content/dam/dce/resources/en/digital-cultures/Encountering-AI---Ethical-and-Anthropological-Investigations.pdf

8. https://arxiv.org/pdf/1707.08476.pdf

9. https://hermathsstory.eu/author/admin/page/7/

10. https://www.shunryugarvey.com/wp-content/uploads/2021/03/YISR_I_46_1-2_TEXT_P-1.pdf

11. https://dash.harvard.edu/bitstream/handle/1/37368915/Nina%20Begus%20Dissertation%20DAC.pdf?sequence=1&isAllowed=y

12. https://www.facebook.com/groups/lifeboatfoundation/posts/10162407288283455/

13. https://globaldashboard.org/economics-and-development/

14. https://www.forbes.com/sites/gilpress/2024/03/29/artificial-general-intelligence-or-agi-a-very-short-history/

15. https://ebe.uct.ac.za/sites/default/files/content_migration/ebe_uct_ac_za/169/files/WEB%2520UCT%2520CHEM%2520D023%2520Centenary%2520Design.pdf

 

"Humans are not very good at exponentials. And right now, at this moment, we are standing right on the bend of the curve. AGI is not a distant thought experiment anymore." - Quote: Professor Hannah Fry

‌

‌

Quote: Andrew Ng - AI guru, Coursera founder

"There's one skill that is already emerging... it's time to get everyone to learn to code.... not just the software engineers, but the marketers, HR professionals, financial analysts, and so on - the ones that know how to code are much more productive than the ones that don't, and that gap is growing." - Andrew Ng - AI guru, Coursera founder

In a forward-looking discussion at the World Economic Forum's 2026 session on 'Corporate Ladders, AI Reshuffled', Andrew Ng passionately advocates for coding as the pivotal skill defining productivity in the AI era. Delivered in January 2026, this insight underscores how AI tools are democratising coding, enabling professionals beyond software engineering to harness technology for greater efficiency1. Ng's message aligns with his longstanding mission to make advanced technology accessible through education and practical application.

Who is Andrew Ng?

Andrew Ng stands as one of the foremost figures in artificial intelligence, renowned for bridging academia, industry, and education. A British-born computer scientist, he earned his PhD from the University of California, Berkeley, and has held prestigious roles including adjunct professor at Stanford University. Ng co-founded Coursera in 2012, revolutionising online learning by offering courses to millions worldwide, including his seminal 'Machine Learning' course that has educated over 4 million learners. He founded and led Google Brain, Google's deep learning research project, pioneering applications that advanced AI capabilities across industries. Currently, as founder of Landing AI and DeepLearning.AI, Ng focuses on enterprise AI solutions and accessible education platforms. His influence extends to executive positions at Baidu and as a venture capitalist investing in AI startups1,2.

Context of the Quote

The quote emerges from Ng's reflections on AI's transformative impact on workflows, particularly at the WEF 2026 event addressing how AI reshuffles corporate structures. Here, Ng highlights 'vibe coding'-AI-assisted coding that lowers barriers, allowing non-engineers like marketers, HR professionals, and financial analysts to prototype ideas rapidly without traditional hand-coding. He argues this boosts productivity and creativity, warning that the divide between coders and non-coders will widen. Recent talks, such as at Snowflake's Build conference, reinforce this: 'The bar to coding is now lower than it ever has been. People that code... will really get more done'1. Ng critiques academia for lagging behind, noting unemployment among computer science graduates due to outdated curricula ignoring AI tools, and stresses industry demand for AI-savvy talent1,2.

Leading Theorists and the Broader Field

Ng's advocacy builds on foundational AI theories while addressing practical upskilling. Pioneers like Geoffrey Hinton, often called the 'Godfather of Deep Learning', laid groundwork through backpropagation and neural networks, influencing Ng's Google Brain work. Hinton warns of AI's job displacement risks but endorses human-AI collaboration. Yann LeCun, Meta's Chief AI Scientist, complements this with convolutional neural networks essential for computer vision, emphasising open-source AI for broad adoption. Fei-Fei Li, 'Godmother of AI', advanced image recognition and co-directs Stanford's Human-Centered AI Institute, aligning with Ng's educational focus.

In skills discourse, World Economic Forum's Future of Jobs Report 2025 projects technological skills, led by AI and big data, as fastest-growing in importance through 2030, alongside lifelong learning3. Microsoft CEO Satya Nadella echoes: 'AI won't replace developers, but developers who use AI will replace those who don't'3. Nvidia's Jensen Huang and Klarna's Sebastian Siemiatkowski advocate AI agents and tools like Cursor, predicting hybrid human-AI teams1. Ng's tips-take AI courses, build systems hands-on, read papers-address a talent crunch where 51% of tech leaders struggle to find AI skills2.

Implications for Careers and Workflows

  • AI-Assisted Coding: Tools like GitHub Copilot, Cursor, and Replit enable 'agentic development', delegating routine tasks to AI while humans focus on creativity1,3.
  • Universal Upskilling: Ng urges structured learning via platforms like Coursera, followed by practice, as theory alone is insufficient - like studying aeroplanes without flying2.
  • Industry Shifts: Companies like Visa and DoorDash now require AI code generator experience; polyglot programming (Python, Rust) and prompt engineering rise1,3.
  • Warnings: Despite optimism, experts like Stuart Russell caution AI could disrupt 80% of jobs, underscoring the need for adaptive skills2.

Ng's vision positions coding not as a technical niche but a universal lever for productivity in an AI-driven world, urging immediate action to close the growing gap.

References

1. https://timesofindia.indiatimes.com/technology/tech-news/google-brain-founder-andrew-ng-on-why-it-is-still-important-to-learn-coding/articleshow/125247598.cms

2. https://www.finalroundai.com/blog/andrew-ng-ai-tips-2026

3. https://content.techgig.com/career-advice/top-10-developer-skills-to-learn-in-2026/articleshow/125129604.cms

4. https://www.coursera.org/in/articles/ai-skills

5. https://www.idnfinancials.com/news/58779/ai-expert-andrew-ng-programmers-are-still-needed-in-a-different-way

"There's one skill that is already emerging... it's time to get everyone to learn to code.... not just the software engineers, but the marketers, HR professionals, financial analysts, and so on - the ones that know how to code are much more productive than the ones that don't, and that gap is growing." - Quote: Andrew Ng - AI guru, Coursera founder

‌

‌

Term: Counterfactual

"A counterfactual is a hypothetical scenario or statement that considers what would have happened if a specific event or condition had been different from what actually occurred. In simple terms, it is a 'what if' or 'if only' thought process that contradicts the established facts." - Counterfactual

A counterfactual is a hypothetical scenario or statement that imagines what would have happened if a specific event, condition, or action had differed from what actually occurred. It represents a 'what if' or 'if only' thought process that directly contradicts established facts, enabling exploration of alternative possibilities for past or future events.

Counterfactual thinking involves mentally simulating outcomes contrary to reality, such as 'If I had not taken that sip of hot coffee, I would not have burned my tongue.' This cognitive process is common in reflection on mistakes, regrets, or opportunities, like pondering 'If only I had caught that flight, my career might have advanced differently.'1,2,3

Key Characteristics and Types

  • Additive vs. Subtractive: Additive counterfactuals imagine adding an action (e.g., 'If I had swerved, the accident would have been avoided'), while subtractive ones remove one (e.g., 'If the child had not cried, I would have focused on the road').3
  • Upward vs. Downward: Upward focuses on better alternatives, often leading to regret; downward considers worse ones, fostering relief.3
  • Mutable vs. Immutable: People tend to mutate exceptional or controllable events in their imaginings.1

Applications Across Disciplines

In causal inference, counterfactuals estimate effects by comparing observed outcomes to hypothetical ones, such as 'What would the yield be if a different treatment were applied to this plot?' They underpin concepts like potential outcomes in statistics.4,7

In philosophy and logic, counterfactuals are analysed as conditionals where the antecedent is false, symbolised as A □→ C (if A were the case, C would be), contrasting with material implications.6

In machine learning, counterfactual explanations clarify model decisions, e.g., 'If feature X changed to value x, the prediction would shift.'2

Everyday examples include regretting a missed job ('If I had not been late, I would have that promotion') or entrepreneurial reflection ('If we chose a different partner, the startup might have succeeded').3

Leading Theorist: Judea Pearl

The most influential modern theorist linking counterfactuals to strategy is Judea Pearl, a pioneering computer scientist and philosopher whose causal inference framework revolutionised how counterfactuals inform decision-making, policy analysis, and strategic planning.

Biography: Born in 1936 in Tel Aviv, Pearl emigrated to the US in 1960 after studying electrical engineering in Israel. He earned a master's degree from Rutgers University and a PhD from the Polytechnic Institute of Brooklyn in 1965, then joined UCLA, where he is now a professor emeritus. Initially focused on AI and probabilistic reasoning, Pearl developed Bayesian networks in the 1980s, earning the Turing Award in 2011 for advancing AI through probability and causality.

Relationship to Counterfactuals: Pearl's seminal work, Probabilistic Reasoning in Intelligent Systems (1988) and Causality (2000), formalised counterfactuals using structural causal models (SCMs). He defined the counterfactual query 'Y would be y had X been x' via do-interventions and potential outcomes, e.g., Y_x(u) = y denotes the value Y takes under intervention do(X=x) in unit u's background context.4 This 'ladder of causation'-from association to intervention to counterfactuals-enables strategic 'what if' analysis, such as evaluating policy impacts or business decisions by computing missing data: 'Given observed E=e, what is expected Y if X differed?'4

Pearl's framework aids strategists in risk assessment, A/B testing, and scenario planning, distinguishing correlation from causation. His do-calculus provides computable algorithms for counterfactuals, making them practical tools beyond mere speculation.4,7
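Pearl's three-step counterfactual recipe (abduction, action, prediction) can be sketched on a toy structural causal model. The equations and numbers below are illustrative assumptions, not an example from his texts:

```python
# Toy SCM:  X = U_x ;  Y = 2*X + U_y
# Query: given we observed X = 1 and Y = 5, what would Y have been
# had X been 0?  (Pearl's Y_{X=0}(u) for this unit's context u.)

def f_y(x, u_y):
    # Structural equation for Y.
    return 2 * x + u_y

x_obs, y_obs = 1, 5

# Step 1 (abduction): infer the background context U from the evidence.
u_y = y_obs - 2 * x_obs          # U_y = 3 for this unit

# Step 2 (action): intervene do(X = 0), overriding X's usual mechanism.
x_new = 0

# Step 3 (prediction): recompute Y under the intervention, keeping U fixed.
y_counterfactual = f_y(x_new, u_y)

print(f"Y would have been {y_counterfactual} had X been {x_new}")  # -> 3
```

Holding the unit-specific background `u_y` fixed across steps is what distinguishes this counterfactual from a mere prediction about a fresh unit with X = 0.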

References

1. https://conceptually.org/concepts/counterfactual-thinking

2. https://christophm.github.io/interpretable-ml-book/counterfactual.html

3. https://helpfulprofessor.com/counterfactual-thinking-examples/

4. https://bayes.cs.ucla.edu/PRIMER/primer-ch4.pdf

5. https://www.merriam-webster.com/dictionary/counterfactual

6. https://plato.stanford.edu/entries/counterfactuals/

7. https://causalwizard.app/inference/article/counterfactual

"A counterfactual is a hypothetical scenario or statement that considers what would have happened if a specific event or condition had been different from what actually occurred. In simple terms, it is a 'what if' or 'if only' thought process that contradicts the established facts." - Term: Counterfactual

‌

‌

Quote: Wingate, et al - MIT SMR

"It is tempting for a company to believe that it will somehow benefit from AI while others will not, but history teaches a different lesson: Every serious technical advance ultimately becomes equally accessible to every company." - Wingate, et al - MIT SMR

The Quote in Context

David Wingate, Barclay L. Burns, and Jay B. Barney's assertion that companies cannot sustain competitive advantage through AI alone represents a fundamental challenge to prevailing business orthodoxy. Their observation-that every serious technical advance ultimately becomes equally accessible-draws from decades of technology adoption patterns and competitive strategy theory. This insight, published in the MIT Sloan Management Review in 2025, cuts through the hype surrounding artificial intelligence to expose a harder truth: technological parity, not technological superiority, is the inevitable destination.

The Authors and Their Framework

David Wingate, Barclay L. Burns, and Jay B. Barney

The three researchers who authored this influential piece bring complementary expertise to the question of sustainable competitive advantage. Their collaboration represents a convergence of strategic management theory and practical business analysis. By applying classical frameworks of competitive advantage to the contemporary AI landscape, they demonstrate that the fundamental principles governing technology adoption have not changed, even as the technology itself has become more sophisticated and transformative.

Their central thesis rests on a deceptively simple observation: artificial intelligence, like the internet, semiconductors, and electricity before it, possesses a critical characteristic that distinguishes it from sources of lasting competitive advantage. Because AI is fundamentally digital, it is inherently copyable, scalable, repeatable, predictable, and uniform. This digital nature means that any advantage derived from AI adoption will inevitably diffuse across the competitive landscape.

The Three Tests of Sustainable Advantage

Wingate, Burns, and Barney employ a rigorous analytical framework derived from resource-based theory in strategic management. They argue that for any technology to confer sustainable competitive advantage, it must satisfy three criteria simultaneously:

  • Valuable: The technology must create genuine economic value for the organisation
  • Unique: The technology must be unavailable to competitors
  • Inimitable: Competitors must be unable to replicate the advantage

Whilst AI unquestionably satisfies the first criterion-it is undeniably valuable-it fails the latter two. No organisation possesses exclusive access to AI technology, and the barriers to imitation are eroding rapidly. This analytical clarity explains why even early adopters cannot expect their advantages to persist indefinitely.

Historical Precedent and Technology Commoditisation

The Pattern of Technical Diffusion

The authors' invocation of historical precedent is not merely rhetorical flourish; it reflects a well-documented pattern in technology adoption. When electricity became widely available, early industrial adopters gained temporary advantages in productivity and efficiency. Yet within a generation, electrical power became a commodity-a baseline requirement rather than a source of differentiation. The same pattern emerged with semiconductors, computing power, and internet connectivity. Each represented a genuine transformation of economic capability, yet each eventually became universally accessible.

This historical lens reveals a crucial distinction between transformative technologies and sources of competitive advantage. A technology can fundamentally reshape an industry whilst simultaneously failing to provide lasting differentiation for any single competitor. The value created by the technology accrues to the market as a whole, lifting all participants, rather than concentrating advantage in the hands of early movers.

The Homogenisation Effect

Wingate, Burns, and Barney emphasise that AI will function as a source of homogenisation rather than differentiation. As AI capabilities become standardised and widely distributed, companies using identical or near-identical AI platforms will produce increasingly similar products and services. Consider their example of multiple startups developing AI-powered digital mental health therapists: all building on comparable AI platforms, all producing therapeutically similar systems, all competing on factors beyond the underlying technology itself.

This homogenisation effect has profound strategic implications. It means that competitive advantage cannot reside in the technology itself but must instead emerge from what the authors term residual heterogeneity-the ability to create something unique that extends beyond what is universally accessible.

Challenging the Myths of Sustainable AI Advantage

Capital and Hardware Access

One common belief holds that companies with superior access to capital and computing infrastructure can sustain AI advantages. Wingate, Burns, and Barney systematically dismantle this assumption. Whilst it is true that organisations with the largest GPU farms can train the most capable models, scaling laws ensure diminishing returns. Recent models like GPT-4 and Gemini represent only marginal improvements over their predecessors despite requiring massive investments in data centres and engineering talent. The cost-benefit curve flattens dramatically at the frontier of capability.

Moreover, the hardware necessary for state-of-the-art AI training is becoming increasingly commoditised. Smaller models with 7 billion parameters now match the performance of yesterday's 70-billion-parameter systems. This dual pressure-from above (ever-larger models with diminishing returns) and below (increasingly capable smaller models)-ensures that hardware access cannot sustain competitive advantage for long.
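The flattening cost-benefit curve can be illustrated with a toy power-law scaling relation. The exponent and numbers are arbitrary assumptions for illustration, not an empirical scaling law from the article:

```python
# Illustration: if loss falls as compute^-alpha, each further 10x of
# compute buys a smaller absolute improvement, so returns diminish
# at the frontier even as spending grows tenfold per step.

def loss(compute, alpha=0.05):
    # Hypothetical power-law relation between compute and model loss.
    return compute ** -alpha

prev = loss(1.0)
for exp in range(1, 6):              # 10x, 100x, ..., 100000x compute
    cur = loss(10.0 ** exp)
    print(f"compute 10^{exp}: loss {cur:.3f}  gain {prev - cur:.3f}")
    prev = cur
```

Each row costs ten times the previous one yet delivers a smaller gain, which is the shape of the argument that frontier-scale investment cannot by itself sustain advantage.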

Proprietary Data and Algorithmic Innovation

Perhaps the most compelling argument for sustainable AI advantage has centred on proprietary data. Yet even this fortress is crumbling. The authors note that almost all AI models derive their training data from the same open or licensed datasets, producing remarkably similar performance profiles. Synthetic data generation is advancing rapidly, reducing the competitive moat that proprietary datasets once provided. Furthermore, AI models are becoming increasingly generalised-capable of broad competence across diverse tasks and easily adapted to proprietary applications with minimal additional training data.

The implication is stark: merely possessing large quantities of proprietary data will not provide lasting protection. As AI research advances toward greater statistical efficiency, the amount of proprietary data required to adapt general models to specific tasks will continue to diminish.

The Theoretical Foundations: Strategic Management Theory

Resource-Based View and Competitive Advantage

The analytical framework employed by Wingate, Burns, and Barney draws from the resource-based view (RBV) of the firm, a dominant paradigm in strategic management theory. Developed primarily by scholars including Jay Barney himself (one of the article's authors), the RBV posits that sustainable competitive advantage derives from resources that are valuable, rare, difficult to imitate, and non-substitutable.

This theoretical tradition has proven remarkably durable precisely because it captures something fundamental about competition: advantages that can be easily replicated cannot persist. The RBV framework has successfully explained why some companies maintain competitive advantages whilst others do not, across industries and time periods. By applying this established theoretical lens to AI, Wingate, Burns, and Barney demonstrate that AI does not represent an exception to these fundamental principles-it exemplifies them.

The Distinction Between Transformative and Differentiating Technologies

A critical insight emerging from their analysis is the distinction between technologies that transform industries and technologies that confer competitive advantage. These are not synonymous. Electricity transformed manufacturing; the internet transformed commerce; semiconductors transformed computing. Yet none of these technologies provided lasting competitive advantage to any single organisation once they became widely adopted. The value they created was real and substantial, but it accrued to the market collectively rather than to individual competitors exclusively.

AI follows this established pattern. Its transformative potential is genuine and profound. It will reshape business processes, redefine skill requirements, unlock new analytical possibilities, and increase productivity across sectors. Yet these benefits will be available to all competitors, not reserved for the few. The strategic challenge for organisations is therefore not to seek advantage in the technology itself but to identify where advantage can still be found in an AI-saturated competitive landscape.

The Concept of Residual Heterogeneity

Beyond Technology: The Human Element

Wingate, Burns, and Barney introduce the concept of residual heterogeneity as the key to understanding where sustainable advantage lies in an AI-dominated future. Residual heterogeneity refers to the ability of a company to create something unique that extends beyond what is accessible to everyone else. It encompasses the distinctly human elements of business: creativity, insight, passion, and strategic vision.

This concept represents a return to first principles in competitive strategy. Before the AI era, before the digital revolution, before the internet, competitive advantage derived from human ingenuity, organisational culture, brand identity, customer relationships, and strategic positioning. The authors argue that these sources of advantage have not been displaced by technology; rather, they have become more important as technology itself becomes commoditised.

Practical Implications for Strategy

The strategic implication is clear: companies should not invest in AI with the expectation that the technology itself will provide lasting differentiation. Instead, they should view AI as a capability enabler, a tool that allows them to execute their distinctive strategy more effectively. The sustainable advantage lies not in having AI but in what the organisation does with AI that others cannot or will not replicate.

This might involve superior customer insight that informs how AI is deployed, distinctive brand positioning that AI helps reinforce, unique organisational culture that attracts talent capable of innovative AI applications, or strategic vision that identifies opportunities others overlook. In each case, the advantage derives from human creativity and strategic acumen, with AI serving as an accelerant rather than the source of differentiation.

Temporary Advantage and Strategic Timing

The Value of Being First

Whilst Wingate, Burns, and Barney emphasise that sustainable advantage cannot derive from AI, they implicitly acknowledge that temporary advantage has real strategic value. Early adopters can gain speed-to-market advantages, compress product development cycles, and accumulate learning curve advantages before competitors catch up. In fast-moving markets, a year or two of advantage can be decisive: sufficient to capture market share, build brand equity, establish customer switching costs, and create momentum that persists even after competitive parity is achieved.

The authors employ a surfing metaphor that captures this dynamic perfectly: every competitor can rent the same surfboard, but only a few will catch the first big wave. That wave may not last forever, but riding it well can carry a company far ahead. The temporary advantage is real; it is simply not sustainable in the long term.

Implications for Business Strategy and Innovation

Reorienting Strategic Thinking

The Wingate, Burns, and Barney framework calls for a fundamental reorientation of how organisations think about AI strategy. Rather than viewing AI as a source of competitive advantage, organisations should view it as a necessary capability: a baseline requirement for competitive participation. The strategic question is not "How can we use AI to gain advantage?" but rather "How can we use AI to execute our distinctive strategy more effectively than competitors?"

This reorientation has profound implications for resource allocation, talent acquisition, and strategic positioning. It suggests that organisations should invest in AI capabilities whilst simultaneously investing in the human creativity, strategic insight, and organisational culture that will ultimately determine competitive success. The technology is necessary but not sufficient.

The Enduring Importance of Human Creativity

Perhaps the most important implication of the authors' analysis is the reassertion of human creativity as the ultimate source of competitive advantage. In an era of technological hype, it is easy to assume that machines will increasingly determine competitive outcomes. The Wingate, Burns, and Barney analysis suggests otherwise: as technology becomes commoditised, the distinctly human capacities for creativity, insight, and strategic vision become more valuable, not less.

This conclusion aligns with broader trends in strategic management theory, which have increasingly emphasised the importance of organisational culture, human capital, and strategic leadership. Technology amplifies these human capabilities; it does not replace them. The organisations that will thrive in an AI-saturated competitive landscape will be those that combine technological sophistication with distinctive human insight and creativity.

Conclusion: A Sobering Realism

Wingate, Burns, and Barney's assertion that every serious technical advance ultimately becomes equally accessible represents a sobering but realistic assessment of competitive dynamics in the AI era. It challenges the prevailing narrative that early AI adoption will confer lasting competitive advantage. Instead, it suggests that organisations should approach AI with clear-eyed realism: as a transformative technology that will reshape industries and lift competitive baselines, but not as a source of sustainable differentiation.

The strategic imperative is therefore to invest in AI capabilities whilst simultaneously cultivating the human creativity, organisational culture, and strategic insight that will ultimately determine competitive success. The technology is essential; the human element is decisive. In this sense, the AI revolution represents not a departure from established principles of competitive advantage but a reaffirmation of them: lasting advantage derives from what is distinctive, difficult to imitate, and rooted in human creativity, not from technology that is inherently copyable and universally accessible.

References

1. https://www.sensenet.com/en/blog/posts/why-ai-can-provide-competitive-advantage

2. https://sloanreview.mit.edu/article/why-ai-will-not-provide-sustainable-competitive-advantage/

3. https://grtshw.substack.com/p/beyond-ai-human-insight-as-the-advantage

4. https://informedi.org/2025/05/16/why-ai-will-not-provide-sustainable-competitive-advantage/

5. https://shop.sloanreview.mit.edu/why-ai-will-not-provide-sustainable-competitive-advantage

"It is tempting for a company to believe that it will somehow benefit from AI while others will not, but history teaches a different lesson: Every serious technical advance ultimately becomes equally accessible to every company." - Wingate, Burns, and Barney

© 2026 Global Advisors | Quantified Strategy Consulting, All rights reserved.