Global Advisors | Quantified Strategy Consulting

Quote: Brian Moynihan – Bank of America CEO

“You can see upwards of $6 trillion in deposits flow off the liabilities of a banking system… into the stablecoin environment… they’re either not going to be able to loan or they’re going to have to get wholesale funding and that wholesale funding will come at a cost that will increase the cost of borrowing.” – Brian Moynihan – Bank of America CEO

In the rapidly evolving landscape of digital finance, Brian Moynihan, CEO of Bank of America, issued a stark warning during the bank’s Q4 2025 earnings call on 15 January 2026. He highlighted the potential for up to $6 trillion in deposits – roughly 30% to 35% of total US commercial bank deposits – to shift from traditional banking liabilities into the stablecoin ecosystem if regulators permit stablecoin issuers to pay interest.1,2

Context of the Quote

Moynihan’s comments arose amid intense legislative debates over stablecoin regulation in the United States. With US commercial bank deposits standing at $18.61 trillion in January 2026 and the stablecoin market capitalisation at just $315 billion, the scale of this projected outflow underscores a profound threat to the fractional reserve banking model.1 Banks rely on low-cost customer deposits to fund loans to households and businesses, especially small and mid-sized enterprises. A mass migration to interest-bearing stablecoins would cripple lending capacity or force reliance on pricier wholesale funding, thereby elevating borrowing costs across the economy.1,2
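The quoted proportion can be checked directly from the figures above (a minimal arithmetic sketch; the variable names and formatting are illustrative):

```python
# Back-of-the-envelope check of the deposit figures cited above.
total_deposits_tn = 18.61    # US commercial bank deposits, Jan 2026 ($ trillion)
projected_outflow_tn = 6.0   # Moynihan's upper-bound stablecoin outflow ($ trillion)

share = projected_outflow_tn / total_deposits_tn
print(f"Projected outflow as share of deposits: {share:.1%}")  # ~32.2%, within the quoted 30-35% range
```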

This concern echoes broader industry pushback. Executives from JPMorgan and Bank of America have criticised proposals allowing stablecoin yields or rewards, viewing them as direct competition. A US Senate bill aimed at formalising cryptocurrency regulation has stalled amid lobbying from the American Bankers Association, which seeks to prohibit interest on stablecoins. Meanwhile, the GENIUS Act, signed by President Donald Trump in July 2025, marked the first explicit crypto legislation, spurring financial institutions to enter the space while intensifying turf wars as crypto firms pursue banking charters.3

Who is Brian Moynihan?

Brian Moynihan has led Bank of America since January 2010, steering the institution through post-financial crisis recovery, digital transformation, and now the crypto challenge. A University of Notre Dame Law School graduate with a prior stint at FleetBoston Financial, Moynihan expanded BofA’s wealth management and consumer banking arms, growing assets to over $3 trillion. His tenure has emphasised regulatory compliance and innovation, yet he remains vocal on threats like stablecoins that could disrupt deposit stability.1,2

Backstory on Leading Theorists in Stablecoins and Banking Disruption

The stablecoin phenomenon builds on foundational ideas from monetary theorists and crypto pioneers who envisioned programmable money challenging centralised banking.

  • Satoshi Nakamoto: The pseudonymous creator of Bitcoin in 2008 laid the groundwork by introducing decentralised digital currency, free from central bank control. Bitcoin’s volatility spurred stablecoins as a bridge to everyday use.1
  • Vitalik Buterin: Ethereum’s co-founder (2015) enabled smart contracts, powering algorithmic stablecoins like DAI. Buterin’s vision of decentralised finance (DeFi) posits stablecoins as superior stores of value with yields from on-chain protocols, bypassing banks.3
  • Milton Friedman: The Nobel laureate predicted in 1999 that the internet would give rise to ‘a reliable e-cash’, and his long-standing advocacy of fixed money-supply rules prefigured stablecoins. Friedman argued such rule-based systems could curb inflation better than discretionary fiat policy, influencing modern dollar-pegged tokens like USDT and USDC.1
  • Hayek and Free Banking Theorists: Friedrich Hayek’s Denationalisation of Money (1976) advocated competing private currencies, a concept realised in stablecoins issued by firms like Tether and Circle. This challenges the state’s monopoly on money issuance.3
  • Crypto Economists like Jeremy Allaire (Circle CEO): Allaire champions stablecoins as ‘internet-native money’ for payments and remittances, arguing they offer efficiency banks cannot match. His firm issues USDC, now integral to global transfers.1,3

These thinkers collectively argue that stablecoins democratise finance, offering transparency, yield, and borderless access. Yet banking leaders like Moynihan counter that without safeguards, this shift risks systemic instability by eroding the deposit base that fuels economic growth.2

Implications for Finance

Moynihan’s forecast spotlights a pivotal regulatory crossroads. Permitting interest on stablecoins could accelerate adoption, potentially reshaping payments, lending, and funding markets. Banks lobby for restrictions to preserve their model, while crypto advocates push for innovation. As frameworks like the GENIUS Act evolve, the battle over $6 trillion in deposits will define the interplay between traditional finance and blockchain.1,3

References

1. https://www.binance.com/sv/square/post/35227018044185

2. https://www.idnfinancials.com/news/60480/bofa-ceo-stablecoins-pay-interest-us6tn-in-bank-deposits-at-risk

3. https://www.emarketer.com/content/stablecoin-rules-jpmorgan-bofa-interest

"You can see upwards of $6 trillion in deposits flow off the liabilities of a banking system... into the stablecoin environment... they're either not going to be able to loan or they're going to have to get wholesale funding and that wholesale funding will come at a cost that will increase the cost of borrowing." - Quote: Brian Moynihan - Bank of America CEO

read more
Term: Right to Win

Term: Right to Win

“The ‘Right to Win’ (RTW) is a company’s unique, sustainable ability to succeed in a specific market by leveraging superior capabilities, products, and a differentiated ‘way to play’ that outperform competitors, giving them a better-than-even chance of creating value and growth.” – Right to Win

A company’s right to win is the recognition that it is better prepared than its competitors to attract and keep the customers it cares about, grounded in a sustainable competitive advantage that extends beyond short-term market positioning.1 This concept represents more than simply having superior resources; it is the ability to engage in any competitive market with a better-than-even chance of success consistently over time.3 The right to win emerges when a company aligns three interlocking strategic elements: a differentiated way to play, a robust capabilities system, and product and service fit that work together coherently.1

The Three Pillars of Right to Win

The foundation of a right to win rests on understanding what your company can do better than anyone else. Rather than pursuing growth indiscriminately across multiple areas, successful organisations focus on identifying three to six differentiating capabilities – the interconnected people, knowledge, systems, tools and processes that create distinctive value for customers.1,5 These capabilities differ fundamentally from assets; whilst assets such as facilities, machinery, and supplier connections can be replicated by competitors, capabilities cannot.1 The critical question becomes: “What do we do well to deliver value?”1

A well-developed way to play represents a chosen position in a market, grounded in understanding your capabilities and where the market is heading.1 This positioning must fulfil four essential criteria: there must be a market that values your approach; it must be differentiated from competitors’ ways to play; it must remain relevant given expected industry changes; and it must be supported by your capabilities system, making it feasible.1 Finally, the product and service fit ensures that offerings are directly aligned with the capabilities system, delivering superior returns to shareholders.1

Coherence acts as the binding agent across these three elements.1 Achieving alignment with one or even two elements proves insufficient; only when all three synchronise with one another and with the right market conditions can a company truly claim a sustainable right to win.1

Building and Sustaining Competitive Advantage

The right to win is not inherited; it is earned through strategic alignment and disciplined execution.2 This requires an in-depth understanding of the competitive landscape, customer expectations, and team capabilities.2 A strategy that leverages unique assets or insights creates a competitive moat, making it challenging for competitors to catch up, though execution remains where many organisations falter.2

Innovation and adaptability prove essential to sustaining this advantage.2 Organisations that continuously evolve, anticipate market shifts, and adapt their goods and services accordingly are more likely to maintain their competitive edge.2 This does not mean chasing every new trend but rather maintaining a keen sense of which innovations align with core competencies and long-term vision.2 Building a culture of excellence – attracting and nurturing top talent, fostering continuous improvement, and encouraging innovation – represents an often-overlooked yet significant asset in securing the right to win.2

Strategic Applications and Growth Pathways

Right-to-win strategies fall into four categories: customer-driven, capability-driven, value-chain-based, and those building on disruptive business models or technologies.4 The most utilised approach involves fulfilling unmet needs for existing customers that the core business does not currently address.4 However, the strategy delivering the biggest revenue gains involves leveraging core business capabilities – such as patents, technological know-how, or brand equity – to expand into adjacent and breakout businesses.4 Companies successfully utilising two or more right-to-win strategies to move into adjacent markets delivered 12 percentage points higher excess total shareholder return versus their subindustry peers.4

Assessing Your Right to Win

Organisations can evaluate their right to win through systematic analysis. This involves identifying the two most relevant competitors, determining three to six differentiating capabilities required for success, listing key assets and table-stakes activities, and rating performance across these dimensions.5 Differentiating capabilities should be specific and interconnected rather than merely listing functions or organisational units.5 For example, one of Apple’s differentiating capabilities is “innovation around customer interfaces to create better communications and entertainment experiences.”5 Assets, whilst less sustainable than capabilities, represent criteria important to the market and warrant inclusion in competitive assessment.5
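The rating step of this exercise can be made concrete in a few lines of code. The sketch below is illustrative only – the capability names, competitors, and scores are hypothetical placeholders, not data from the cited sources:

```python
# Hypothetical right-to-win rating exercise: score yourself and your two
# most relevant competitors (1-5 scale) against differentiating capabilities.

capabilities = [
    "Customer-interface innovation",
    "Rapid product iteration",
    "Channel partner integration",
]

scores = {
    "Us":           [4, 5, 3],
    "Competitor A": [3, 4, 4],
    "Competitor B": [5, 3, 2],
}

for company, ratings in scores.items():
    average = sum(ratings) / len(ratings)
    print(f"{company:<13} average capability score: {average:.2f}")
```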

Related Theorist: C.K. Prahalad and the Core Competence Framework

The concept of right to win draws significantly from the work of C.K. Prahalad (1941-2010), an influential Indian-American business theorist and consultant who fundamentally shaped modern strategic thinking through his development of the core competence framework. Prahalad’s seminal 1990 Harvard Business Review article, co-authored with Gary Hamel, “The Core Competence of the Corporation,” introduced the revolutionary idea that organisations should identify and leverage their unique, hard-to-imitate capabilities rather than pursuing diversification across unrelated business areas.1

Born in Coimbatore, India, Prahalad earned his undergraduate degree in physics and mathematics before pursuing business education. He spent much of his career at the University of Michigan’s Ross School of Business, where he conducted extensive research on strategic management and organisational capability. His work challenged the prevailing strategic orthodoxy of the 1980s, which emphasised portfolio management and strategic business units. Instead, Prahalad argued that companies should view themselves as portfolios of core competencies – the collective learning and coordination of diverse production skills and technologies – rather than collections of discrete business units.

Prahalad’s framework directly underpins the right to win concept. He demonstrated that sustainable competitive advantage emerges not from owning assets but from developing distinctive capabilities that competitors cannot easily replicate. His research showed that companies like Sony, Honda, and 3M succeeded not because they possessed superior resources but because they had cultivated unique organisational capabilities in areas such as miniaturisation, engine design, or innovation processes. These capabilities enabled them to enter adjacent markets and create new products that competitors struggled to match.

Beyond core competence theory, Prahalad later developed the concept of the “bottom of the pyramid,” exploring how companies could create right-to-win strategies by serving low-income consumers in emerging markets through innovation and capability leverage. His work emphasised that strategic advantage comes from understanding what your organisation does distinctively well and then systematically building, protecting, and extending those capabilities across markets and customer segments.

Prahalad’s intellectual legacy remains central to contemporary strategic management. His insistence that capabilities-not assets-form the foundation of competitive advantage directly informs how modern organisations approach the right to win. His framework provides the theoretical scaffolding that explains why companies with seemingly fewer resources can outperform better-capitalised competitors: they possess superior, integrated capabilities that create distinctive value. This insight transformed strategic planning from a financial exercise into a capabilities-centred discipline, making Prahalad’s work indispensable to understanding the right to win in contemporary business strategy.

References

1. https://www.pwc.com/mt/en/publications/other/does-your-strategy-give-you-the-right-to-win.html

2. https://multifamilycollective.com/2024/02/strategy-how-do-we-define-our-right-to-win/

3. https://intrico.io/interview-best-practices/right-to-win

4. https://www.mckinsey.com/capabilities/growth-marketing-and-sales/our-insights/next-in-growth/adjacent-business-growth-making-the-most-of-your-right-to-win

5. https://www.strategyand.pwc.com/gx/en/unique-solutions/capabilities-driven-strategy/right-to-win-exercise.html

6. https://steemit.com/quality/@hefziba/the-right-to-play-and-the-right-to-win-and-how-to-design-quality-into-a-product

"The 'Right to Win' (RTW) is a company's unique, sustainable ability to succeed in a specific market by leveraging superior capabilities, products, and a differentiated 'way to play' that outperform competitors, giving them a better-than-even chance of creating value and growth." - Term: Right to Win

read more
Quote: Clayton Christensen

Quote: Clayton Christensen

“What’s important is to get out there and try stuff until you learn where your talents, interests, and priorities begin to pay off. When you find out what really works for you, then it’s time to flip from an emergent strategy to a deliberate one.” – Clayton Christensen – Author

This profound advice from Clayton Christensen encapsulates a timeless principle for personal and professional growth: the value of experimentation followed by focused commitment. Drawn from his bestselling book How Will You Measure Your Life?, the quote urges individuals to embrace trial and error in discovering their true strengths before committing to a structured path. Christensen, a renowned Harvard Business School professor, applies business strategy concepts to life’s big questions, advocating for an initial phase of exploration – termed an ‘emergent strategy’ – before shifting to a ‘deliberate strategy’ once clarity emerges.1,7

Who Was Clayton Christensen?

Clayton Magleby Christensen (1952-2020) was an American academic, author, and business consultant whose ideas reshaped management theory. Born in Salt Lake City, Utah, he earned a bachelor’s degree in economics from Brigham Young University, an MBA from Harvard, and a DBA from Harvard Business School. Christensen joined the Harvard faculty in 1992, where he taught for nearly three decades, influencing generations of leaders.1,5

His seminal work, The Innovator’s Dilemma (1997), introduced the theory of disruptive innovation, explaining how established companies fail by focusing on sustaining innovations for current customers while overlooking simpler, cheaper alternatives that disrupt markets from below. This concept has been applied to industries from technology to healthcare, predicting successes like Netflix over Blockbuster. Christensen authored over a dozen books, including The Innovator’s Solution and How Will You Measure Your Life? (2012, co-authored with James Allworth and Karen Dillon), which blends business insights with personal reflections drawn from his Mormon faith, family life, and battle with leukemia.5,6,7

In How Will You Measure Your Life?, Christensen draws parallels between corporate pitfalls and personal missteps, warning against prioritising short-term gains over long-term fulfilment. The quoted passage appears in a chapter on career strategy, using emergent and deliberate strategies as metaphors for navigating life’s uncertainties.7

Context of the Quote: Emergent vs Deliberate Strategy

Christensen distinguishes two strategic approaches, rooted in his research on successful companies. A deliberate strategy stems from conscious planning, data analysis, and long-term goals – ideal for stable, mature organisations like Procter & Gamble, which refines products based on market data.1 It requires alignment across teams, where every member understands their role in the bigger picture. However, it risks blindness to peripheral opportunities, as rigid focus on the original plan can miss disruptions.1,2

Conversely, an emergent strategy arises organically from bottom-up initiatives, experiments, and adaptations – common in startups like early Walmart, which pivoted from small-town stores after unplanned successes. Christensen notes that over 90% of thriving new businesses succeed not through initial plans but by iterating on emergent learnings, retaining resources to pivot when needed.1,5,6

The quote applies this duality to personal development: start with emergent exploration – trying diverse roles, hobbies, and pursuits – to uncover what aligns talents, interests, and priorities. Once viable paths emerge, switch to deliberate focus for sustained progress. This mirrors Honda’s accidental US motorcycle success, where employees’ side experiments trumped the formal plan.6

Leading Theorists on Emergent and Deliberate Strategy

Christensen built on foundational work by Henry Mintzberg, a Canadian management scholar. In his 1987 paper ‘Crafting Strategy’ and book Strategy Safari, Mintzberg challenged top-down planning, arguing strategies often emerge from patterns in daily actions rather than deliberate designs. He identified strategy as a ‘continuous, diverse, and unruly process’, blending deliberate intent with emergent flexibility – ideas Christensen explicitly referenced.2

  • Henry Mintzberg: Pioneered the emergent strategy concept in the 1970s-80s, critiquing rigid corporate planning. His ’10 Schools of Strategy’ framework contrasts design (deliberate) with learning (emergent) schools.2
  • Michael Porter: Christensen’s contemporary at Harvard, Porter championed deliberate competitive strategy via frameworks like the Five Forces and value chain (1980s). While Porter focused on positioning for advantage, Christensen highlighted how such strategies falter against disruption.1
  • Robert Burgelman: Stanford professor whose research on ‘intraorganisational ecology’ influenced Christensen, showing how autonomous units drive emergent strategies within firms like Intel.5

These theorists collectively underscore strategy’s dual nature: deliberate for execution, emergent for innovation. Christensen uniquely extended this to personal life, making abstract theory accessible for leadership, coaching, and self-management.3,4

Christensen’s insights remain vital for leaders balancing adaptability with purpose, reminding us that true success – in business or life – demands knowing when to explore and when to commit.

References

1. https://online.hbs.edu/blog/post/emergent-vs-deliberate-strategy

2. https://onlydeadfish.co.uk/2014/08/28/emergent-and-deliberate-strategy/

3. https://blog.passle.net/post/102fytx/clayton-christensen-how-to-enjoy-business-and-life-more

4. https://www.azquotes.com/quote/1410310

5. https://www.goodreads.com/work/quotes/138639-the-innovator-s-solution-creating-and-sustaining-successful-growth

6. https://www.businessinsider.com/clay-christensen-theories-in-how-will-you-measure-your-life-2012-7

7. https://www.goodreads.com/author/quotes/1792.Clayton_M_Christensen?page=17

8. https://www.azquotes.com/author/2851-Clayton_Christensen/tag/strategy

9. https://www.mstone.dev/values-how-will-you-measure-your-life/

Quote: Jamie Dimon – JP Morgan Chase CEO

“I think the harder thing to measure has always been tech projects. That’s been true my whole life. It’s also been true my whole life, the tech is what changes everything, like everything.” – Jamie Dimon – JP Morgan Chase CEO

Jamie Dimon’s candid observation captures a fundamental tension at the heart of modern business strategy: the profound impact of technology juxtaposed against the persistent challenge of measuring its value. Delivered during JPMorgan Chase’s 2026 Investor Day on 24 February, this remark came amid revelations of the bank’s unprecedented $19.8 billion technology budget – a 10% increase from 2025, with significant allocations to artificial intelligence (AI) projects.1,2,4 As CEO of the world’s largest bank by market capitalisation, Dimon’s perspective is shaped by decades of navigating technological shifts, from the rise of digital banking to the current AI boom.
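The headline budget figures reconcile with simple arithmetic (a quick check; the rounding is an assumption):

```python
# Reconciling the reported 2026 tech budget figures.
budget_2026_bn = 19.8
growth_rate = 0.10                  # reported 10% increase from 2025

budget_2025_bn = budget_2026_bn / (1 + growth_rate)
increase_bn = budget_2026_bn - budget_2025_bn
print(f"Implied 2025 budget: ${budget_2025_bn:.1f}bn")  # ~$18.0bn
print(f"Implied increase:    ${increase_bn:.1f}bn")     # ~$1.8bn, i.e. the ~$2bn swell reported
```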

Jamie Dimon’s Career and Leadership at JPMorgan Chase

Born in 1956 in New York City into a family of Greek descent, Jamie Dimon began his career in finance at American Express in the 1980s, rising rapidly under the mentorship of Sandy Weill. He co-led the merger that created Citigroup in 1998 but was ousted later that year after falling out with Weill. Dimon then transformed Bank One from near-collapse into a powerhouse, earning a reputation as a crisis manager. In 2004, he joined JPMorgan Chase as president and chief operating officer following its acquisition of Bank One, becoming CEO at the end of 2005 – a role he has now held for two decades.3

Under Dimon’s stewardship, JPMorgan has become a technology leader in banking. The firm employs over 300,000 people, with tens of thousands in tech roles, and invests billions annually in innovation. Dimon has long championed tech as a competitive moat, famously urging investors to ‘trust him’ on spending despite vague ROI metrics. In 2026, this commitment manifests in a tech budget swelled by $2 billion, driven by AI for customer service, personalised insights, and developer tools, amid rising hardware costs from AI chip demand.1,5 Dimon predicts JPMorgan will be a ‘winner’ in the AI race, leveraging its data assets and No. 1 ranking in AI maturity among banks.1,3

Context of the Quote: JPMorgan’s 2026 Strategic Framework

The quote emerged in a Q&A at the 24 February 2026 event, responding to analyst pressure on tech ROI. CFO Jeremy Barnum highlighted technology as a major expense driver, up $9 billion overall, with $1.2 billion in investments including AI. Dimon acknowledged time savings from tech as ‘too vague’ to measure precisely, echoing lifelong observations from mainframes to cloud computing.1,2 This aligns with broader warnings: AI will revolutionise operations but displace jobs, necessitating societal preparation like retraining and phased adoption to avoid shocks, such as mass unemployment from autonomous trucks.4

JPMorgan is aggressively deploying AI – its large language model serves 150,000 users weekly – while planning ‘huge redeployment’ for affected staff. Executives like Marianne Lake stress paranoia in competition, borrowing Andy Grove’s maxim ‘Only the paranoid survive’. Rivals like Bank of America ($14 billion tech spend) underscore the sector-wide arms race.1

Leading Theorists on Technology Measurement and Impact

Dimon’s views resonate with seminal thinkers on technology’s intangible returns. Peter Drucker, the father of modern management, argued as early as The Practice of Management (1954) that knowledge work defies traditional metrics, prefiguring tech’s measurement woes. He popularised the idea of the ‘knowledge economy’, emphasising innovation’s long-term value over short-term quantification.

Erik Brynjolfsson and Andrew McAfee, MIT economists, explore this in The Second Machine Age (2014), detailing how digital technologies yield ‘non-rival’ benefits – exponential productivity without proportional costs – hard to capture in GDP or ROI. Their ‘bounty vs. spread’ framework warns of uneven gains, mirroring Dimon’s job displacement concerns.4

Clayton Christensen’s The Innovator’s Dilemma (1997) explains why incumbents struggle with disruptive tech: metrics favour sustaining innovations, blinding firms to transformative ones. JPMorgan’s shift from infrastructure modernisation to AI-ready data exemplifies overcoming this.5

In AI specifically, Nick Bostrom’s Superintelligence (2014) and Stuart Russell’s Human Compatible (2019) address measurement beyond finance – aligning superintelligent systems with human values amid unpredictable impacts. Dimon’s pragmatic focus on phased integration echoes calls for cautious deployment.4

These theorists underscore Dimon’s point: technology’s true worth lies in reshaping ‘everything’, demanding faith in leadership over precise yardsticks. JPMorgan’s strategy embodies this, positioning the bank at the vanguard of finance’s technological frontier.

References

1. https://www.businessinsider.com/jpmorgan-tech-budget-ai-20-billion-jamie-dimon-2026-2

2. https://www.aol.com/articles/jpmorgan-spend-almost-20-billion-000403027.html

3. https://www.benzinga.com/markets/large-cap/26/02/50808191/jamie-dimon-predicts-jpmorgan-will-be-a-winner-in-ai-race-boosts-2026-tech-spend-to-nearly-20-billion

4. https://fortune.com/2026/02/25/jamie-dimon-society-prepare-ai-job-displacement/

5. https://finviz.com/news/321869/how-to-play-jpm-stock-as-tech-spend-ramps-in-2026-amid-ai-uncertainty

6. https://fintechmagazine.com/news/inside-jpmorgans-2026-stock-market-hopes-and-new-london-hq

"I think the harder thing to measure has always been tech projects. That's been true my whole life. It's also been true my whole life, the tech is what changes everything, like everything." - Quote: Jamie Dimon - JP Morgan Chase CEO

read more
Term: World model

Term: World model

“A world model is defined as a learned neural representation that simulates the dynamics of an environment, enabling an AI agent to predict future states and reason about the consequences of its actions.” – World model

A world model is an internal representation of the environment that an AI system creates to simulate the external world within itself. This learned neural representation enables an AI agent to predict future states, simulate the consequences of different actions before executing them in the real world, and reason about causal relationships, much like the human brain does when planning activities.1,3,6

At its core, a world model comprises key components:

  • Transition model: Predicts how the environment’s state changes based on the agent’s actions, such as a robot displacing an object by moving its hand.1
  • Observation model: Determines what the agent observes in each state, incorporating data from sensors, cameras, and other inputs.1
  • Reward model: In reinforcement learning contexts, forecasts rewards or penalties from actions in specific states.1

Unlike traditional machine learning, which maps inputs directly to outputs, world models foster a general understanding of environmental dynamics, enhancing performance in novel situations.1,4
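These components can be made concrete with a toy example. The sketch below uses a hand-written one-dimensional environment in place of the learned neural networks a real world model would employ; all names and dynamics are illustrative assumptions, not any particular system’s API:

```python
# Toy world model for an agent on a 1-D line (positions 0..10, goal at 10).
# In a real system each of these functions would be a learned neural model.

def transition_model(state: int, action: int) -> int:
    """Predict the next state given the current state and an action (-1 or +1)."""
    return max(0, min(10, state + action))

def observation_model(state: int) -> int:
    """Map the latent state to what the agent's sensors would report."""
    return state  # fully observable in this toy example

def reward_model(state: int) -> float:
    """Predict the reward received in a given state."""
    return 1.0 if state == 10 else 0.0

def imagine_return(state: int, plan: list[int]) -> float:
    """Plan by internal simulation: roll the model forward without acting."""
    total = 0.0
    for action in plan:
        state = transition_model(state, action)
        total += reward_model(state)
    return total

# Compare two candidate plans purely in 'imagination', before acting
print(imagine_return(7, [+1, +1, +1]))  # 1.0 - this plan reaches the goal
print(imagine_return(7, [-1, -1, -1]))  # 0.0 - this plan moves away from it
```

This is the essence of the planning capability described below: the agent evaluates consequences internally before committing to an action in the real world.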

Key Capabilities and Advantages

World models empower AI with:

  • Causality understanding: Grasping why events occur, beyond mere statistical correlations seen in large language models (LLMs) like GPT.1,2
  • Planning and reasoning: Simulating scenarios internally to select optimal actions, akin to chain-of-thought reasoning.1,3
  • Efficient learning: Requiring fewer examples, similar to a child grasping gravity after minimal observations.1
  • Transfer learning and generalisation: Applying knowledge across domains, such as adapting object manipulation skills.1
  • Intuitive physics: Comprehending basic physical principles, essential for real-world interaction.1,4

Trained on diverse data like videos, photos, audio, and text, world models provide richer grounding in reality than LLMs, which focus on text patterns.2,4,6

Role in Achieving Artificial General Intelligence (AGI)

Prominent figures like Yann LeCun (Meta), Demis Hassabis (Google DeepMind), and Yoshua Bengio (Mila) view world models as crucial for AGI, enabling safe, scientific, and intelligent systems that plan ahead and simulate outcomes.3 Recent advancements, such as DeepMind’s Genie 3 (August 2025), generate diverse 3D environments from text prompts, simulating realistic physics for AI training.1 Runway’s GWM-1 further advances general-purpose simulation for robotics and discovery.5

Best Related Strategy Theorist: Yann LeCun

Yann LeCun, Chief AI Scientist at Meta and a pioneer of convolutional neural networks (CNNs), is the foremost theorist championing world models as foundational for intelligent AI. LeCun describes them as internal predictive models that simulate real-world dynamics, incorporating modules for perception, prediction, cost/reward evaluation, and planning. This allows AI to ‘imagine’ action consequences, vital for robotics, autonomous vehicles, and AGI.2,3

Born in 1960 in France, LeCun earned his PhD in computer science from Université Pierre et Marie Curie, Paris, in 1987. He pioneered CNNs in the 1980s-1990s for handwriting recognition, notably in cheque-reading systems at Bell Labs, helping to found the field of deep learning. Joining New York University as a professor in 2003, he became founding director of the NYU Center for Data Science. In 2013, he became the first director of Facebook AI Research (now Meta AI), which has driven open-source initiatives like PyTorch.

LeCun’s advocacy for world models stems from his critique of LLMs’ limitations in causal reasoning and physical simulation. He argues they enable ‘objective-driven AI’ with energy-based models for planning, positioning world models as the path beyond pattern-matching to human-like intelligence. A Turing Award winner (2018) with Bengio and Hinton, LeCun’s vision influences labs worldwide, emphasising world models for safe, efficient real-world AI.2,3

References

1. https://deepfa.ir/en/blog/world-model-ai-agi-future

2. https://www.youtube.com/watch?v=qulPOUiz-08

3. https://www.quantamagazine.org/world-models-an-old-idea-in-ai-mount-a-comeback-20250902/

4. https://www.turingpost.com/p/topic-35-what-are-world-models

5. https://runwayml.com/research/introducing-runway-gwm-1

6. https://techcrunch.com/2024/12/14/what-are-ai-world-models-and-why-do-they-matter/

"A world model is defined as a learned neural representation that simulates the dynamics of an environment, enabling an AI agent to predict future states and reason about the consequences of its actions." - Term: World model

read more
Term: AI Data Centre

Term: AI Data Centre

“An AI Data Center is a highly specialized, power-dense physical facility designed specifically to train, deploy, and run artificial intelligence (AI) models, machine learning (ML) algorithms, and generative AI applications.” – AI Data Centre

This specialised facility diverges significantly from traditional data centres, which handle mixed enterprise workloads, by prioritising accelerated compute, ultra-high-bandwidth networking, and advanced power and cooling systems to manage dense GPU clusters and continuous data pipelines for AI tasks like model training, fine-tuning, and inference.1,2,4

Central to its operation are high-performance computing resources such as Graphics Processing Units (GPUs) and Tensor Processing Units (TPUs). GPUs excel in parallel processing, enabling rapid handling of billions of data points essential for AI model training, while TPUs offer tailored efficiency for AI-specific tasks, reducing energy consumption.2,3,5

High-speed networking is critical, employing technologies like InfiniBand, 400 Gbps Ethernet, and optical interconnects to facilitate seamless data movement across thousands of servers, preventing bottlenecks in distributed AI workloads.2,4

Robust storage systems – including distributed file systems and object storage – ensure swift access to vast datasets, model weights, and real-time inference data, with scalability to accommodate ever-growing AI requirements.1,2,3

Addressing the immense power density, advanced cooling systems are vital, often accounting for 35-40% of energy use, incorporating liquid cooling and thermal zoning to maintain efficiency and low Power Usage Effectiveness (PUE) for sustainability.2,4
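Power Usage Effectiveness, mentioned above, is simply the ratio of total facility energy to the energy consumed by IT equipment alone. A minimal worked example (the load figures are hypothetical):

```python
# PUE = total facility energy / IT equipment energy (lower is better; 1.0 is ideal).
it_load_kw = 1000.0       # hypothetical GPU/server load
overhead_kw = 550.0       # hypothetical cooling, power distribution, lighting

pue = (it_load_kw + overhead_kw) / it_load_kw
print(f"PUE: {pue:.2f}")  # 1.55 - overheads here are ~35% of total energy, as cited above
```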

Additional features include data centre automation, network security, and energy-efficient designs, yielding benefits like enhanced performance, scalability, cost optimisation, and support for innovation in fields such as big data analytics, natural language processing, and computer vision.3,5

Key Theorist: Jensen Huang and the GPU Revolution

The foremost strategist linked to the evolution of AI data centres is Jensen Huang, co-founder, president, and CEO of NVIDIA Corporation. Huang’s vision has positioned NVIDIA’s GPUs as the cornerstone of modern AI infrastructure, directly shaping the architecture of these power-dense facilities.2

Born in 1963 in Taiwan, Huang immigrated to the United States as a child. He earned a bachelor’s degree in electrical engineering from Oregon State University and a master’s from Stanford University. In 1993, at age 30, he co-founded NVIDIA with Chris Malachowsky and Curtis Priem, initially targeting 3D graphics for gaming and PCs. Huang recognised the parallel processing power of GPUs, pivoting NVIDIA towards general-purpose computing on GPUs (CUDA platform, launched 2006), which unlocked their potential for scientific simulations, cryptography, and eventually AI.2

Huang’s prescient relationship to AI data centres stems from his early advocacy for GPU-accelerated computing in machine learning. By 2012, Alex Krizhevsky’s use of NVIDIA GPUs to win the ImageNet competition catalysed the deep learning boom, proving GPUs’ superiority over CPUs for neural networks. Under Huang’s leadership, NVIDIA developed AI-specific hardware like A100 and H100 GPUs, Blackwell architecture, and full-stack solutions including InfiniBand networking via Mellanox (acquired 2020). These innovations address AI data centre challenges: massive parallelism for training trillion-parameter models, high-bandwidth interconnects for multi-node scaling, and power-efficient designs for dense racks consuming up to 100kW each.2,4

Huang’s biography reflects relentless innovation; he famously wears a black leather jacket onstage, symbolising his contrarian style. NVIDIA’s market cap surged from roughly $10 billion in 2015 to over $3 trillion by 2024, propelled by AI demand. His strategic foresight – declaring in 2017 that “the era of AI has begun” – anticipated the hyperscale AI data centre boom, making NVIDIA indispensable to leaders like Microsoft, Google, and Meta. Huang’s influence extends to sustainability, pushing for efficient cooling and low-PUE designs amid AI’s energy demands.4

Today, virtually every major AI data centre relies on NVIDIA technology, underscoring Huang’s role as the architect of the AI infrastructure revolution.

References

1. https://www.aflhyperscale.com/articles/ai-data-center-infrastructure-essentials/

2. https://www.rcrwireless.com/20250407/fundamentals/ai-optimized-data-center

3. https://www.racksolutions.com/news/blog/what-is-an-ai-data-center/

4. https://www.f5.com/glossary/ai-data-center

5. https://www.lenovo.com/us/en/glossary/what-is-ai-data-center/

6. https://www.ibm.com/think/topics/ai-data-center

7. https://www.generativevalue.com/p/a-primer-on-ai-data-centers

8. https://www.sunbirddcim.com/glossary/data-center-components

"An AI Data Center is a highly specialized, power-dense physical facility designed specifically to train, deploy, and run artificial intelligence (AI) models, machine learning (ML) algorithms, and generative AI applications." - Term: AI Data Centre

read more
Quote: Clayton Christensen

Quote: Clayton Christensen

“Culture is a way of working together toward common goals that have been followed so frequently and so successfully that people don’t even think about trying to do things another way. If a culture has formed, people will autonomously do what they need to do to be successful.” – Clayton Christensen – Author

Clayton M. Christensen, the renowned Harvard Business School professor and author, offers a piercing definition of culture that underscores its invisible yet commanding influence on human behaviour. Drawn from his seminal 2012 book How Will You Measure Your Life?, this observation emerges from Christensen’s broader exploration of how personal and professional success hinges on aligning daily actions with enduring principles.1,2 The book, blending business acumen with life lessons, distils decades of research into practical wisdom for leaders, managers, and individuals navigating career and family demands.1,3

Christensen’s Life and Intellectual Journey

Born in 1952 in Salt Lake City, Utah, Christensen rose from humble roots to become one of the most influential thinkers in business strategy. A devout Mormon, he integrated faith with rigorous analysis, viewing truth in science and religion as harmonious.2,4 Educated at Brigham Young University, Oxford as a Rhodes Scholar, and Harvard Business School, he joined Harvard’s faculty in 1992. His breakthrough came with The Innovator’s Dilemma (1997), introducing disruptive innovation – the theory explaining how market-leading firms falter by ignoring low-end or new-market disruptions.5 This framework, applied across industries from steel to smartphones, earned him global acclaim and advisory roles with Intel, Kodak, and others.

Christensen’s later works, including How Will You Measure Your Life?, shift from corporate strategy to personal integrity. Co-authored with James Allworth and Karen Dillon, it warns against marginal compromises – ‘just this once’ temptations – that erode character over time.3 He argued management is ‘the most noble of professions’ when it fosters growth, motivation, and ethical behaviour.2,3 Stricken with leukemia in 2017 and passing in 2020, Christensen left a legacy of over 150,000 citations and millions of books sold, emphasising that true metrics of life lie in helping others become better people.2,4

The Context of the Quote in Christensen’s Philosophy

In How Will You Measure Your Life?, the quote illuminates how organisations – and lives – succeed through ingrained habits. Christensen posits that culture forms when proven paths to common goals become automatic, enabling autonomous action without constant oversight.1 This ties to his ‘resources, processes, priorities’ (RPP) framework: resources fuel action, processes habitualise it, and priorities direct it.2,4 A strong culture aligns these, creating ‘seamless webs of deserved trust’ that propel success, echoing his warnings against short-termism where leaders chase loud demands over lasting value.3

He contrasts virtuous cultures fostering positive-sum interactions and lucky breaks with toxic ones breeding zero-sum games and isolation.3 For leaders, cultivating culture means framing work to motivators – purpose, progress, relationships – so employees end days fulfilled, much like Christensen’s own ‘good day’ model.2

Leading Theorists on Organisational Culture

Christensen’s views build on foundational theorists who dissected culture’s role in management and leadership.

  • Edgar Schein (1928-2023): In Organizational Culture and Leadership (1985), Schein defined culture as ‘a pattern of shared basic assumptions’ learned through success, mirroring Christensen’s ‘frequently and successfully followed’ paths. Schein’s levels – artefacts, espoused values, basic assumptions – explain why entrenched cultures resist change, much like Christensen’s processes becoming ‘crushing liabilities’.5
  • Charles Handy (1932-2024): The Irish management guru’s Understanding Organizations (1976) classified cultures (power, role, task, person), influencing Christensen’s emphasis on autonomous success. Handy’s Gods of Management archetypes underscore culture’s ritualistic hold.
  • Stephen Covey (1932-2012): In The 7 Habits of Highly Effective People (1989), Covey urged ‘keeping the main thing the main thing’ via principle-centred leadership, aligning with Christensen’s priorities and family-career balance.3
  • Peter Drucker (1909-2005): The ‘father of modern management’ is credited with the maxim ‘culture eats strategy for breakfast’, which Christensen echoed by prioritising cultural processes over mere resources.5
  • Charles Munger (1924-2023): Berkshire Hathaway’s vice chairman complemented Christensen, praising ‘the right culture’ as a ‘seamless web of deserved trust’ enabling weak ties and serendipity.3

These thinkers collectively affirm culture as the bedrock of sustained performance, where unconscious alignment trumps enforced compliance. Christensen’s insight, rooted in their legacy, equips leaders to build environments where success feels inevitable.

References

1. https://www.goodreads.com/quotes/7256080-culture-is-a-way-of-working-together-toward-common-goals

2. https://www.toolshero.com/toolsheroes/clayton-christensen/

3. https://www.skmurphy.com/blog/2020/02/16/clayton-christensen-on-how-will-you-measure-your-life/

4. https://quotefancy.com/clayton-m-christensen-quotes/page/2

5. https://www.azquotes.com/author/2851-Clayton_Christensen

6. https://memories.lifeweb360.com/clayton-christensen/a0d52888-de6d-4246-bce9-26d9aaee0aac

Quote: Jeremy Barnum – Executive VP and CFO of JP Morgan Chase

“We’re growing. We’re onboarding new clients. In many cases, I’m looking at some of my colleagues on the corporate and investment bank, the growth in new clients comes with lending. That lending is relatively low returning then you eventually get other business. So yes, that’s an example of an investment today that as it matures, has higher returns.” – Jeremy Barnum – Executive VP & CFO of JP Morgan Chase

Jeremy Barnum, Executive Vice President and Chief Financial Officer of JPMorgan Chase, shared this perspective during an executive Q&A on the firm’s strategic framework at JPMorgan’s Investor Day on 24 February 2026. His remarks underscore a core tenet of modern banking: initial client acquisition often demands upfront investments in low-margin activities like lending, which pave the way for higher-return opportunities as relationships mature.4

Barnum’s career trajectory exemplifies the blend of analytical rigour and strategic foresight essential for leading one of the world’s largest financial institutions. Joining JPMorgan Chase in 2007 as a managing director in treasury and risk management, he ascended rapidly through roles in investor relations and corporate development. By 2021, he was appointed CFO, succeeding Jennifer Piepszak, who transitioned to co-CEO of the commercial and investment bank. Under Barnum’s stewardship, JPMorgan has navigated volatile markets, including the acquisition of Goldman Sachs’ Apple Card portfolio, which contributed to a $2.2 billion pre-tax credit reserve build in Q4 2025, even as net income reached $13 billion and revenue climbed 7% to $46.8 billion.1

In the broader context of this quote, Barnum was addressing investor concerns about growth dynamics in the corporate and investment banking (CIB) division. New client onboarding frequently begins with lending – a relatively low-return activity due to compressed margins and credit risks – but evolves into a fuller ecosystem of services, including advisory, trading, and capital markets activities that deliver superior profitability over time. This ‘investment today for returns tomorrow’ model aligns with JPMorgan’s 2026 expense projections of $105 billion, driven by ‘structural optimism’ and the imperative to invest in technology, AI, and competitive positioning against fintech challengers like Revolut and SoFi, as well as traditional rivals like Charles Schwab.1
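The economics of this model can be illustrated with a stylised calculation. The sketch below is purely hypothetical – the margins, growth pattern, and discount rate are illustrative assumptions, not JPMorgan figures:

```python
# Stylised client-lifecycle economics: low-return lending up front,
# higher-margin advisory/trading/capital-markets revenue as the
# relationship matures. All figures are hypothetical.

annual_returns = [0.5, 0.8, 1.5, 2.4, 3.0]  # $m per year over five years
discount_rate = 0.10
upfront_cost = 4.0                           # $m to onboard and extend credit

npv = sum(r / (1 + discount_rate) ** (t + 1)
          for t, r in enumerate(annual_returns)) - upfront_cost
print(f"NPV of the relationship: ${npv:.2f}m")  # ~$1.74m: positive despite weak early returns
```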

The discussion occurred against a backdrop of heightened competitive and regulatory pressures. Just weeks earlier, in January 2026, Barnum warned of the perils of President Donald Trump’s proposed 10% cap on credit card interest rates, arguing it would curtail credit access for higher-risk borrowers – ‘the people who need it the most’ – and force lenders to scale back operations in a fiercely competitive landscape.2,3 Consumer and community banking revenue rose 6% year-over-year to $19.4 billion, bolstered by 7% growth in card services, yet such policies threaten this momentum. JPMorgan’s tech budget is set to surge by $2 billion to $19.8 billion in 2026, emphasising investments to maintain primacy.5

Leading theorists on relationship banking and client lifecycle management provide intellectual foundations for Barnum’s approach. Jay R. Ritter, a pioneer in IPO and capital-raising research at the University of Florida, has long documented how initial public offerings often underperform short-term but enable firms to access deeper capital markets over time – a parallel to banking’s lending-to-ecosystem progression. Similarly, Arnoud W.A. Boot, a professor at the University of Amsterdam, theorises in works like ‘Relationship Banking: What Do We Know?’ (2000) that banks derive sustained value from borrower-specific information built through ongoing relationships, transforming low-margin entry points into high-return sticky business.

Robert M. Townsend, the MIT economist whose costly state verification models underpin modern contract theory, extends this, showing how banks mitigate asymmetric information via repeated interactions, justifying upfront lending as a commitment device for future profitability. More contemporarily, Viral V. Acharya of NYU Stern emphasises in IMF and BIS papers the ‘credit ecosystem’ where initial low-yield loans signal credibility, unlocking cross-selling in a post-2008 regulatory environment marked by Basel III capital constraints. These frameworks validate JPMorgan’s strategy: lending as the ‘hook’ in a maturing client portfolio amid rising competition and policy risks.

Barnum’s comments, delivered mere hours before this analysis (on 25 February 2026), reflect real-time strategic clarity. As JPMorgan projects resilience in consumer and small business segments, this philosophy positions the firm to convert today’s investments into enduring leadership.1,4

References

1. https://fortune.com/2026/01/14/jpmorgan-ceo-cfo-staying-competitive-requires-investment/

2. https://www.businessinsider.com/jpmorgan-warning-on-credit-card-cap-interest-2026-1

3. https://neworleanscitybusiness.com/blog/2026/01/13/jpmorgan-credit-card-rate-cap-warning/

4. https://www.marketscreener.com/news/jpmorgan-cfo-jeremy-barnum-speaks-at-investor-update-ce7e5dd3db8ff425

5. https://www.aol.com/news/jpmorgan-spend-almost-20-billion-000403027.html

"We're growing. We're onboarding new clients. In many cases, I'm looking at some of my colleagues on the corporate and investment bank, the growth in new clients comes with lending. That lending is relatively low returning then you eventually get other business. So yes, that's an example of an investment today that as it matures, has higher returns." - Quote: Jeremy Barnum - Executive VP & CFO of JP Morgan Chase

read more
Term: Edge devices

Term: Edge devices

“Edge devices are physical computing devices located at the ‘edge’ of a network, close to where data is generated or consumed, that run AI algorithms and models locally rather than relying exclusively on a centralised cloud or data center.” – Edge devices

Edge devices integrate edge computing with artificial intelligence, enabling real-time data processing on interconnected hardware such as sensors, Internet of Things (IoT) devices, smartphones, cameras, and industrial equipment. This local execution reduces latency to milliseconds, enhances privacy by retaining data on-device, and alleviates network bandwidth strain from constant cloud transmission.1,4,5

Unlike traditional cloud-based AI, where data travels to remote servers for computation, edge devices perform tasks like predictive analytics, anomaly detection, speech recognition, and machine vision directly at the source. This supports applications in autonomous vehicles, smart factories, healthcare monitoring, retail systems, and wearable technology.2,3,6

Key Characteristics and Benefits

  • Low Latency: Processes data in real time without cloud round-trips, critical for time-sensitive scenarios like defect detection in manufacturing.3,4
  • Bandwidth Efficiency: Reduces data transfer volumes by analysing locally and sending only aggregated insights to the cloud.1,5
  • Enhanced Privacy and Security: Keeps sensitive data on-device, mitigating breach risks during transmission.5,6
  • Offline Capability: Operates without constant internet connectivity, ideal for remote or unreliable networks.6,8
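A minimal sketch makes this local-first pattern concrete. Everything here is illustrative – the thresholds, readings, and function names are assumptions, not a specific vendor’s API:

```python
# Edge pattern: infer locally in real time, keep raw data on-device,
# and send only an aggregated summary upstream. Illustrative only.
import statistics

def anomaly_score(readings: list[float]) -> float:
    """Toy on-device 'model': z-score of the latest reading vs the baseline."""
    baseline = readings[:-1]
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return abs(readings[-1] - mean) / stdev if stdev else 0.0

readings = [20.1, 20.3, 19.9, 20.2, 27.8]  # hypothetical sensor samples
score = anomaly_score(readings)

if score > 3.0:   # act immediately, no cloud round-trip required
    print("Anomaly detected - triggering local response")

# Only a compact summary ever leaves the device, saving bandwidth and preserving privacy
uplink = {"mean": round(statistics.mean(readings), 2), "score": round(score, 2)}
print("Uplink payload:", uplink)
```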

Best Related Strategy Theorist: Dr. Andrew Chi-Chih Yao

Dr. Andrew Chi-Chih Yao, a pioneering computer scientist, stands as the most relevant strategy theorist linked to edge devices through his foundational contributions to distributed computing and efficient algorithms, which underpin modern edge AI architectures. Born in Shanghai, China, in 1946, Yao earned a PhD in physics from Harvard University in 1972, followed by a PhD in computer science from the University of Illinois in 1975. He held faculty positions at MIT, Princeton, and Stanford before joining Tsinghua University in 2004 as Director of the Institute for Interdisciplinary Information Sciences (IIIS).

Yao’s relationship to edge devices stems from his seminal work in the theory of computation. His minimax principle (1977) for analysing randomised algorithms optimises resource allocation in decentralised systems – directly analogous to edge computing’s local processing paradigm. Most notably, his theory of communication complexity (1979) formalises the minimum data exchange required between computing nodes, mirroring edge devices’ strategy of local inference to cut cloud dependency – a core tenet echoed in edge AI literature.1,7

A Turing Award winner (2000) for contributions to computation theory, Yao’s strategic vision emphasises scalable, efficient computing at the periphery, shaping industries from IoT to AI. His mentorship of generations of researchers through Tsinghua’s famed ‘Yao Class’ further extends his influence on practical deployments of edge technologies.

References

1. https://www.ibm.com/think/topics/edge-ai

2. https://www.micron.com/about/micron-glossary/edge-ai

3. https://zededa.com/glossary/edge-ai-computing/

4. https://www.flexential.com/resources/blog/beginners-guide-ai-edge-computing

5. https://www.splunk.com/en_us/blog/learn/edge-ai.html

6. https://www.f5.com/glossary/what-is-edge-ai

7. https://www.cisco.com/site/us/en/learn/topics/artificial-intelligence/what-is-edge-ai.html

8. https://blogs.nvidia.com/blog/what-is-edge-ai/

"Edge devices are physical computing devices located at the 'edge. of a network, close to where data is generated or consumed, that run AI algorithms and models locally rather than relying exclusively on a centralised cloud or data center." - Term: Edge devices

read more
Quote: David Viscott – Psychiatrist

Quote: David Viscott – Psychiatrist

“The purpose of life is to discover your gift. The work of life is to develop it. The meaning of life is to give your gift away.” – David Viscott – Psychiatrist

David Steven Viscott (1938-1996) was an American psychiatrist whose career fundamentally reshaped how mental health advice reached the general public. Born in Boston and educated at Dartmouth College and Tufts Medical School, Viscott emerged as one of the most influential figures in the history of therapeutic broadcasting, pioneering a distinctive approach to psychological counselling that prioritised speed, clarity and direct confrontation with uncomfortable truths.

The Revolutionary Radio Therapist

In 1980, Viscott made a pivotal decision that would define his legacy: he became one of the first psychiatrists with a medical degree to launch a full-time call-in radio show. Broadcasting from KABC-AM in Los Angeles, he transformed late-night radio into a therapeutic space where thousands of listeners could eavesdrop on – and learn from – the real struggles of callers seeking guidance. From 1980 until April 1993, Viscott became what his business partner Matt Small described as “everyone’s drive-time friend for years,” diagnosing callers’ emotional difficulties within minutes of hearing their problems and dispensing what became known as “tough love” therapy.

What distinguished Viscott from his contemporaries was his methodical approach. He called his technique the “Viscott Method,” a framework built on three foundational pillars: speed, simplicity and relentless pursuit of truth. Viscott held an unshakeable conviction that without confronting reality head-on, no individual could adequately address their underlying difficulties. This philosophy wasn’t merely rhetorical – it was operationalised through his therapeutic centres. In 1984, he established the Viscott Institute, which expanded into a chain of three Viscott Centers for Natural Therapy across Southern California, where trained therapists applied his methods in short-term interventions. The model was radical for its time: four sessions maximum, and clients departed with cassette recordings of their therapy and workbooks designed to facilitate self-discovery.

The Philosophy of Purpose and Gift

The quote attributed to Viscott – “The purpose of life is to discover your gift. The work of life is to develop it. The meaning of life is to give your gift away” – encapsulates the philosophical core of his therapeutic vision. This formulation appeared in his 1993 work Finding Your Strength in Difficult Times, a text that synthesised decades of clinical observation and radio counselling into actionable wisdom for readers navigating personal crises.

Viscott’s tripartite framework reflects a humanistic psychology tradition that emphasises self-actualisation and purposeful living. The concept of discovering one’s “gift” – one’s unique capacities and reason for existing – became central to his therapeutic brand. He believed that psychological distress often stemmed from individuals failing to recognise or develop their inherent talents, and that genuine healing required not merely symptom relief but existential clarity. The progression from discovery to development to generosity represents a maturation of consciousness: from self-awareness through disciplined growth to transcendent contribution.

This philosophy resonated powerfully with 1980s and 1990s audiences seeking meaning beyond material accumulation. Viscott positioned psychological work as inseparable from spiritual purpose, offering listeners a secular yet profound answer to questions of meaning that had traditionally belonged to religious or philosophical domains.

Intellectual Lineage and Theoretical Context

Viscott’s thinking emerged from and contributed to several significant currents in twentieth-century psychology and psychiatry. His emphasis on rapid diagnosis and direct intervention reflected the influence of brief therapy models that gained prominence in the 1960s and 1970s, particularly the work of Albert Ellis and his Rational Emotive Behaviour Therapy (REBT), which similarly prioritised identifying core beliefs and challenging them directly.

The humanistic psychology movement, championed by figures such as Carl Rogers and Abraham Maslow, profoundly shaped Viscott’s conception of the therapeutic relationship and human potential. Maslow’s hierarchy of needs and his concept of self-actualisation – the realisation of one’s full potential – provided theoretical scaffolding for Viscott’s insistence that discovering and developing one’s gift represented not a luxury but a psychological necessity. Where Maslow theorised that self-actualisation was the pinnacle of human motivation, Viscott operationalised this insight through accessible therapeutic techniques and media platforms.

Viscott also drew from existential psychology, particularly the work of Viktor Frankl, whose Man’s Search for Meaning (1946) argued that the primary human motivation was the search for meaning rather than pleasure or power. Frankl’s assertion that individuals could find purpose even in suffering aligned closely with Viscott’s therapeutic stance. The notion that meaning emerges through contribution – through “giving your gift away” – echoes Frankl’s emphasis on transcendence through service and creative expression.

Additionally, Viscott’s work reflected the broader cultural moment of the 1970s and 1980s, when self-help literature and therapeutic culture began permeating mainstream consciousness. Psychologist Joyce Brothers had pioneered radio psychology in the 1950s, discussing previously taboo topics such as sexual dysfunction. However, it was psychologist Toni Grant who, in the 1970s, revolutionised the format by taking live calls on air in Los Angeles – a model Viscott adopted and refined. Viscott’s innovation was to combine psychiatric training with McDonald’s-like efficiency, creating a scalable therapeutic model that democratised access to professional psychological guidance.

The Author and His Works

Viscott’s prolific authorship complemented his broadcasting career. His autobiography, The Making of a Psychiatrist (1973), became a bestseller, earned selection as a Book of the Month Club Main Selection, and received nomination for the Pulitzer Prize. The work offered readers an intimate account of psychiatric training whilst questioning professional orthodoxies – a dual achievement that established Viscott as both insider and critic of his discipline.

His subsequent publications – including The Language of Feelings (1975), Risking (1976), I Love You, Let’s Work It Out, The Viscott Method, and Emotional Resilience – consistently emphasised self-examination, emotional literacy and purposeful living. These works translated his radio methodology into literary form, allowing readers to apply his techniques independently. Finding Your Strength in Difficult Times (1993), which contains the gift-centred philosophy quoted above, represented a culmination of his thinking, offering guidance for individuals confronting life’s most challenging moments.

Legacy and Paradox

Viscott’s career embodied a profound paradox. The psychiatrist who authored Emotional Resilience and built a therapeutic empire around rapid problem-solving proved unable to resolve his own deepest difficulties. He died in October 1996, alone and financially depleted, apparently from heart disease. Friends and colleagues noted that despite his public confidence and therapeutic acumen, Viscott struggled with significant personal insecurities rooted in childhood experiences – his father’s emotional distance, anxieties about his physical appearance and stature, and an ego that, whilst driving his professional ambitions, simultaneously alienated those closest to him.

Yet this contradiction does not diminish his contribution. Viscott’s greatest achievement was recognising that psychological healing and personal meaning were not luxuries reserved for the wealthy or the analytically inclined, but fundamental human needs that could be addressed through accessible, direct intervention. His radio shows reached hundreds of thousands of listeners who might never have entered a therapist’s office. His books provided frameworks for self-understanding that transcended clinical jargon. His philosophy – that life’s purpose centres on discovering, developing and sharing one’s unique gifts – offered a secular yet spiritually resonant answer to existential questions that continue to preoccupy contemporary audiences.

The quote itself endures because it captures something essential: the conviction that human flourishing requires not merely the absence of suffering but the active pursuit of purpose, the disciplined cultivation of talent, and the generous contribution of one’s capacities to the world. In an era of increasing psychological fragmentation and meaning-seeking, Viscott’s tripartite formula remains a compelling articulation of what a purposeful life might entail.

References

1. https://en.wikipedia.org/wiki/David_Viscott

2. https://www.dorchesteratheneum.org/project/david-viscott-1938-1996/

3. https://www.latimes.com/archives/la-xpm-1996-10-15-me-54130-story.html

4. https://www.latimes.com/archives/la-xpm-1997-01-26-tm-22135-story.html

5. https://www.goodreads.com/book/show/1215412.The_Making_of_a_Psychiatrist

6. https://books.google.com/books/about/The_Making_of_a_Psychiatrist.html?id=93uZzobqDhwC

7. https://www.thriftbooks.com/w/the-making-of-a-psychiatrist_david-viscott/588808/

"The purpose of life is to discover your gift. The work of life is to develop it. The meaning of life is to give your gift away" - Quote: David Viscott

read more
Quote: Troy Rohrbaugh – Co-CEO of JP Morgan Chase Commercial and Investment Bank

Quote: Troy Rohrbaugh – Co-CEO of JP Morgan Chase Commercial and Investment Bank

“We’re doing a lot of lending. We’re not doing it to develop assets, like that’s not what we do. We’re doing it to be in the ecosystem to create a halo effect with our clients and create velocity in our portfolios.” – Troy Rohrbaugh – Co-CEO of JP Morgan Chase Commercial & Investment Bank

Troy Rohrbaugh’s statement encapsulates a fundamental shift in how leading investment banks approach credit deployment in the modern financial ecosystem. Rather than pursuing direct lending as a standalone profit centre – a strategy that has increasingly exposed competitors to concentration risk and late-cycle credit deterioration – JPMorgan’s Co-CEO of the Commercial & Investment Bank articulates a relationship-centric model that treats lending as a strategic tool for deepening client engagement and accelerating capital velocity across the firm’s broader platform.

The Context: A Decade of Market Evolution

Rohrbaugh’s remarks arrive at a critical inflection point in capital markets. The past decade has witnessed the proliferation of specialised direct lending vehicles, private credit funds, and non-bank lenders that have fundamentally altered the competitive landscape for traditional investment banks. What began as a niche alternative to syndicated lending has evolved into a multi-trillion-dollar asset class, with some estimates suggesting global private credit markets now exceed $2 trillion in assets under management.

This expansion has created both opportunity and peril. Whilst direct lending has provided crucial capital to mid-market companies and sponsors during periods of traditional bank retrenchment, it has also incentivised a race-to-the-bottom mentality amongst certain participants. Asset aggregators – firms whose primary objective is to accumulate loans for fee generation rather than client service – have increasingly dominated deal flow, often accepting looser covenants, higher leverage multiples, and weaker documentation standards in pursuit of volume.

JPMorgan’s strategic positioning directly challenges this paradigm. By explicitly rejecting the asset-accumulation model, Rohrbaugh signals that the bank views direct lending not as a destination but as a waypoint within a comprehensive client relationship architecture.

The Strategic Rationale: Ecosystem Integration

The concept of the “halo effect” that Rohrbaugh references deserves particular attention. In organisational behaviour and marketing theory, the halo effect describes the cognitive bias whereby positive impressions in one domain influence perceptions across other domains. Applied to investment banking, this principle suggests that a bank’s willingness to provide flexible, relationship-oriented credit solutions – even at modest spreads – generates disproportionate downstream value through increased advisory mandates, capital markets activity, and treasury services.

This approach reflects a maturation in how sophisticated financial institutions conceptualise competitive advantage. Rather than optimising for individual transaction profitability, JPMorgan is optimising for relationship depth and cross-selling velocity. A client receiving direct lending support during a period when traditional bank credit is constrained develops institutional loyalty that translates into preferred status for subsequent M&A advisory, equity capital markets mandates, and treasury services.

The “velocity in our portfolios” component of Rohrbaugh’s statement refers to the acceleration of capital deployment and redeployment across JPMorgan’s various business lines. By maintaining direct lending capacity, the bank ensures it can respond rapidly to client needs, thereby increasing the frequency and volume of client interactions and transactions.

Theoretical Foundations: Relationship Banking and Stakeholder Capitalism

Rohrbaugh’s philosophy aligns with contemporary academic and practitioner discourse on relationship banking – a model that emphasises long-term client partnerships over transactional efficiency. This approach has deep historical roots in European banking traditions, particularly in Germany and Switzerland, where universal banks have long maintained comprehensive client relationships spanning lending, advisory, and capital markets services.

The intellectual architecture supporting this strategy draws from several theoretical traditions. First, the resource-based view of competitive advantage, articulated by strategist Jay Barney and others, suggests that sustainable competitive advantage derives not from individual transactions but from difficult-to-replicate relationship assets and institutional knowledge. JPMorgan’s direct lending capability, when deployed through a relationship lens, becomes precisely such an asset – difficult for pure-play asset managers to replicate because it requires deep industry expertise, credit judgment, and client intimacy.

Second, stakeholder capitalism theory – increasingly influential amongst institutional investors and regulators – posits that long-term firm value creation requires balancing the interests of multiple stakeholders: clients, employees, shareholders, and communities. By positioning direct lending as a client service rather than a profit centre, JPMorgan implicitly adopts a stakeholder framework that prioritises client outcomes alongside shareholder returns. This positioning has become strategically valuable as institutional investors increasingly scrutinise governance and stakeholder alignment.

Third, the concept of “solution-agnostic” banking – which JPMorgan executives have explicitly articulated – reflects principles from systems thinking and complexity theory. Rather than constraining clients to a predetermined menu of products, solution-agnostic banking treats each client situation as unique and selects from the full array of available tools. This requires organisational flexibility, deep expertise across multiple domains, and a culture that rewards relationship managers for identifying optimal solutions rather than maximising individual product sales.

The Competitive Landscape: Distinguishing JPMorgan’s Approach

JPMorgan’s direct lending strategy, as articulated by Rohrbaugh, stands in sharp contrast to the approaches adopted by several competitors. Whilst some investment banks have pursued direct lending primarily as a capital deployment vehicle – seeking to generate attractive risk-adjusted returns through proprietary credit selection – JPMorgan has deliberately constrained its direct lending exposure to approximately $14 billion on its own balance sheet, with an announced capacity of up to $50 billion.

This measured approach reflects several strategic calculations. First, it acknowledges the late-cycle credit environment that prevailed in early 2026. Rohrbaugh himself noted that base market volatility remained significantly elevated compared to pre-COVID levels, creating conditions where credit risk was being systematically underpriced. By limiting direct lending exposure, JPMorgan reduced its vulnerability to the credit deterioration that subsequently materialised in certain segments of the private credit market.

Second, the emphasis on underwriting standards – Rohrbaugh noted that JPMorgan’s direct lending assets are underwritten using the same rigorous standards applied to its core commercial and industrial (C&I) lending book – reflects a commitment to through-the-cycle credit quality. This contrasts sharply with certain competitors who adopted more lenient underwriting standards to compete for market share in a competitive direct lending environment.

Third, the integration of direct lending within a broader relationship banking framework allows JPMorgan to maintain pricing discipline. Rather than competing on spread in a commoditised direct lending market, the bank can justify premium pricing by offering comprehensive solutions and relationship depth that pure-play lenders cannot replicate.

Intellectual Influences: Modern Banking Theory

The theoretical foundations underlying Rohrbaugh’s approach reflect the influence of several contemporary banking theorists and practitioners. Anat Admati and Martin Hellwig, in their influential work on bank regulation and systemic risk, have emphasised the importance of relationship banking in maintaining financial stability. Their research suggests that banks focused on long-term client relationships develop superior credit judgment and are less prone to the herding behaviour that characterises transaction-focused institutions.

Similarly, the work of Viral Acharya and others on the shadow banking system has highlighted the risks associated with non-bank lenders that lack the regulatory oversight and capital requirements imposed on traditional banks. By positioning JPMorgan’s direct lending within a regulated, capital-constrained framework, Rohrbaugh implicitly acknowledges these systemic considerations.

The concept of “ecosystem” that Rohrbaugh invokes also reflects contemporary thinking in platform economics and network effects. Scholars such as Geoffrey Parker, Marshall Van Alstyne, and Sangeet Paul Choudary have documented how platform businesses create value through network effects – the phenomenon whereby the value of a platform increases as more participants join. Applied to investment banking, JPMorgan’s ecosystem strategy suggests that the bank’s value proposition strengthens as it deepens its integration with clients across multiple service dimensions.

Practical Implementation: The 2026 Strategic Framework

Rohrbaugh’s philosophy translated into concrete strategic initiatives during 2026. JPMorgan announced a $1.5 trillion Sustainable and Responsible Investment (SRI) initiative, representing a 50 per cent increase from its historical $1 trillion deployment across technology, healthcare, and diversified industries. This initiative exemplifies the ecosystem approach: rather than treating sustainable finance as a separate product line, JPMorgan integrated it across its lending, advisory, and capital markets capabilities.

The bank’s expansion of its direct lending capacity to $50 billion, coupled with approximately $25 billion in partner capital, reflected a deliberate strategy to position itself as a comprehensive credit solutions provider without pursuing asset accumulation for its own sake. This positioning proved prescient, as the private credit market experienced significant stress in subsequent months, with certain non-bank lenders facing liquidity challenges and valuation pressures.

JPMorgan’s guidance for 2026 reflected confidence in this strategy. The bank projected mid-teens growth in investment banking fees and markets revenue, with potential for high-teens growth if market conditions remained constructive. Critically, this guidance was premised not on direct lending profitability but on the halo effects generated by comprehensive client service.

The Broader Implications: A Paradigm Shift in Investment Banking

Rohrbaugh’s articulation of JPMorgan’s direct lending philosophy signals a potential paradigm shift in how leading investment banks conceptualise their competitive positioning. Rather than pursuing specialisation and product-line optimisation – the dominant strategy of the 1990s and 2000s – the most sophisticated institutions are returning to relationship banking principles whilst leveraging technology and data analytics to enhance execution.

This shift reflects several underlying forces. First, the commoditisation of traditional investment banking services – driven by technology, regulatory standardisation, and increased competition – has compressed margins on individual transactions. This creates incentives for banks to increase transaction frequency and breadth rather than optimising individual transaction profitability.

Second, the rise of alternative asset managers and non-bank lenders has fragmented the financial ecosystem, creating opportunities for traditional banks to position themselves as integrators and orchestrators of diverse capital sources. JPMorgan’s direct lending strategy, viewed through this lens, represents an attempt to maintain relevance in an increasingly fragmented financial landscape.

Third, the increasing sophistication of institutional clients-particularly large sponsors and multinational corporations-has created demand for integrated solutions that transcend traditional product boundaries. Clients increasingly expect their primary financial advisors to provide seamless access to debt capital, equity capital, advisory services, and treasury solutions. Banks that can deliver this integration command premium valuations and client loyalty.

Risk Considerations and Market Validation

Rohrbaugh’s confidence in JPMorgan’s approach was validated by subsequent market developments. During the period immediately following his February 2026 remarks, the private credit market experienced significant stress, with certain non-bank lenders facing liquidity challenges and forced asset sales. JPMorgan’s measured approach to direct lending – constrained exposure, rigorous underwriting, and relationship focus – positioned the bank to capitalise on opportunities whilst avoiding the losses that befell more aggressive competitors.

The bank’s emphasis on underwriting standards proved particularly valuable. As credit conditions deteriorated, the superior credit quality of JPMorgan’s direct lending portfolio provided a competitive advantage, enabling the bank to maintain client relationships and expand market share amongst sponsors seeking reliable capital sources.

Rohrbaugh’s statement that he was “shocked that people are shocked” by private credit market stress reflected a sophisticated understanding of late-cycle dynamics. Rather than viewing credit deterioration as a surprise, JPMorgan’s leadership had anticipated elevated credit risk and positioned the firm accordingly.

Conclusion: A Sustainable Model for Modern Investment Banking

Troy Rohrbaugh’s articulation of JPMorgan’s direct lending philosophy – emphasising ecosystem integration, halo effects, and portfolio velocity over asset accumulation – represents a coherent strategic framework for navigating the complexities of modern investment banking. By explicitly rejecting the asset-aggregation model that characterises certain competitors, JPMorgan positions itself as a relationship-centric institution capable of delivering comprehensive solutions to sophisticated clients.

This approach reflects deep theoretical foundations in relationship banking, stakeholder capitalism, and platform economics, whilst remaining grounded in practical considerations of credit risk management and competitive positioning. As the financial services industry continues to evolve, Rohrbaugh’s philosophy offers a template for how traditional investment banks can maintain relevance and profitability in an increasingly fragmented and competitive landscape.

References

1. https://fintool.com/news/jpmorgan-ubs-conference-2026-capital-markets-outlook

2. https://www.investing.com/news/stock-market-news/jpmorgans-rohrbaugh-optimistic-on-2026-investment-banking-outlook-93CH-4497226

3. https://fintool.com/news/jpmorgan-private-credit-warning-q1-guidance

4. https://www.trustfinance.com/blog/jpmorgan-positive-2026-investment-banking-outlook

5. https://www.stocktitan.net/sec-filings/JPM/8-k-jpmorgan-chase-co-reports-material-event-3dab6edaae1a.html

6. https://www.morningstar.com/news/marketwatch/2026022425/im-shocked-that-people-are-shocked-says-jpmorgan-executive-about-private-credit-meltdown

"We're doing a lot of lending. We're not doing it to develop assets, like that's not what we do. We're doing it to be in the ecosystem to create a halo effect with our clients and create velocity in our portfolios." - Quote: Troy Rohrbaugh - Co-CEO of JP Morgan Chase Commercial & Investment Bank

read more
Term: Markov model

Term: Markov model

“A Markov model is a statistical tool for stochastic (random) processes where the future state depends only on the current state, not the entire past history – this is the Markov Property or “memoryless” property, making them useful for modeling systems like weather, finance, etc.” – Markov model

A Markov model is a statistical tool for stochastic (random) processes where the future state depends only on the current state, not the entire past history. This defining characteristic is known as the Markov property or “memoryless” property, rendering it highly effective for modelling systems such as weather patterns, financial markets, speech recognition, and chronic diseases in healthcare.1,2,4,5

Core Principles and Components

The simplest form is the Markov chain, which represents systems with fully observable states. It models transitions between states using a transition matrix, where rows denote current states and columns indicate next states, with each row’s probabilities summing to one. Graphically, states are circles connected by arrows labelled with transition probabilities.1,2,4

Formally, for a discrete-time Markov chain, the probability of transitioning from state i to state j is the entry $P_{ij} = \Pr(X_{t+1} = j \mid X_t = i)$ of the transition matrix $P$. The distribution over states then evolves as $\Pr(X_t = j) = \sum_i \Pr(X_{t-1} = i)\, P_{ij}$.4
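As a concrete illustration, here is a minimal Python sketch (using numpy, with a made-up two-state weather chain – the states and probabilities are purely illustrative) of how the transition matrix propagates the state distribution:

import numpy as np

# Toy two-state weather chain: state 0 = "sunny", state 1 = "rainy".
# Row i holds the distribution over next states given current state i,
# so each row must sum to one.
P = np.array([
    [0.9, 0.1],  # sunny -> sunny 90%, sunny -> rainy 10%
    [0.5, 0.5],  # rainy -> sunny 50%, rainy -> rainy 50%
])
assert np.allclose(P.sum(axis=1), 1.0)

# Start certain it is sunny, then propagate:
# Pr(X_t = j) = sum_i Pr(X_{t-1} = i) * P[i, j], i.e. pi_t = pi_{t-1} @ P.
pi = np.array([1.0, 0.0])
for t in range(1, 6):
    pi = pi @ P
    print(f"t={t}: Pr(sunny)={pi[0]:.3f}, Pr(rainy)={pi[1]:.3f}")

Iterating this update converges towards the chain’s stationary distribution (here 5/6 sunny, 1/6 rainy), which is one reason transition matrices make long-run behaviour tractable.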

Advanced variants include Markov decision processes (MDPs) for decision-making in stochastic environments, incorporating actions and rewards, and partially observable MDPs (POMDPs) where states are not fully visible. These extend to fields like AI, economics, and robotics.1,7

Applications Across Domains

  • Finance: Predicting market crashes or stock price movements via transition probabilities from historical data.1,5
  • Healthcare: Modelling disease progression for economic evaluations of interventions.6
  • Machine Learning: Markov chain Monte Carlo (MCMC) for Bayesian inference and sampling complex distributions.3,4
  • Other: Weather forecasting, search algorithms, fault-tolerant systems, and speech processing.1,4,8

Key Theorist: Andrey Andreyevich Markov

The preeminent theorist behind the Markov model is Russian mathematician Andrey Andreyevich Markov (1856-1922), who formalised these concepts in probability theory. Born in Ryazan, Russia, Markov studied at St. Petersburg University under Pafnuty Chebyshev, a pioneer in probability. He earned his doctorate in 1884 and became a professor there, though academic rivalries with colleagues like Dmitri Mendeleev led to his resignation in 1905.5

Markov’s seminal work began in 1906 with papers on chains of dependent random variables, showing that the law of large numbers does not require independence – a pointed rebuttal of Pavel Nekrasov’s claims to the contrary, and an extension of his teacher Chebyshev’s results beyond independent variables. In 1913 he famously applied the idea to the sequence of vowels and consonants in Pushkin’s verse novel Eugene Onegin, modelling letter sequences as what are now called Markov chains. He generalised this to stochastic processes satisfying the memoryless property, publishing key papers from 1906 to 1913. His contributions underpin applications in statistics, physics, and computing, earning the adjective “Markovian”. Markov’s rigorous mathematical framework proved invaluable for modelling real-world random systems, influencing fields from Monte Carlo simulations to AI.2,4,5

Despite personal hardships, including World War I and the Russian Revolution, Markov’s legacy endures through the foundational Markov chains that enable tractable predictions in otherwise intractable systems.2,4

References

1. https://www.techtarget.com/whatis/definition/Markov-model

2. https://en.wikipedia.org/wiki/Markov_model

3. https://www.publichealth.columbia.edu/research/population-health-methods/markov-chain-monte-carlo

4. https://en.wikipedia.org/wiki/Markov_chain

5. https://blog.quantinsti.com/markov-model/

6. https://pubmed.ncbi.nlm.nih.gov/10178664/

7. https://labelstud.io/blog/markov-models-chains-to-choices/

8. https://ntrs.nasa.gov/api/citations/20020050518/downloads/20020050518.pdf

9. https://taylorandfrancis.com/knowledge/Engineering_and_technology/Industrial_engineering_&_manufacturing/Markov_models/

10. https://www.youtube.com/watch?v=d0xgyDs4EBc

"A Markov model is a statistical tool for stochastic (random) processes where the future state depends only on the current state, not the entire past history—this is the Markov Property or "memoryless" property, making them useful for modeling systems like weather, finance, etc." - Term: Markov model

read more
Quote: Arthur Mensch – Arthur Mensch – Mistral CEO

Quote: Arthur Mensch – Arthur Mensch – Mistral CEO

“In real life, enterprises are complex systems, and you can’t solve that with a single abstraction like AGI. AGI, to a large extent, is a north star of ‘I’m going to make the system better over time.'” – Arthur Mensch – Mistral CEO

Arthur Mensch, CEO of Mistral AI, offers a grounded perspective on artificial general intelligence (AGI), emphasising its role as an aspirational guide rather than a practical fix for intricate business challenges. In a recent Big Technology Podcast interview with Alex Kantrowitz on 16 January 2026, Mensch highlighted how enterprises function as complex systems that defy singular abstractions like AGI, positioning it instead as a directional ‘north star’ for incremental system improvements. This view aligns with his longstanding scepticism towards AGI hype, rooted in his self-described strong atheism and belief that such rhetoric equates to ‘creating God’1,2,3,4.

Who is Arthur Mensch?

Born in Paris, Arthur Mensch, aged 31, is a French entrepreneur and AI researcher who co-founded Mistral AI in 2023 alongside former Meta engineers Timothée Lacroix and Guillaume Lample. Before Mistral, Mensch worked as an engineer at Google DeepMind’s Paris lab, gaining expertise in advanced AI models2,4. His venture quickly rose to prominence, positioning Europe as a contender in the AI landscape dominated by US giants. Mistral’s models, including open-weight offerings, have secured partnerships like one with Microsoft in early 2024, while attracting support from the French government and investors such as former digital minister Cédric O2,4. Mensch advocates for a ‘European champion’ in AI to counterbalance cultural influences from American tech firms, stressing that AI shapes global perceptions and values2. He warns against over-reliance on US competitors for AI standards, pushing for lighter European regulations to foster innovation4.

Context of the Quote

Mensch’s statement emerged amid intensifying AI debates, delivered on a podcast discussing real-world AI applications just two days before this post. It reflects his consistent dismissal of AGI as an unattainable, quasi-religious pursuit, a stance he had articulated in a 2024 New York Times interview: ‘The whole AGI rhetoric is about creating God. I don’t believe in God. I’m a strong atheist. So I don’t believe in AGI’1,2,3,4. Unlike peers forecasting AGI’s imminent arrival, Mensch prioritises practical AI tools that enhance productivity, predicting rapid workforce retraining needs within two years rather than a decade4. He critiques Big Tech’s open-source strategies as competitive ploys and emphasises culturally attuned AI development1,2. This podcast remark builds on those themes, applying them to enterprise complexity where iterative progress trumps hypothetical superintelligence.

Leading Theorists on AGI and Complex Systems

The discourse around AGI and its limits in complex systems draws from pioneering theorists in AI, cybernetics, and systems theory.

  • Alan Turing (1912-1954): Laid AI foundations with his 1950 ‘Computing Machinery and Intelligence’ paper, proposing the Turing Test for machine intelligence. He envisioned machines mimicking human cognition but focused on computable problems rather than god-like generality.
  • Norbert Wiener (1894-1964): Founder of cybernetics, which studies control and communication in animals and machines. In Cybernetics (1948), Wiener described enterprises and societies as dynamic feedback systems resistant to simple models, prefiguring Mensch’s complexity argument.
  • John McCarthy (1927-2011): Coined ‘artificial intelligence’ in 1956 at the Dartmouth Conference, distinguishing narrow AI from general forms. He advocated high-level programming for generality but recognised real-world messiness.
  • Demis Hassabis: Google DeepMind CEO and Mensch’s former colleague, predicts AGI within years, viewing it as AI matching human versatility across tasks. Hassabis emphasises multimodal learning and lessons from game-playing systems such as AlphaGo.4
  • Sam Altman and Elon Musk: OpenAI’s Altman warns of AGI risks like ‘subtle misalignments’ while pursuing it as transformative; Musk forecasts superhuman AI by late 2025 and sues OpenAI over profit shifts3,4. Both treat AGI as epochal, contrasting Mensch’s pragmatism.

These figures highlight a divide: early theorists like Wiener stressed systemic complexity, while modern leaders like Hassabis chase generality. Mensch bridges this by favouring commoditised, improvable AI over AGI mythology.

Implications for AI and Enterprise

Mensch’s philosophy underscores AI’s commoditisation, where models like Mistral’s drive efficiency without superintelligence. This resonates with Europe’s push for sovereign AI. As enterprises navigate complexity, his ‘north star’ metaphor encourages sustained progress over speculative leaps.

References

1. https://www.businessinsider.com/mistrals-ceo-said-obsession-with-agi-about-creating-god-2024-4

2. https://futurism.com/the-byte/mistral-ceo-agi-god

3. https://www.benzinga.com/news/24/04/38266018/mistral-ceo-shades-openais-sam-altman-says-obsession-with-reaching-agi-is-about-creating-god

4. https://fortune.com/europe/article/mistral-boss-tech-ceos-obsession-ai-outsmarting-humans-very-religious-fascination/

5. https://www.binance.com/en/square/post/6742502031714

6. https://www.christianpost.com/cartoon/musk-to-altman-what-are-tech-moguls-saying-about-ai-and-agi.html?page=5

"In real life, enterprises are complex systems, and you can’t solve that with a single abstraction like AGI. AGI, to a large extent, is a north star of 'I’m going to make the system better over time.'" - Quote: Arthur Mensch

read more
Quote: Andrej Karpathy – Previously Director of AI at Tesla, founding team at OpenAI

Quote: Andrej Karpathy – Previously Director of AI at Tesla, founding team at OpenAI

“Programming is becoming unrecognizable. You’re not typing computer code into an editor like the way things were since computers were invented, that era is over. You’re spinning up AI agents, giving them tasks in English and managing and reviewing their work in parallel.” – Andrej Karpathy – Previously Director of AI at Tesla, founding team at OpenAI

This statement captures a pivotal moment in the evolution of software development, where traditional coding practices are giving way to a new era dominated by AI agents. Spoken by Andrej Karpathy, a visionary in artificial intelligence, it reflects the rapid transformation driven by large language models (LLMs) and autonomous systems. Karpathy’s insight underscores how programming is shifting from manual code entry to orchestrating intelligent agents via natural language, marking the end of an era that began with the earliest computers.

About Andrej Karpathy

Andrej Karpathy is a leading figure in AI, renowned for his contributions to deep learning and computer vision. A founding member of OpenAI in 2015, he played a key role in pioneering advancements in generative models and neural networks. Later, as Director of AI at Tesla, he led the Autopilot vision team, developing autonomous driving technologies that pushed the boundaries of real-world AI deployment. Today, he is building Eureka Labs, an AI-native educational platform. His talks and writings, such as ‘Software Is Changing (Again),’ articulate the shift to ‘Software 3.0,’ where LLMs enable programming in natural language like English.1,2,3

Karpathy’s line struck a nerve because it didn’t describe a distant future. It sounded like a description of what many engineers were already starting to experience in early 2026. The shift he’s talking about is less about writing code and more about orchestrating work – breaking problems into pieces, describing them in plain language, and then supervising agents that actually execute them.

The February Leap: Codex 5.2 and Claude Code

What made this moment feel like a real inflection was the quality jump in early 2026. When tools like ChatGPT Codex 5.2 and Claude Code landed in February, they weren’t just “better autocomplete.” They could stay on task for long, multi-step workflows, recover from errors, and push through the kind of friction that used to send developers back to the keyboard.

Karpathy has described this himself: coding agents that “basically didn’t work before December and basically work since,” with noticeably higher quality, long-term coherence, and tenacity. The February releases crystallised that shift. What used to be a weekend project became something you could kick off, let the agent run for 20–30 minutes, and then review – all while thinking about the next layer of the system rather than the syntax of the current one.

A New Kind of Programming Workflow

The pattern Karpathy is describing is less “pair programming with an autocomplete” and more “manager-style delegation.” You frame a task in English, give the agent context, tools, and constraints, and then let it run multiple steps in parallel – installing dependencies, writing tests, debugging, and even documenting the outcome. You then review outputs, steer the next round, and gradually refine the agent’s instructions.

This isn’t a replacement for engineering judgment. It’s a layer on top: your job becomes decomposing work, defining what success looks like, and deciding which parts to hand off and which to keep close. The “productivity flywheel” turns faster when you can treat the agent as a high-leverage assistant that can keep going while you move up the stack.
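To make the shape of this workflow concrete, here is a schematic Python sketch – every function, task string and framework detail below is hypothetical, standing in for whatever agent tooling is actually in use:

from concurrent.futures import ThreadPoolExecutor

def spin_up_agent(task: str, context: str) -> str:
    """Placeholder for launching a coding agent on a natural-language task."""
    # A real implementation would call an agent framework or API here.
    return f"[agent output for: {task}]"

tasks = [
    "Write unit tests for the billing module",
    "Migrate the config loader to TOML",
    "Document the public API of the payments service",
]

# Manager-style delegation: kick off several agents in parallel...
with ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda t: spin_up_agent(t, context="repo @ HEAD"), tasks))

# ...then review each result and decide what to re-run, refine, or keep close.
for task, result in zip(tasks, results):
    print(f"REVIEW: {task} -> {result[:60]}")

The code itself is trivial; the point is the division of labour it encodes – tasks framed in English, executed in parallel, and funnelled back through human review.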

Software 3.0, In Practice

Karpathy has long framed this as Software 3.0 – the evolution of programming from:

  • Software 1.0: explicit code written in languages like C++ or Python, where the programmer spells out every step.

  • Software 2.0: neural networks trained on data, where the “program” is a dataset and training objective rather than a long list of rules.

  • Software 3.0: natural-language-driven agents that compose systems, debug problems, and manage long-running workflows, while still relying on 1.0 and 2.0 components underneath.

The February releases of Codex 5.2 and Claude Code made Software 3.0 feel tangible. It’s no longer a thought experiment; it’s something practitioners can use today for tasks that are well-specified and easy to verify – infrastructure setup, data pipelines, internal tooling, and boilerplate-heavy workflows.

What This Means for Practitioners

The implication isn’t that “everyone will be a programmer.” It’s that the nature of programming is changing. The most valuable skills are no longer just fluency in a language, but:

  • Decomposing complex work into agent-friendly tasks,

  • Designing interfaces and documentation that models can use effectively,

  • Building feedback loops and guardrails so agents can operate safely, and

  • Knowing when to lean in (complex, under-specified logic) and when to lean out (repetitive, well-structured work).

Karpathy’s point is that the default workflow is no longer “you write code line by line.” The era where the editor is the centre of the universe is ending. Programming is becoming less about keystrokes and more about direction, oversight, and iteration – with AI agents as the new layer of execution in between.

Leading Theorists and Influences

Karpathy’s views draw from pioneers in AI and agents. Ilya Sutskever, his OpenAI co-founder, advanced sequence models like GPT, enabling natural language programming. At Tesla, Ashok Elluswamy and the Autopilot team influenced his emphasis on human-AI loops and ‘autonomy sliders.’ Broader influences include Andrew Ng, under whom Karpathy studied at Stanford, popularising deep learning education, and Yann LeCun, whose convolutional networks underpin vision AI. Recent agentic work echoes Yohei Nakajima’s BabyAGI (2023), an early autonomous agent framework, and Microsoft’s AutoGen for multi-agent systems. Karpathy positions agents as a new ‘consumer of digital information,’ urging infrastructure redesign for LLM autonomy.1,2,3

Implications for the Future

This shift promises unprecedented productivity but demands new skills: fluency across paradigms, agent management, and ‘applied psychology of neural nets.’ As Karpathy notes, ‘everyone is now a programmer’ via English, yet professionals must build for agents – rewriting codebases and creating agent-friendly interfaces. With LLM capabilities surging by late 2025, 2026 heralds a ‘high energy’ phase of industry adaptation.1,4

 

References

1. https://www.businessinsider.com/agentic-engineering-andrej-karpathy-vibe-coding-2026-2

2. https://www.youtube.com/watch?v=LCEmiRjPEtQ

3. https://singjupost.com/andrej-karpathy-software-is-changing-again/

4. https://paweldubiel.com/42l1%E2%81%9D–Andrej-Karpathy-quote-26-Jan-2026-

5. https://www.christopherspenn.com/2024/07/mind-readings-generative-ai-as-a-programming-language/

6. https://www.ycombinator.com/library/MW-andrej-karpathy-software-is-changing-again

7. https://karpathy.ai/tweets.html

 

"Programming is becoming unrecognizable. You’re not typing computer code into an editor like the way things were since computers were invented, that era is over. You're spinning up AI agents, giving them tasks in English and managing and reviewing their work in parallel." - Quote: Andrej Karpathy - Previously Director of AI at Tesla, founding team at OpenAI

read more
Term: Agent2Agent (A2A)

Term: Agent2Agent (A2A)

“The Agent2Agent (A2A) protocol is an open standard that enables different AI agents, built by various vendors and using diverse frameworks, to seamlessly communicate, collaborate, and coordinate on complex tasks.” – Agent2Agent (A2A)

A2A addresses the challenges of multi-agent systems by providing a vendor-neutral framework for agents to discover each other, exchange capabilities, delegate tasks, and manage complex workflows.1,2,3 It leverages familiar web standards such as HTTP, JSON-RPC, and Server-Sent Events (SSE) to ensure reliable, interoperable interactions while incorporating enterprise-grade security features like JWT and OIDC authentication.1

Key Features of A2A

  • Agent Discovery and Capabilities Exchange: Agents publish standardised ‘Agent Cards’ (JSON files) that detail their abilities, enabling dynamic discovery and task negotiation (see the sketch after this list).1,3
  • Structured Task Management: Defines protocols for task delegation using unique task IDs, supporting states like submitted, working, and completed, ideal for long-running processes.1,3
  • Standards-Based Communication: Uses HTTP POST requests and structured JSON messages for consistent messaging between client agents (task initiators) and remote agents (task executors).1,3
  • Enterprise Security and Privacy: Includes encryption, fine-grained authorisation, payload validation, and support for various authentication schemes to protect data and identities.1,2
  • Support for Collaboration: Facilitates message exchanges for context sharing, real-time updates via asynchronous notifications, and dynamic UX negotiation.1,3
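To make the Agent Card idea concrete, here is a hedged Python sketch of what such a card might contain – the field names follow the spirit of published descriptions but should not be read as the normative schema:

import json

# Illustrative Agent Card for a hypothetical inventory agent.
# Field names are indicative only; consult the A2A spec for the real schema.
agent_card = {
    "name": "inventory-agent",
    "description": "Tracks stock levels and reorder points",
    "url": "https://agents.example.com/inventory",  # hypothetical endpoint
    "capabilities": {"streaming": True, "pushNotifications": False},
    "skills": [
        {"id": "check-stock", "description": "Report current stock for a SKU"},
    ],
    "authentication": {"schemes": ["bearer"]},  # e.g. a JWT bearer token
}

print(json.dumps(agent_card, indent=2))

A client agent would fetch a card like this to decide whether the remote agent offers the capability it needs before delegating a task.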

How A2A Works

A2A operates on a client-server model: the client agent formulates tasks and identifies suitable remote agents via Agent Cards, then communicates structured requests over HTTP.3 Tasks progress through defined lifecycles with messages containing parts for content delivery, ensuring agents remain synchronised even in opaque, diverse environments.1,3

For example, in e-commerce, an inventory agent could use A2A to collaborate with demand forecasting, customer service, and logistics agents to optimise supply chains.5
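As a sketch of the client side of that flow – with an illustrative endpoint, and method and field names that are indicative rather than normative – a task-initiation request might look like this:

import uuid
import requests  # third-party HTTP client

# Illustrative JSON-RPC task request; real method and field names may differ.
task_request = {
    "jsonrpc": "2.0",
    "id": str(uuid.uuid4()),
    "method": "tasks/send",
    "params": {
        "id": str(uuid.uuid4()),  # unique task ID used to track the lifecycle
        "message": {
            "role": "user",
            "parts": [{"type": "text", "text": "Forecast demand for SKU-123"}],
        },
    },
}

# The remote agent's URL would be taken from its Agent Card.
response = requests.post(
    "https://agents.example.com/forecasting",  # hypothetical endpoint
    json=task_request,
    headers={"Authorization": "Bearer <token>"},
    timeout=30,
)
print(response.json())  # expect a task object in a state such as "submitted" or "working"

The task ID carried in the request is what lets both sides track progress through the submitted, working and completed states described above.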

Key Theorist: Sundar Pichai and Google’s Role in A2A

No single ‘strategy theorist’ in the traditional academic sense originated A2A, as it is a practical engineering protocol driven by industry leaders. The most directly associated figure is Sundar Pichai, CEO of Google and Alphabet Inc., whose strategic vision propelled its development and announcement.4

Biography of Sundar Pichai

Born in 1972 in Madurai, India, Sundar Pichai grew up in a modest middle-class family. He excelled academically, earning a degree in metallurgical engineering from the Indian Institute of Technology Kharagpur in 1993. Pichai then pursued higher education in the US, obtaining an MS in materials science from Stanford University and an MBA from the Wharton School of the University of Pennsylvania.

Joining Google in 2004, Pichai initially led product management for Google Chrome, transforming it into the world’s most-used browser through innovative strategies emphasising speed, security, and user-centric design. His success led to promotions: Vice President of Product Development (2008), overseeing Chrome OS and apps; Senior VP for Chrome and Android (2012); and Product Chief overseeing all of Google’s products (2014). In 2015, he became CEO of Google, and in 2019, CEO of parent company Alphabet Inc.4

Relationship to A2A

Under Pichai’s leadership, Google prioritised AI agent interoperability as part of its broader AI strategy, culminating in the A2A protocol’s announcement via the Google Developers Blog in 2025.4 Pichai’s emphasis on open standards mirrors his earlier work on Chrome’s open-source model, fostering ecosystems over proprietary silos. A2A embodies his vision for ‘a new era of agent interoperability,’ enabling secure multi-agent collaboration across frameworks – much like Android unified mobile ecosystems.1,4

Pichai’s strategic oversight ensured A2A adhered to principles of discovery, interoperability, delegation, and trust, positioning Google as a leader in agentic AI infrastructure while inviting broad industry adoption through its open GitHub repository.7


References

1. https://www.solo.io/topics/ai-infrastructure/what-is-a2a

2. https://developer.pingidentity.com/identity-for-ai/agents/idai-what-is-a2a.html

3. https://www.descope.com/learn/post/a2a

4. https://developers.googleblog.com/en/a2a-a-new-era-of-agent-interoperability/

5. https://www.alumio.com/blog/what-is-a2a-agent2agent-ai-protocol

6. https://www.credal.ai/blog/what-is-agent2agent-a2a-protocol

7. https://github.com/a2aproject/A2A

8. https://ai.pydantic.dev/a2a/

9. https://www.youtube.com/watch?v=Tud9HLTk8hg

"The Agent2Agent (A2A) protocol is an open standard that enables different AI agents, built by various vendors and using diverse frameworks, to seamlessly communicate, collaborate, and coordinate on complex tasks." - Term: Agent2Agent (A2A)

read more
Quote: Arthur Mensch – Mistral CEO

Quote: Arthur Mensch – Mistral CEO

“There’s no such thing as one system that is going to be solving all the problems of the world. You don’t have any human able to solve every task in the world. You of course need some amount of specialisation to solve problems.” – Arthur Mensch – Mistral CEO

Arthur Mensch’s observation about specialisation in artificial intelligence reflects a fundamental principle that has shaped not only his work at Mistral AI, but also the broader trajectory of how we think about building intelligent systems. The statement emerges from a pragmatic understanding of complexity – one that draws parallels between human expertise and machine learning, whilst challenging the prevailing assumption that larger, more generalised models represent the inevitable future of AI.

The Context: A Moment of Inflection in AI Development

When Mensch made this statement on the Big Technology Podcast in January 2026, the artificial intelligence landscape was at a critical juncture. The initial euphoria surrounding large language models like GPT-4 and their apparent ability to handle diverse tasks had begun to give way to a more nuanced understanding of their limitations. Organisations deploying these systems were discovering that whilst general-purpose models could perform adequately across many domains, they rarely excelled in any single domain. The cost of running these massive systems, combined with their mediocre performance on specialised tasks, created an opening for a different approach – one that Mensch and Mistral AI have been actively pursuing since the company’s founding in May 2023.

Mensch’s background as a machine learning researcher with a PhD in machine learning and functional magnetic resonance imaging, combined with his experience at Google DeepMind working on large language models, positioned him uniquely to recognise this gap. His two co-founders, Guillaume Lample and Timothée Lacroix, brought complementary expertise from Meta’s AI research division. Together, they had witnessed firsthand the capabilities and constraints of cutting-edge AI systems, and they recognised that the industry was pursuing a path that, whilst impressive in breadth, lacked depth.

The Philosophy Behind Mistral’s Approach

Mistral AI’s strategy directly operationalises Mensch’s philosophy about specialisation. Rather than attempting to build a single monolithic system that claims to solve all problems, the company has focused on developing smaller, more efficient models that can be tailored to specific use cases. This approach has proven remarkably successful: within four months of founding, Mistral released its 7B model, which outperformed larger competitors in many benchmarks. The company achieved unicorn status – a valuation exceeding $1 billion – within its first year, a trajectory that vindicated Mensch’s conviction that specialisation was not merely philosophically sound but commercially viable.

The emphasis on smaller models that can run locally on devices, rather than requiring centralised cloud infrastructure, represents a practical manifestation of this specialisation principle. A financial services institution, for instance, can deploy a model specifically optimised for fraud detection or regulatory compliance, rather than relying on a general-purpose system that must compromise between countless competing objectives. A healthcare provider can implement a model trained on medical literature and patient data, rather than one diluted by training on the entire internet. This is not merely more efficient; it is fundamentally more effective.

Theoretical Foundations: The Specialisation Principle in Machine Learning

Mensch’s assertion draws upon well-established principles in machine learning and cognitive science. The concept of specialisation in learning systems has deep roots in the field. In the 1990s and 2000s, researchers including Yann LeCun and Geoffrey Hinton – pioneers in deep learning – recognised that neural networks trained on specific tasks often outperformed more generalised architectures. This observation relates to the classical “bias-variance tradeoff”: systems optimised for particular problems can achieve superior performance by accepting constraints – inductive biases – that would be inappropriate for general-purpose systems.
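For readers who want the formal statement: for a model \hat{f} fitted to random training data and evaluated on noisy targets y = f(x) + \varepsilon, with \mathbb{E}[\varepsilon] = 0 and \mathrm{Var}(\varepsilon) = \sigma^2, the expected squared prediction error decomposes as

\mathbb{E}\big[(y - \hat{f}(x))^2\big] = \underbrace{\big(\mathbb{E}[\hat{f}(x)] - f(x)\big)^2}_{\text{bias}^2} + \underbrace{\mathrm{Var}\big(\hat{f}(x)\big)}_{\text{variance}} + \underbrace{\sigma^2}_{\text{irreducible noise}}.

A specialised model accepts higher bias outside its domain in exchange for lower variance – and often lower bias – within it.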

The analogy to human expertise is particularly apt. A world-class cardiologist possesses knowledge and intuition that a general practitioner cannot match, despite the latter’s broader medical knowledge. This specialisation comes from years of focused study, deliberate practice, and exposure to patterns specific to their domain. Similarly, an AI system trained extensively on financial data, with architectural choices optimised for temporal sequences and numerical relationships, will outperform a general model on financial forecasting tasks. The human brain itself demonstrates this principle: different regions specialise in different functions, and whilst there is integration across these regions, the specialisation is fundamental to cognitive capability.

This principle also aligns with recent research in transfer learning and domain adaptation. Researchers including Fei-Fei Li at Stanford have demonstrated that models pre-trained on large, diverse datasets often require substantial fine-tuning to perform well on specific tasks. The fine-tuning process essentially involves re-specialising the model, suggesting that the initial generalisation, whilst useful as a starting point, is not the endpoint of effective AI development.
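A minimal PyTorch-style sketch of that re-specialisation step – all layer sizes, names and data here are illustrative, not any particular production recipe:

import torch
import torch.nn as nn

# Stand-in for a pre-trained, general-purpose feature extractor.
backbone = nn.Sequential(
    nn.Linear(768, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
)

# Freeze the general representation...
for param in backbone.parameters():
    param.requires_grad = False

# ...and attach a small task-specific head (e.g. fraud / not-fraud).
head = nn.Linear(128, 2)
model = nn.Sequential(backbone, head)

# Only the head's parameters are updated during fine-tuning.
optimiser = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 768)        # a batch of domain-specific features (synthetic here)
y = torch.randint(0, 2, (32,))  # domain-specific labels (synthetic here)

optimiser.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimiser.step()

The frozen backbone keeps the general-purpose knowledge; the small trained head is where the specialisation lives.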

The Commoditisation Argument

Embedded within Mensch’s statement is an implicit argument about the commoditisation of AI. If a single system could genuinely solve all problems, it would represent the ultimate commodity – a universal tool that would rapidly become standardised and undifferentiated. The fact that no such system exists, and that the laws of machine learning suggest none can exist, means that competitive advantage in AI will increasingly accrue to those who can build specialised systems tailored to specific domains and use cases.

This has profound implications for the structure of the AI industry. Rather than a winner-take-all market dominated by a handful of companies with the largest models, Mensch’s vision suggests a more distributed ecosystem where numerous companies build specialised solutions. Mistral’s open-source strategy supports this vision: by releasing models that developers can fine-tune and adapt, the company enables a proliferation of specialised applications rather than enforcing dependence on a single centralised system.

The comparison to human society is instructive. We do not have a single human who solves all problems; instead, we have a complex division of labour with specialists in countless domains. The most advanced societies are those that have developed the most sophisticated mechanisms for specialisation and coordination. An AI ecosystem that mirrors this structure – with specialised systems coordinating to solve complex problems – may ultimately prove more capable and more resilient than one built around monolithic general-purpose systems.

Implications for the Future of Work and AI Deployment

Mensch has articulated elsewhere his vision for how AI will transform work. Rather than replacing human workers wholesale, AI will handle routine, well-defined tasks, freeing humans to focus on activities that require creativity, relationship management, and novel problem-solving. This vision is entirely consistent with the specialisation principle: specialised AI systems handle their specific domains, whilst humans focus on the uniquely human aspects of work. A specialised AI system for document processing, another for customer service routing, and another for data analysis can work in concert, each excelling in its domain, with human judgment and creativity orchestrating their outputs.

This approach also addresses concerns about AI safety and alignment. A specialised system optimised for a specific task, with clear boundaries and well-defined objectives, is inherently more interpretable and controllable than a general-purpose system trained to optimise for performance across thousands of disparate tasks. The constraints that make a system specialised also make it more trustworthy.

The Broader Intellectual Landscape

Mensch’s perspective aligns with emerging consensus among leading AI researchers. Yann LeCun, Chief AI Scientist at Meta, has increasingly emphasised the limitations of large language models and the need for AI systems with different architectures and training approaches for different tasks. Demis Hassabis, CEO of Google DeepMind, has similarly highlighted the importance of building AI systems with appropriate inductive biases for their intended domains. The field is gradually moving away from the assumption that scale and generality are sufficient, towards a more nuanced understanding of how to build effective AI systems.

This intellectual shift reflects a maturation of the field. The initial excitement about large language models was justified-they represented a genuine breakthrough in our ability to build systems that could engage in flexible, language-based reasoning. However, the assumption that this breakthrough would generalise to all domains, and that bigger models would always be better, has proven naive. The next phase of AI development will likely be characterised by greater diversity in approaches, architectures, and training methodologies, with specialisation playing an increasingly central role.

Mensch’s Role in Shaping This Future

Arthur Mensch’s significance lies not merely in his articulation of these principles, but in his demonstrated ability to execute on them. Mistral AI’s rapid ascent – achieving a $2.1 billion valuation within approximately two years of founding – suggests that the market recognises the validity of the specialisation approach. The company’s success in attracting top talent, securing substantial venture funding, and building a platform that developers actively choose to build upon indicates that Mensch’s vision resonates with practitioners who understand the practical constraints of deploying AI systems.

In 2024, Mensch was recognised on TIME’s 100 Next list, an acknowledgment of his influence on the future direction of technology. The recognition highlighted his ability to combine “bold vision with execution,” his commitment to democratising AI through open-source models, and his foresight in addressing gaps overlooked by others. These qualities-vision, execution, and attention to overlooked opportunities-are precisely what the specialisation principle requires.

Mensch’s background as an academic researcher who transitioned to entrepreneurship also shapes his approach. Unlike entrepreneurs who might prioritise rapid growth and market dominance above all else, Mensch brings a researcher’s commitment to understanding fundamental principles. His insistence on specialisation is not a marketing narrative but a reflection of his deep understanding of how learning systems actually work.

Conclusion: A Principle for the Age of AI

The statement that “there’s no such thing as one system that is going to be solving all the problems of the world” may seem obvious in retrospect, but it represents a crucial corrective to the prevailing assumptions of the AI industry. It grounds AI development in principles drawn from human expertise, cognitive science, and machine learning theory. It suggests that the future of AI is not a race to build ever-larger models, but rather a more sophisticated ecosystem of specialised systems, each optimised for its domain, working in concert to solve complex problems.

For organisations deploying AI, for researchers developing new approaches, and for policymakers considering how to regulate AI development, Mensch’s principle offers clear guidance: invest in specialisation, build systems with appropriate constraints for their domains, and recognise that the most powerful AI systems will likely be those that do one thing exceptionally well, rather than many things adequately. In an age of increasing complexity, specialisation is not a limitation but a necessity – and a source of genuine competitive advantage.

References

1. https://www.allamericanspeakers.com/celebritytalentbios/Arthur+Mensch/462557

2. https://www.mckinsey.com/featured-insights/insights-on-europe/videos-and-podcasts/creating-a-european-ai-unicorn-interview-with-arthur-mensch-ceo-of-mistral-ai

3. https://blog.eladgil.com/p/discussion-w-arthur-mensch-ceo-of

4. https://time.com/collections/time100-next-2024/7023471/arthur-mensch-2/

5. https://thecreatorsai.com/p/the-story-of-arthur-mensch-how-to

6. https://www.antoinebuteau.com/lessons-from-arthur-mensch/

"There’s no such thing as one system that is going to be solving all the problems of the world. You don’t have any human able to solve every task in the world. You of course need some amount of specialisation to solve problems." - Quote: Arthur Mensch


Quote: Jamie Dimon – JP Morgan Chase CEO

“I see a couple people doing some dumb things. They’re just doing dumb things to create NII.” – Jamie Dimon – JP Morgan Chase CEO

In a candid assessment delivered at JPMorgan Chase’s 2026 company update on 23 February, CEO Jamie Dimon voiced profound concerns about the financial landscape, drawing direct parallels to the reckless lending practices that precipitated the 2008 global financial crisis. He observed competitors engaging in imprudent strategies purely to inflate net interest income (NII), a key profitability metric derived from lending spreads and investments1,3. This remark underscores Dimon’s longstanding vigilance amid buoyant markets, where high asset prices and surging volumes foster complacency1,2.

Jamie Dimon’s Background and Leadership

Jamie Dimon, born in 1956 in New York into a Greek-American family, embodies the archetype of a battle-hardened banker. Educated at Tufts University and Harvard Business School, he ascended through the ranks at American Express and the Commercial Credit businesses that grew into Citigroup, before taking the helm of Bank One in 2000, where he orchestrated a remarkable turnaround. He joined JPMorgan Chase following its 2004 acquisition of Bank One, becoming CEO at the end of 2005 and steering the institution through the 2008 crisis as one of the few major banks to emerge unscathed. Under his stewardship, JPMorgan has ballooned into the world’s most valuable bank by market capitalisation, with Dimon earning renown for his prescient risk management and forthright annual shareholder letters1. His tenure has been marked by navigating geopolitical tensions, regulatory scrutiny, and technological disruptions, all while prioritising capital strength over opportunistic growth.

Context of the Quote: A Market on the Brink?

Dimon’s comments arrived against a backdrop of intensifying competition in lending and private credit markets, where firms scramble to capture market share amid elevated interest rates and economic optimism. He likened the current environment to 2005-2007, when ‘the rising tide was lifting all boats’ and excessive leverage permeated the system, culminating in subprime mortgage meltdowns1,2,3. Recent indicators, such as the collapse of subprime auto lender Tricolor Holdings and debt-burdened First Brands, evoked Dimon’s ‘cockroach theory’ – spotting one signals an infestation1. Broader anxieties include artificial intelligence’s disruptive potential across sectors like software, utilities, and telecommunications, mirroring unforeseen vulnerabilities exposed in 20082,3. Despite S&P 500 highs, Dimon cautioned that credit cycles invariably turn, with surprises lurking in unexpected quarters3. JPMorgan, he affirmed, adheres strictly to underwriting standards, forgoing business rather than compromising1.

Leading Theorists on Financial Crises and Risk-Taking

Dimon’s perspective resonates with seminal theories on financial instability. Hyman Minsky, the American economist behind the ‘financial instability hypothesis’ (developed in the 1970s and 1980s), posited that stability breeds complacency, prompting speculative and Ponzi financing schemes that amplify booms into busts. Minsky argued that prolonged prosperity erodes risk aversion, much as Dimon describes today’s ‘dumb things’ done to chase NII1.

Complementing this, Charles Kindleberger’s Manias, Panics, and Crashes (1978, with later updated editions) outlines the anatomy of bubbles: displacement, boom, euphoria, profit-taking, and panic. Kindleberger, building on Minsky’s framework and his own historical analyses, highlighted herd behaviour and leverage as crisis harbingers, echoing Dimon’s pre-2008 parallels2.

Modern extensions include Raghuram Rajan, former IMF Chief Economist and Reserve Bank of India Governor, whose 2005 Jackson Hole speech presciently warned of incentives driving financial institutions towards systemic risks. Rajan’s ‘search for yield’ concept – akin to boosting NII through lax lending – anticipated 2008 excesses3.

Nouriel Roubini, dubbed ‘Dr Doom’, forecast the 2008 subprime debacle in 2006, emphasising global imbalances, debt overhangs, and asset bubbles. His framework aligns with Dimon’s cycle warnings, stressing how confluences of events – such as AI disruptions or policy shifts – can catalyse crises2.

These theorists collectively illuminate Dimon’s caution: markets’ euphoria masks fragility, demanding disciplined risk assessment amid competitive pressures.

Implications for Investors and Markets

  • Heightened Vigilance: Dimon’s stance signals potential volatility in private credit and lending, urging scrutiny of banks’ NII strategies.
  • Sectoral Risks: AI-driven upheavals could mirror 2008’s utility surprises, impacting software and beyond.
  • JPMorgan’s Edge: Conservative positioning may yield resilience, as proven in prior downturns.

Dimon’s words serve as a clarion call: prosperity’s siren song often precedes turbulence. Prudent navigation demands heeding history’s lessons.

References

1. https://www.businessinsider.com/jamie-dimon-banks-doing-dumb-things-2008-credit-crisis-warning-2026-2

2. https://economictimes.com/markets/stocks/news/jpmorgan-ceo-jamie-dimon-warns-ai-and-dumb-things-can-trigger-a-2008-like-crisis/articleshow/128770717.cms

3. https://www.news18.com/business/banking-finance/jpmorgan-chase-ceo-warns-of-dumb-risk-taking-by-financial-firms-sees-echoes-of-2008-crisis-ws-l-9926903.html

4. https://en.sedaily.com/international/2026/02/24/jpmorgan-ceo-dimon-warns-of-pre-2008-crisis-similarities

"I see a couple people doing some dumb things. They're just doing dumb things to create NII." - Quote: Jamie Dimon - JP Morgan Chase CEO


Term: AI skills

“Skills are essentially curated instructions containing best practices, guidelines, and workflows that AI can reference when performing particular types of work. They’re like expert manuals that help AI produce higher-quality outputs for specialised tasks.” – AI skills

AI skills are structured sets of curated instructions, best practices, guidelines, and workflows that artificial intelligence systems reference when performing particular types of work. They function as expert manuals or knowledge repositories, enabling AI to produce higher-quality outputs for specialised tasks by drawing on accumulated domain expertise and proven methodologies.

Unlike general-purpose AI capabilities, skills represent a layer of curation and refinement that transforms raw AI capacity into contextually appropriate, task-specific performance. They embody the principle that filter intelligence – the ability to distinguish valuable information from noise – has become essential in an AI-driven world, where the volume of available data and potential outputs far exceeds what any individual or system can meaningfully process.

Core Characteristics

  • Structured Knowledge: Skills organise information into actionable formats that AI systems can readily access and apply, rather than requiring the system to search through unstructured data.
  • Domain Specificity: Each skill is tailored to particular types of work, ensuring that AI outputs reflect the nuances, standards, and best practices of that domain.
  • Quality Enhancement: By constraining AI outputs to established guidelines and proven workflows, skills improve consistency, accuracy, and relevance compared to unconstrained generation.
  • Continuous Refinement: Like knowledge curation more broadly, skills require ongoing maintenance, verification, and updating to remain accurate and aligned with evolving practices.
  • Human-AI Collaboration: Skills represent the intersection of human expertise and AI capability – humans curate and validate the instructions; AI applies them at scale (see the sketch below).
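
To make these characteristics concrete, the sketch below shows one way a skill might be represented in code. It is purely illustrative – the Skill class, its fields, and the review-interval logic are assumptions made for this example rather than any particular vendor’s skill format – but it captures structured knowledge, domain specificity, and the metadata that continuous refinement requires.

    from dataclasses import dataclass
    from datetime import date, timedelta

    @dataclass
    class Skill:
        """A curated, domain-specific instruction set an AI system can reference."""
        name: str
        domain: str                # domain specificity: where the skill applies
        guidelines: list[str]      # curated best practices and quality standards
        workflow: list[str]        # ordered procedural steps (how, not just what)
        last_reviewed: date        # supports continuous refinement
        review_interval_days: int = 90

        def is_stale(self, today: date) -> bool:
            # Flag skills overdue for human re-verification.
            return (today - self.last_reviewed) > timedelta(days=self.review_interval_days)

        def as_prompt_context(self) -> str:
            # Render the skill as text supplied to the AI system alongside a task.
            rules = "\n".join(f"- {g}" for g in self.guidelines)
            steps = "\n".join(f"{i}. {s}" for i, s in enumerate(self.workflow, 1))
            return (f"Skill: {self.name} (domain: {self.domain})\n"
                    f"Guidelines:\n{rules}\nWorkflow:\n{steps}")

Rendering a skill into the model’s context in this way is what constrains generation to the curated standard: a documentation skill, for instance, would carry tone and terminology guidelines plus an ordered review workflow.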

Practical Applications

AI skills manifest across multiple contexts:

  • Learning and Development: Curated training materials, course recommendations, and procedural documentation that AI systems use to personalise employee learning pathways and deliver relevant content.
  • Content Generation: Guidelines for tone, style, accuracy standards, and domain-specific terminology that shape AI-generated text, ensuring outputs match organisational voice and quality expectations.
  • Technical Documentation: Structured workflows and best practices that enable AI to generate or organise software documentation, reducing search time and improving accessibility.
  • Knowledge Management: Taxonomies, metadata standards, and verification protocols that help AI systems organise, categorise, and validate information within organisational knowledge bases.
  • Decision Support: Curated decision trees, risk assessment frameworks, and contextual guidelines that enable AI to provide recommendations aligned with organisational values and risk tolerance.

The Relationship to Filter Intelligence

AI skills are fundamentally about curation – the process of selecting, organising, verifying, and enriching information to make it more useful and trustworthy. In an age where AI can generate vast quantities of content and analysis, the critical human skill is no longer the ability to process information (which AI can do at scale) but rather the ability to filter, judge, and curate what matters.

This reflects a broader shift in how organisations and individuals must operate. Traditional intelligence – the ability to learn facts and processes – can now be outsourced to AI. What cannot be outsourced is the judgment required to determine which AI outputs are accurate, which are misleading, and which are worth acting upon. AI skills encode this judgment into reusable, systematised form.

Implementation Considerations

Effective AI skills require:

  • Clear ownership and accountability for skill development and maintenance
  • Regular audits to identify outdated or conflicting guidance
  • Verification processes to ensure accuracy and relevance
  • Accessible documentation that explains not just what to do but why and when
  • Integration with broader content governance policies
  • Feedback loops that allow AI systems and human users to surface gaps or failures in skill application (see the sketch below)
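
As a minimal sketch of how this governance might be operationalised, the example below reuses the hypothetical Skill class from the earlier sketch; the registry, audit function, and feedback log are likewise illustrative assumptions rather than a reference to any real product.

    from datetime import date

    def audit_skills(registry: dict[str, Skill], today: date) -> list[str]:
        # Regular audit: return the names of skills overdue for review.
        return [name for name, skill in registry.items() if skill.is_stale(today)]

    feedback_log: list[dict] = []

    def record_feedback(skill_name: str, issue: str, reported_by: str) -> None:
        # Feedback loop: let users surface gaps or failures in skill application.
        feedback_log.append({"skill": skill_name, "issue": issue, "reported_by": reported_by})

    registry = {
        "doc-style": Skill(
            name="doc-style",
            domain="technical documentation",
            guidelines=["Use British English", "Define acronyms on first use"],
            workflow=["Draft", "Check terminology", "Peer review"],
            last_reviewed=date(2025, 10, 1),
        )
    }

    print(audit_skills(registry, date(2026, 2, 1)))  # ['doc-style'] is overdue at 90 days
    record_feedback("doc-style", "No guidance on code samples", "reviewer-a")

Clear ownership then reduces to assigning accountable reviewers per skill, and the audit output becomes a standing input to the organisation’s content governance cadence.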

Related Theorist: Charles Fadel

Charles Fadel is an educational theorist and thought leader whose work directly addresses the role of curation in an AI-driven world. His framework for education in the age of artificial intelligence places curation at the centre of how organisations and individuals must adapt.

Biographical Context

Fadel is the founder and chairman of the Center for Curriculum Redesign, an international non-profit organisation dedicated to rethinking education for the 21st century. He has held leadership roles at the World Economic Forum and has been instrumental in developing competency frameworks that emphasise skills beyond traditional knowledge acquisition. His background spans education policy, curriculum design, and futures thinking, positioning him at the intersection of pedagogy and technological change.

Relationship to AI Skills and Curation

In his work Education for the Age of AI, Fadel articulates a vision in which curation becomes a foundational competency. He argues that as AI systems become more powerful and capable of handling routine information processing, the human role must shift toward curating knowledge rather than merely acquiring it. This directly parallels the concept of AI skills: just as humans must learn to curate and judge AI outputs, organisations must curate the instructions and best practices that guide AI systems themselves.

Fadel distinguishes between three types of knowledge: declarative (facts and figures), procedural (how to do things), and conceptual (understanding why). He contends that in an AI age, organisations should prioritise procedural and conceptual knowledge – precisely the elements that constitute effective AI skills. An AI skill is not a collection of facts; it is a curated set of procedures and conceptual frameworks that enable consistent, high-quality performance.

Furthermore, Fadel emphasises what he calls the Drivers – agency, identity, purpose, and motivation – as essential human capacities that cannot be automated. AI skills, in this framework, are tools that free humans from routine tasks so they can focus on these higher-order capacities. By encoding best practices into skills, organisations enable their AI systems to handle specialised work whilst their human teams concentrate on judgment, creativity, and strategic direction.

Fadel’s work also highlights the importance of critical thinking and creativity as priority competencies. These are precisely the capacities required to develop, refine, and validate AI skills. Someone must decide what constitutes a best practice, what guidelines are most relevant, and when a skill requires updating. This curation work is fundamentally creative and critical – it requires immersion in a domain, the ability to distinguish signal from noise, and the judgment to make difficult trade-offs about what to include and what to exclude.

Conclusion

AI skills represent a practical instantiation of curation as a core competency in an AI-driven world. They embody the principle that as machines become more capable at processing information and generating outputs, human value increasingly lies in the ability to curate, judge, and refine. By systematising best practices and domain expertise into reusable skills, organisations create a feedback loop in which AI systems produce higher-quality work, humans can focus on higher-order judgment, and the organisation’s collective knowledge becomes more accessible and trustworthy.

References

1. https://ocasta.com/glossary/internal-comms/ai-driven-content-curation-for-employees/

2. https://www.digitallearninginstitute.com/blog/ai-transformative-effect-on-curating-content

3. https://www.glitter.io/glossary/knowledge-curation

4. https://futureiq.substack.com/p/curate-your-consumption-the-most

5. https://www.gettingsmart.com/2025/09/16/3-human-skills-that-make-you-irreplaceable-in-an-ai-world/

6. https://spencereducation.com/content-curation-ai/

7. https://www.techclass.com/resources/learning-and-development-articles/how-ld-teams-can-curate-smarter-content-with-ai

8. https://ploko.nl/en/knowledge-base/ai-content-curation/

"Skills are essentially curated instructions containing best practices, guidelines, and workflows that AI can reference when performing particular types of work. They're like expert manuals that help AI produce higher-quality outputs for specialised tasks." - Term: AI skills


Quote: Arthur Mensch – Mistral CEO

“The challenge the [AI] industry will face is that we need to get enterprises to value fast enough to justify all of the investments that are collectively being made.” – Arthur Mensch – Mistral CEO

Arthur Mensch, CEO of Mistral AI, captures a pivotal tension in the AI landscape with this observation from his appearance on the Big Technology Podcast, hosted by Alex Kantrowitz, on 16 January 2026. The quote underscores the urgency for AI companies to demonstrate tangible returns to enterprises, justifying the colossal investments pouring into compute, data, and talent across the sector1,3,4,5.

Who is Arthur Mensch?

Born in 1992, Arthur Mensch is a French entrepreneur and AI researcher whose career trajectory positions him at the forefront of Europe’s AI ambitions. A graduate of the prestigious École Polytechnique and École Normale Supérieure, he honed his expertise at Google DeepMind, where he contributed to foundational work in large language models. In 2023, Mensch co-founded Mistral AI alongside Guillaume Lample and Timothée Lacroix, both former Meta AI researchers frustrated with closed-source strategies at their prior employers. Mistral quickly emerged as a European powerhouse, releasing efficient open-source models that rival proprietary giants like OpenAI, while building an enterprise platform for custom deployments on private clouds and sovereign infrastructure1,3,4,5.

Mensch’s leadership emphasises efficiency over brute-force scaling. Early Mistral models prioritised training optimisation, enabling competitive performance with fewer resources. The company has raised significant funding to scale compute, yet Mensch stresses practical challenges: data shortages as a greater bottleneck than hardware, and the need for tools enabling enterprise integration, evaluation, and customisation2,3,4. He advocates open-source as a path to secure, evaluable AI, countering narratives blending existential risks with practical concerns like bias control and deployment safety3.

Context of the Quote

Delivered amid booming AI investments, Mensch’s remark addresses a core industry paradox. While headlines chase compute races, Mistral focuses on monetisation through enterprise solutions – connecting models to proprietary data, ensuring compliance, and delivering use cases. He notes enterprises struggle with AI pilots: lacking continuous integration tools, reliable agent deployment, and user-friendly customisation. Success demands proving value swiftly, as scaling models alone does not guarantee profitability3,4. This aligns with Mistral’s model: open-source foundations paired with paid enterprise orchestration, appealing to European governments wary of US hyperscaler dependence5.

Mensch dismisses hype around mass job losses, rebutting Anthropic’s Dario Amodei by calling such claims overstated marketing. Instead, he warns of ‘deskilling’ – over-reliance eroding critical thinking – mitigable via thoughtful design preserving human agency1. He critiques obsessions with AI surpassing human intelligence as quasi-religious, prioritising controllable, relational tasks where humans excel6.

Leading Theorists on AI Commoditisation and Enterprise Value

The quote resonates with theorists analysing AI’s commoditisation, where models become utilities akin to cloud compute, pressuring differentiation via enterprise value.

  • Elon Musk and OpenAI origins: Musk co-founded OpenAI in 2015, warning of AGI risks; the organisation’s later pivot to closed-source products such as ChatGPT sparked the ongoing commoditisation debates. Musk’s xAI now pushes open alternatives, echoing Mistral’s ethos3.
  • Yann LeCun (Meta): The Chief AI Scientist advocates open-source models and scaling, arguing that commoditised models democratise access but demand enterprise customisation to create value – mirroring Mistral’s data-connected platforms4.
  • Andrej Karpathy (ex-OpenAI/Tesla): Emphasises ‘software 2.0’ where models commoditise via fine-tuning; enterprises must build defensible moats through proprietary data and agents, as Mensch pursues3.
  • Dario Amodei (Anthropic): Contrasts Mensch by forecasting rapid white-collar displacement, yet both agree on deployment hurdles; Amodei’s safety focus highlights evaluation tools Mensch deems essential1.
  • Sam Altman (OpenAI): Drives enterprise via ChatGPT Enterprise, validating Mensch’s call for fast value capture amid trillion-dollar investments4.

These figures converge on a truth: AI’s future hinges not on model size, but on solving enterprise adoption – verifiable ROI, secure integration, and human-augmented workflows. Mensch’s insight, from a CEO scaling Europe’s AI contender, illuminates this path.

References

1. https://timesofindia.indiatimes.com/technology/tech-news/mistral-ai-ceo-arthur-mensch-warns-of-ai-deskilling-people-its-a-risk-that-/articleshow/122018232.cms

2. https://thisweekinstartups.com/episodes/KFfVAKTPqcz

3. https://blog.eladgil.com/p/discussion-w-arthur-mensch-ceo-of

4. https://www.youtube.com/watch?v=Z5H0Jl4ohv4

5. https://africa.businessinsider.com/news/a-leading-european-ai-startup-says-its-edge-over-silicon-valley-isnt-better-tech-its/3jft3sf

6. https://fortune.com/europe/article/mistral-boss-tech-ceos-obsession-ai-outsmarting-humans-very-religious-fascination/

"The challenge the [AI] industry will face is that we need to get enterprises to value fast enough to justify all of the investments that are collectively being made." - Quote: Arthur Mensch


Quote: Alap Shah – Lotus CIO, Citrini report co-author

“Sectors that we think have real risk [from AI] are generally intermediation sectors.” – Alap Shah – Lotus CIO, Citrini report co-author

Alap Shah, Chief Investment Officer at Lotus Technology Management and co-author of the influential Citrini Research report The 2028 Global Intelligence Crisis, issued this stark warning amid growing market unease over artificial intelligence’s transformative power. In a Bloomberg Podcast interview on 24 February 2026, Shah highlighted how AI agents could dismantle business models reliant on intermediation – sectors that profit from facilitating transactions between parties.1,2,4

Alap Shah’s Background and Expertise

Alap Shah serves as CIO at Lotus Technology Management, a firm focused on navigating technological disruptions in global markets. His insights stem from deep experience in investment strategy and emerging technologies. Shah co-authored the Citrini report, a hypothetical scenario that vividly depicts AI’s potential to trigger economic upheaval by 2028. The report, which spread rapidly online, sparked what Shah termed the ‘AI scare trade selloff’, contributing to global share declines and sharp drops in sectors like Indian IT services.1,3,5

Shah’s analysis emphasises AI’s capacity to erode ‘friction-based’ moats. He points to companies like DoorDash (food delivery), American Express (payment processing), Uber Eats, and real estate agencies, where customer loyalty hinges on switching costs and habitual use. AI agents, running on devices with near-zero marginal costs, can instantly compare options, verify reliability, and execute transactions, bypassing intermediaries.1,2,4

The Citrini Report: A Hypothetical Crisis Scenario

Published by Citrini Research, The 2028 Global Intelligence Crisis outlines a timeline beginning in mid-2027 with AI-driven defaults in private equity-backed software firms, escalating to widespread intermediation collapse. Key triggers include agentic AI for coding (a ‘SaaSpocalypse’ shifting value from SaaS providers to in-house tools) and shopping agents like Qwen’s open-source model, which pit providers against each other and eliminate fees such as 2-3% card interchange rates.2,4

The report predicts a ‘ghost GDP’ from mass white-collar layoffs – potentially 5% within 18 months in the US – creating a negative feedback loop: job cuts reduce spending, pressuring firms to invest more in AI, accelerating disruption. Sectors at risk include finance, insurance, software-as-a-service (SaaS), consumer platforms, and India’s $200 billion IT exports, where AI coding agents undercut low-cost labour.1,4,5,6

India faces particular vulnerability, with the report forecasting an 18% rupee depreciation and IMF discussions by Q1 2028 as its services surplus evaporates.5 The scenario also sees real estate commissions compressing dramatically – dubbed ‘agent on agent violence’ – as AI replicates agents’ knowledge.4

Shah’s Policy Prescriptions

To avert a downturn, Shah urges taxing AI ‘windfall gains’ or inference compute, with the proceeds funding transfers for displaced workers via proposals such as the ‘Transition Economy Act’ or ‘Shared AI Prosperity Act’. The boom’s beneficiaries include chipmakers, data centres, and AI labs like OpenAI, though Shah and his critics debate how long that surplus can be captured.1,3,4,6

Leading Theorists on AI Disruption and Intermediation

Shah’s views build on economists and thinkers analysing platform economics and automation:

  • Erik Brynjolfsson and Andrew McAfee (MIT): In The Second Machine Age (2014), they argue digital technologies disproportionately boost skilled workers while automating routine tasks, widening inequality – a precursor to Citrini’s white-collar focus.
  • Vitalik Buterin: The Ethereum co-founder is cited in critiques of the report for decentralised trust solutions (e.g., crypto verification) that could replace marketplaces, aligning with AI agents breaking intermediation oligopolies.2
  • Zvi Mowshowitz: In his Substack analysis of the Citrini report, he critiques its surplus distribution, arguing that ubiquitous agents commoditise intermediation without labs like OpenAI retaining a cut long-term.2
  • David Autor (MIT economist): His research on automation’s polarisation effect (hollowing out middle-skill jobs) informs fears of white-collar daisy chains in correlated productivity bets.

These theorists underscore AI’s dual nature: efficiency gains versus systemic risks, echoing Shah’s call for intervention.2

Market Reaction and Ongoing Debate

The report’s release fuelled unease, with the Nifty IT index dropping 3.6% amid broader selloffs. Shah expressed surprise at the scale of the reaction but views white-collar US jobs as the litmus test over the next five years, given that white-collar workers account for roughly 75% of US discretionary spending.3,5,6

References

1. https://www.startuphub.ai/ai-news/technology/2026/ai-s-scare-trade-fuels-market-unease

2. https://thezvi.substack.com/p/citrinis-scenario-is-a-great-but

3. https://www.tradingview.com/news/invezz:1dd9f8177094b:0-citrini-report-co-author-urges-ai-tax-after-report-sparks-sell-off/

4. https://www.citriniresearch.com/p/2028gic

5. https://www.firstpost.com/explainers/ai-boom-mass-layoffs-citrini-research-report-economy-impact-13983257.html

6. https://www.business-standard.com/world-news/citrini-report-author-urges-ai-tax-to-cushion-job-losses-in-united-states-126022500017_1.html

"Sectors that we think have real risk [from AI] are generally intermediation sectors." - Quote: Alap Shah - Lotus CIO, Citrini report co-author
