
Global Advisors | Quantified Strategy Consulting

Term: AI harness

“A harness (often called an agent harness or agentic harness) is an external software framework that wraps around a Large Language Model (LLM) to make it functional, durable, and capable of taking actions in the real world.” – AI harness

An AI harness is the external software framework that wraps around a Large Language Model (LLM) to extend its capabilities beyond text generation, enabling it to function as a persistent, tool-using agent capable of taking real-world actions. Without a harness, an LLM operates in isolation: it processes a single prompt and generates a response, with no memory of previous interactions and no ability to interact with external systems. The harness solves this fundamental limitation by providing the infrastructure necessary for autonomous, multi-step reasoning and execution.

Core Functions and Architecture

An AI harness performs several critical functions that transform a static language model into a dynamic agent. Memory management addresses one of the most significant constraints of raw LLMs: their fixed context windows and lack of persistent memory. Standard language models begin each session with no recollection of previous interactions, forcing them to operate without historical context. The harness implements memory systems (including persistent context logs, summaries, and external knowledge stores) that carry information across sessions, enabling the agent to learn from past experiences and maintain continuity across multiple interactions.

Tool execution and external action represent another essential function. Language models alone can only produce text; they cannot browse the web, execute code, query databases, or generate images. The harness monitors the model’s output for special tool-call commands and executes those operations on the model’s behalf. When a tool call is detected, the harness pauses text generation, executes the requested operation in the external environment (such as performing a web search or running code in a sandbox), and feeds the results back into the model’s context. This mechanism effectively gives the model “hands and eyes,” transforming textual intentions into tangible real-world actions.
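
The detect-execute-feed-back cycle described above can be made concrete with a short sketch. The following Python is a minimal illustration only: the `TOOL:` output convention, the `web_search` tool, and the scripted `call_llm` stub are hypothetical placeholders standing in for a real model client and real integrations.

```python
import json

def web_search(query: str) -> str:
    """Hypothetical tool: stand-in for a real search integration."""
    return f"(search results for '{query}')"

TOOLS = {"web_search": web_search}

_SCRIPT = iter([
    'TOOL: {"name": "web_search", "args": {"query": "largest moon of Saturn"}}',
    "Final answer: Titan is Saturn's largest moon.",
])

def call_llm(messages):
    """Stand-in for a real model API call; replays a scripted two-turn exchange."""
    return next(_SCRIPT)

def harness_loop(task: str, max_steps: int = 10) -> str:
    """Generate, detect tool calls, execute them, and feed results back until a final answer."""
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        output = call_llm(messages)
        messages.append({"role": "assistant", "content": output})
        if output.startswith("TOOL:"):
            call = json.loads(output[len("TOOL:"):])               # e.g. {"name": "web_search", "args": {...}}
            result = TOOLS[call["name"]](**call["args"])           # harness executes the tool on the model's behalf
            messages.append({"role": "tool", "content": result})   # result re-enters the model's context
        else:
            return output                                          # no tool call: treat output as the final answer
    return "Stopped after max_steps without a final answer."

print(harness_loop("What is the largest moon of Saturn?"))
```

In a production harness the scripted stub would be replaced by a genuine model call, and the tool registry would enforce sandboxing and permissions.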

Context management and orchestration ensure that information flows efficiently between the model and its environment. The harness determines what information is provided to the model at each step, managing the transient prompt whilst maintaining a persistent task log separate from the model’s immediate context. This separation is crucial for long-running projects: even if an AI agent instance stops and a new one begins later with no memory in the raw LLM, the project itself retains memory through files and logs maintained by the harness.
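
As a small illustration of that separation, the sketch below keeps the project’s memory in a file owned by the harness and injects only a recent slice of it into each transient prompt; the file name and truncation strategy are illustrative assumptions rather than any particular product’s design.

```python
from pathlib import Path

LOG = Path("project_task_log.md")   # persistent project memory owned by the harness, not the model

def record_progress(note: str) -> None:
    """Append a progress note that survives across agent instances and sessions."""
    with LOG.open("a", encoding="utf-8") as f:
        f.write(f"- {note}\n")

def build_prompt(task: str, max_chars: int = 2000) -> str:
    """Compose the transient prompt: the current task plus the tail of the persistent log."""
    history = LOG.read_text(encoding="utf-8")[-max_chars:] if LOG.exists() else "(no prior work)"
    return f"Project log so far:\n{history}\n\nCurrent task:\n{task}"

record_progress("Scaffolded the repository and wrote failing tests.")
print(build_prompt("Implement the parser so the existing tests pass."))
```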

Modular Design and Components

Contemporary harness architectures increasingly adopt modular designs that decompose agent functionality into interchangeable components. Research from ICML 2025 on “General Modular Harness for LLM Agents in Multi-Turn Gaming Environments” demonstrates this approach through three core modules: perception, which processes both low-resolution grid environments and visually complex images; memory, which stores recent trajectories and synthesises self-reflection signals enabling agents to critique past moves and adjust future plans; and reasoning, which integrates perceptual embeddings and memory traces to produce sequential decisions. This modular structure allows developers to toggle components on and off, systematically analysing each module’s contribution to overall performance.
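
The sketch below expresses that toggle-able, three-module structure in a few lines; the class and method names are invented for illustration and are not the paper’s actual interfaces.

```python
from dataclasses import dataclass, field

@dataclass
class ModularHarness:
    """Toy perception-memory-reasoning composition whose modules can be switched off for ablations."""
    use_perception: bool = True
    use_memory: bool = True
    trajectory: list = field(default_factory=list)   # recent (observation, action) pairs

    def perceive(self, observation: str) -> str:
        # Perception module: condense a raw grid or image description into a summary.
        return f"summary({observation})" if self.use_perception else observation

    def recall(self) -> list:
        # Memory module: return the recent trajectory as a self-reflection signal.
        return self.trajectory[-3:] if self.use_memory else []

    def decide(self, observation: str) -> str:
        # Reasoning module: combine percept and memory into the next action (stand-in for an LLM call).
        percept = self.perceive(observation)
        history = self.recall()
        action = f"act_on({percept}, history_len={len(history)})"
        self.trajectory.append((observation, action))
        return action

ablated = ModularHarness(use_memory=False)   # toggle off memory to measure its contribution
print(ablated.decide("grid_state_0"))
```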

Performance Impact and Practical Benefits

The empirical benefits of harness implementation are substantial. Models operating within a harness achieve significantly higher task success rates compared to un-harnessed baselines. In gaming environments, an AI with a memory and perception harness wins more games than the same AI without one. In coding tasks, an AI with a harness that runs and debugs its own code completes programming tasks that a standalone LLM would fail due to runtime errors. The harness essentially compensates for the model’s inherent weaknesses (lack of persistence, inability to access external knowledge, and propensity for errors), resulting in markedly improved real-world performance.

Perhaps most significantly, harnesses extend what an AI can accomplish without requiring model retraining. Want an LLM to handle images? Integrate a vision module or image captioning API into the harness. Need mathematical reasoning or complex logic? Add the appropriate tool or module. This extensibility makes harnesses economically valuable: two products using identical underlying LLMs can deliver vastly different user experiences based on the quality and sophistication of their respective harnesses.

Evolution and Strategic Importance

As AI capabilities have advanced, harness design has become increasingly critical to product success. The harness landscape is dynamic and evolving: popular agents like Manus have undergone five complete re-architectures since March 2024, and even Anthropic continuously refines Claude Code’s agent harness as underlying models improve. This reflects a fundamental principle: as models become more capable, harnesses must be continually simplified, stripping away scaffolding and crutches that are no longer necessary.

The distinction between orchestration and harness is worth noting. Orchestration serves as the “brain” of an AI system, determining the overall workflow and decision logic, whilst the harness functions as the “hands and infrastructure,” executing those decisions and managing the technical details. Both are critical for complex AI agents, and improvements in either dimension can dramatically enhance real-world performance.

Related Theorist: Allen Newell and Cognitive Architecture

Allen Newell (1927-1992) was an American cognitive scientist and computer scientist whose theoretical framework profoundly influences contemporary harness design. Newell’s “Unified Theories of Cognition” (UTC), published in 1990, proposed that human cognition operates through integrated systems of perception, memory, and reasoning: three faculties that work in concert to enable intelligent behaviour. This theoretical foundation directly inspired the modular harness architectures now prevalent in AI research.

Newell’s career spanned the emergence of cognitive science as a discipline. Working initially at the RAND Corporation and later at Carnegie Mellon University, he collaborated with Herbert Simon to develop the “Physical Symbol System Hypothesis,” which posited that physical symbol systems (such as computers) could exhibit intelligent behaviour through the manipulation of symbols according to rules. This work earned Newell and Simon the Turing Award in 1975, recognising their foundational contributions to artificial intelligence.

Newell’s UTC represented his mature synthesis of decades of research into human problem-solving, learning, and memory. Rather than treating perception, memory, and reasoning as separate cognitive modules, Newell argued they must be understood as deeply integrated systems operating within a unified cognitive architecture. This insight proved prescient: modern AI harnesses implement precisely this integration, with perception modules processing environmental information, memory modules storing and retrieving relevant context, and reasoning modules synthesising these inputs into coherent action sequences.

The connection between Newell’s theoretical work and contemporary harness design is not merely coincidental. Researchers explicitly cite Newell’s framework when justifying modular harness architectures, recognising that his cognitive science insights provide a principled foundation for engineering AI systems. In this sense, Newell’s work from the 1980s and early 1990s anticipated the architectural requirements that AI engineers would discover empirically decades later when attempting to build capable, persistent, tool-using agents.

Quote: Clayton Christensen

“When I have my interview with God, our conversation will focus on the individuals whose self-esteem I was able to strengthen, whose faith I was able to reinforce, and whose discomfort I was able to assuage – a doer of good, regardless of what assignment I had. These are the metrics that matter in measuring my life.” – Clayton Christensen – Author

Clayton M. Christensen, the renowned Harvard Business School professor and author, distilled a lifetime of reflection into this poignant statement on true success. Drawn from his seminal book How Will You Measure Your Life?, published in 2012, the quote emerges from Christensen’s classroom exercise where he challenged students to confront life’s deepest questions: How can I ensure happiness in my career? How can I nurture enduring family relationships? And how can I avoid moral pitfalls that lead to downfall?1,2,3

Christensen’s Life and Intellectual Journey

Born in 1952 in Salt Lake City, Utah, Christensen rose from humble roots to become one of the most influential management thinkers of his generation. A devout member of The Church of Jesus Christ of Latter-day Saints, he infused his work with ethical considerations, often drawing parallels between business strategy and personal integrity. He earned a DBA from Harvard Business School in 1992, where he later became the Kim B. Clark Professor of Business Administration.3,7

Christensen’s breakthrough came with The Innovator’s Dilemma (1997), which introduced the theory of disruptive innovation – the idea that established companies often fail by focusing on high-margin customers while upstarts target overlooked markets, eventually upending incumbents. This concept, praised by Steve Jobs as deeply influential, transformed how leaders view competition and change.2 His ideas permeated industries, from technology to healthcare, earning him accolades like the Economist Innovation Award.

Tragedy struck in 2010 when Christensen was diagnosed with leukemia, prompting deeper introspection. Amid treatments, he expanded his final HBS class into How Will You Measure Your Life?, co-authored with James Allworth and Karen Dillon. The book applies rigorous business theories – like marginal cost analysis and resource allocation – to life’s choices, warning against ‘just this once’ compromises that erode integrity over time.3,7 Christensen passed away in 2020, but his emphasis on relationships over achievements endures.

Context of the Quote in ‘How Will You Measure Your Life?’

The quote anchors the book’s core thesis: conventional metrics like wealth or status pale against the impact on others’ lives. Christensen recounted posing these questions to ambitious MBAs, urging them to invest deliberately in relationships, as career peaks fade but personal bonds provide lasting happiness.1,4 He illustrated pitfalls through cases like Nick Leeson, whose minor ethical lapse at Barings Bank spiralled into fraud and ruin, underscoring that 100% adherence to principles is easier than 98%.3

In sections on career and relationships, Christensen advised balancing ambition with family time, using ‘jobs to be done’ theory: people ‘hire’ you for specific roles, like parents modelling values or partners providing support. At life’s end, he argued, success lies in friends who console you, children embodying your values, and a resilient marriage – not accolades.4,5

Leading Theorists on Life Priorities and Fulfilment

Christensen built on a lineage of thinkers prioritising inner metrics over external gains:

  • Viktor Frankl, Holocaust survivor and author of Man’s Search for Meaning (1946), posited that fulfilment stems from purpose and love, not pleasure – influencing Christensen’s focus on meaningful impact.3
  • Abraham Maslow’s hierarchy of needs culminates in self-actualisation, where self-esteem and relationships foster peak experiences, aligning with Christensen’s relational emphasis.4
  • Martin Seligman, father of positive psychology, advocated measuring life via PERMA (Positive Emotion, Engagement, Relationships, Meaning, Accomplishment), reinforcing that relationships yield the highest wellbeing.2
  • Daniel Kahneman, Nobel laureate, distinguished ‘experiencing self’ (daily highs) from ‘remembering self’ (enduring memories), cautioning that peak achievements matter less retrospectively than sustained bonds.3

These theorists converge on a truth Christensen championed: true leadership – in business or life – measures by upliftment of others, not personal ascent. His framework equips readers to audit priorities, ensuring actions align with eternal metrics of good.1,7

References

1. https://www.ricklindquist.com/notes/how-will-you-measure-your-life

2. https://www.porchlightbooks.com/products/how-will-you-measure-your-life-clayton-m-christensen-9780062102416

3. https://www.library.hbs.edu/working-knowledge/clayton-christensens-how-will-you-measure-your-life

4. https://www.youtube.com/watch?v=qCX6vAvglAI

5. https://chools.in/wp-content/uploads/2021/03/HOW-WILL-YOU-MEASURE-YOUR-LIFE.pdf

6. https://www.deseretbook.com/product/5083635.html

7. https://hbr.org/2010/07/how-will-you-measure-your-life

8. https://www.barnesandnoble.com/w/how-will-you-measure-your-life-clayton-m-christensen/1111558923

Term: Loss function

“A loss function, also known as a cost function, is a mathematical function that quantifies the difference between a model’s predicted output and the actual ‘ground truth’ value for a given input.” – Loss function

A loss function is a mathematical function that quantifies the discrepancy between a model’s predicted output and the actual ground truth value for a given input. Also referred to as an error function or cost function, it serves as the objective function that machine learning and artificial intelligence algorithms seek to minimise during training.

Core Purpose and Function

The loss function operates as a feedback mechanism within machine learning systems. When a model makes a prediction, the loss function calculates a numerical value representing the prediction error: the gap between what the model predicted and what actually occurred. This error quantification is fundamental to the learning process. During training, algorithms such as backpropagation use the gradient of the loss function with respect to the model’s parameters to iteratively adjust weights and biases, progressively reducing the loss and improving predictive accuracy.

The relationship between loss function and cost function warrants clarification: whilst these terms are often used interchangeably, a loss function technically applies to a single training example, whereas a cost function typically represents the average loss across an entire dataset or batch. Both, however, serve the same essential purpose of guiding model optimization.
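
To make the per-example versus dataset-level distinction concrete, and to show how the gradient of the cost drives parameter updates, here is a small NumPy sketch; the data, single-weight model, and learning rate are illustrative choices.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])    # inputs
y = np.array([2.1, 3.9, 6.2, 7.8])    # ground-truth targets (roughly y = 2x)
w = 0.0                                # single weight of the model y_hat = w * x

def loss(w, xi, yi):
    """Loss: squared error for a single training example."""
    return (w * xi - yi) ** 2

def cost(w):
    """Cost: average loss over the entire dataset."""
    return np.mean((w * x - y) ** 2)

lr = 0.01
for _ in range(200):                   # gradient descent: d(cost)/dw = mean(2 * x * (w*x - y))
    grad = np.mean(2 * x * (w * x - y))
    w -= lr * grad

print(f"learned w = {w:.3f}, cost = {cost(w):.4f}, loss on first example = {loss(w, x[0], y[0]):.4f}")
```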

Key Roles in Machine Learning

Loss functions fulfil several critical functions within machine learning systems:

  • Performance measurement: Loss functions provide a quantitative metric to evaluate how well a model’s predictions align with actual results, enabling objective assessment of model effectiveness.
  • Optimization guidance: By calculating prediction error, loss functions direct the learning algorithm to adjust parameters iteratively, creating a clear path toward improved predictions.
  • Bias-variance balance: Effective loss functions help balance model bias (oversimplification) and variance (overfitting), essential for generalisation to new, unseen data.
  • Training signal: The gradient of the loss function provides the signal by which learning algorithms update model weights during backpropagation.

Common Loss Function Types

Different machine learning tasks require different loss functions. For regression problems involving continuous numerical predictions, Mean Squared Error (MSE) and Mean Absolute Error (MAE) are widely employed. The MAE formula is:

\text{MAE} = \frac{1}{n} \sum_{i=1}^{n} \left| y_i - \hat{y}_i \right|

For classification tasks dealing with categorical data, Binary Cross-Entropy (also called Log Loss) is commonly used for binary classification problems. The formula is:

L(y, f(x)) = -[y \cdot \log(f(x)) + (1 - y) \cdot \log(1 - f(x))]

where y represents the true binary label (0 or 1) and f(x) is the predicted probability of the positive class.

For multi-class classification, Categorical Cross-Entropy extends this concept. Additionally, Hinge Loss is particularly useful in binary classification where clear separation between classes is desired:

L(y, f(x)) = \max(0, 1 - y \cdot f(x))

The Huber Loss function provides robustness to outliers by combining quadratic and linear components, switching between them based on a threshold parameter delta (δ).
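
Following the conventions used above (probabilities for cross-entropy, labels in {-1, +1} and raw scores for hinge loss), the listed losses can be written directly in NumPy. This is an illustrative sketch under those assumptions rather than a reference library implementation.

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean Absolute Error: average of |y_i - y_hat_i|."""
    return np.mean(np.abs(y_true - y_pred))

def binary_cross_entropy(y_true, p_pred, eps=1e-12):
    """Log loss for labels in {0, 1} and predicted probabilities of the positive class."""
    p = np.clip(p_pred, eps, 1 - eps)                # guard against log(0)
    return np.mean(-(y_true * np.log(p) + (1 - y_true) * np.log(1 - p)))

def hinge(y_true, f_pred):
    """Hinge loss for labels in {-1, +1} and raw classifier scores f(x)."""
    return np.mean(np.maximum(0.0, 1.0 - y_true * f_pred))

def huber(y_true, y_pred, delta=1.0):
    """Quadratic for small residuals, linear beyond delta; robust to outliers."""
    r = np.abs(y_true - y_pred)
    return np.mean(np.where(r <= delta, 0.5 * r**2, delta * (r - 0.5 * delta)))

print(mae(np.array([3.0, -0.5]), np.array([2.5, 0.0])))              # 0.5
print(binary_cross_entropy(np.array([1, 0]), np.array([0.9, 0.2])))  # ~0.164
print(hinge(np.array([1, -1]), np.array([0.8, 0.3])))                # 0.75
print(huber(np.array([0.0]), np.array([3.0])))                       # 2.5
```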

Related Strategy Theorist: Vladimir Vapnik

Vladimir Naumovich Vapnik (born 1936) stands as a foundational figure in the theoretical underpinnings of loss functions and machine learning optimisation. A Soviet and later American computer scientist, Vapnik’s work on Statistical Learning Theory and Support Vector Machines (SVMs) fundamentally shaped how the machine learning community understands loss functions and their role in model generalisation.

Vapnik’s most significant contribution to loss function theory came through his development of Support Vector Machines in the 1990s, where he introduced the concept of the hinge loss function, a loss function specifically designed to maximise the margin between classification boundaries. This represented a paradigm shift in thinking about loss functions: rather than simply minimising prediction error, Vapnik’s approach emphasised confidence and margin, ensuring models were not merely correct but confidently correct by a specified distance.

Born in the Soviet Union, Vapnik studied mathematics at Uzbek State University before joining the Institute of Control Sciences in Moscow, where he conducted groundbreaking research on learning theory. His theoretical framework, Vapnik-Chervonenkis (VC) theory, provided mathematical foundations for understanding how models generalise from training data to unseen examples, a concept intimately connected to loss function design and selection.

Vapnik’s insight that different loss functions encode different assumptions about what constitutes “good” model behaviour proved revolutionary. His work demonstrated that the choice of loss function directly influences not just training efficiency but the model’s ability to generalise. This principle remains central to modern machine learning: data scientists select loss functions strategically to encode domain knowledge and desired model properties, whether robustness to outliers, confidence in predictions, or balanced handling of imbalanced datasets.

Vapnik’s career spanned decades of innovation, including his later work on transductive learning and learning using privileged information. His theoretical contributions earned him numerous accolades and established him as one of the most influential figures in machine learning science. His emphasis on understanding the mathematical foundations of learning-particularly through the lens of loss functions and generalisation bounds-continues to guide contemporary research in deep learning and artificial intelligence.

Practical Significance

The selection of an appropriate loss function significantly impacts model performance and training efficiency. Data scientists carefully consider different loss functions to achieve specific objectives: reducing sensitivity to outliers, better handling noisy data, minimising overfitting, or improving performance on imbalanced datasets. The loss function thus represents not merely a technical component but a strategic choice that encodes domain expertise and learning objectives into the machine learning system itself.

Quote: Clayton Christensen

“The only metrics that will truly matter to my life are the individuals whom I have been able to help, one by one, to become better people.” – Clayton Christensen – Author

Clayton Christensen’s assertion that personal impact, measured through the individuals we help develop, represents the truest metric of a life well-lived stands as a profound counterpoint to the achievement-obsessed culture that dominates modern professional life. This reflection emerges not from abstract philosophy but from decades of observing how talented, ambitious people construct meaning, and from Christensen’s own wrestling with what constitutes genuine success.

The Context: A Harvard Professor’s Reckoning

Christensen, the Kim B. Clark Professor of Business Administration at Harvard Business School and author of the seminal work The Innovator’s Dilemma, developed this perspective through direct engagement with some of the world’s most driven individuals: MBA students at one of the planet’s most competitive institutions. Each year, he posed three deceptively simple questions to his students on the final day of class: How can I be sure I’ll be happy in my career? How can I be sure my relationships with family become an enduring source of happiness? How can I be sure I’ll stay out of jail?

These questions, which form the foundation of his 2012 book How Will You Measure Your Life? (co-authored with James Allworth and Karen Dillon), reveal Christensen’s conviction that conventional metrics of success-wealth, title, achievement-systematically mislead us about what actually generates lasting fulfilment. The book, published by Harper Business, synthesises decades of academic research with personal narrative to argue that well-tested theories from business and psychology can illuminate the path to a meaningful life.

The Danger of Marginal Thinking

Central to Christensen’s argument is his critique of how marginal-cost analysis, a cornerstone of business decision-making, infiltrates personal life with corrosive consequences. He illustrates this through the cautionary tale of Nick Leeson, the trader whose “just this once” decisions ultimately destroyed Barings Bank, a 233-year-old institution, and landed him in prison. Leeson’s descent began with a single small error, hidden in a little-scrutinised trading account. Each subsequent deception seemed a marginal step, yet the cumulative effect was catastrophic.

Christensen argues that we unconsciously apply this same logic to our personal and moral lives. A voice whispers: “I know most people shouldn’t do this, but in this particular extenuating circumstance, just this once, it’s okay.” The price appears alluringly low. Yet life, Christensen observes, presents an endless stream of extenuating circumstances. Once we justify crossing a boundary once, nothing prevents us from crossing it again. The boundary itself-our personal moral line-loses its power.

This insight directly connects to his central claim about measuring life through human development. If we measure success by quarterly results, promotions, or wealth accumulation, we unconsciously permit ourselves small moral compromises that seem justified by marginal analysis. But if we measure success by the individuals we’ve genuinely helped become better people, our decision-making framework shifts entirely. Helping someone develop requires consistency, integrity, and long-term commitment-qualities incompatible with marginal thinking.

The Theoretical Foundations

Christensen’s perspective draws on several streams of organisational and psychological theory. His work on innovation theory-developed through The Innovator’s Dilemma, which Steve Jobs described as “deeply influencing” Apple’s strategy-emphasises how organisations often fail by optimising for present circumstances rather than building capabilities for future challenges. This same principle applies to personal development: we often optimise for immediate achievement rather than building the relational and moral capabilities that sustain meaning across decades.

The book also engages with motivation theory, particularly the distinction between intrinsic and extrinsic motivators. Research in psychology, notably the work of Edward Deci and Richard Ryan on self-determination theory, demonstrates that extrinsic rewards (money, status, recognition) provide temporary satisfaction but rarely generate enduring happiness. Intrinsic motivators-autonomy, mastery, and purpose-create deeper engagement and fulfilment. Christensen argues that helping others develop satisfies all three intrinsic motivators: you exercise agency in how you mentor, you develop mastery in your field, and you connect to a purpose beyond yourself.

Additionally, Christensen draws on research in positive psychology and life satisfaction studies. Longitudinal research, including the Harvard Study of Adult Development (which tracked individuals across decades), consistently demonstrates that the quality of relationships-not career achievement or wealth-predicts life satisfaction and longevity. Christensen synthesises this research with business theory to argue that the mechanism through which relationships generate happiness is precisely through the mutual development of the individuals involved.

The Concept of Being “Hired”

A distinctive element of Christensen’s framework is his concept of being “hired” to do a job in someone’s life. Rather than viewing relationships as passive connections, he suggests we should understand them as ongoing engagements where others, implicitly or explicitly, hire us to fulfil specific roles: mentor, example, confidant, supporter. This reframing transforms how we approach relationships. If your child has hired you to be an example of integrity, your daily choices take on different weight. If your colleague has hired you to help them develop their capabilities, your mentoring becomes a central measure of your professional contribution.

This concept echoes the work of Clayton Alderfer and other organisational psychologists who emphasise the importance of role clarity and psychological contracts in generating satisfaction. But Christensen extends it beyond the workplace into all human relationships, suggesting that clarity about what role we’re playing-and commitment to excellence in that role-generates both happiness for ourselves and genuine development for others.

The Paradox of Achievement

Christensen acknowledges a subtle paradox: those with strong achievement drives (precisely the individuals most likely to attend Harvard Business School) face particular risk. Their ambition, which drives professional success, can simultaneously blind them to what generates lasting happiness. He recounts a formative moment from his own youth: a championship basketball game, in which his team needed him, fell on a Sunday and conflicted with his personal commitment never to play on the Sabbath. He chose to keep his commitment and sat out. The team won anyway without him. Yet he later recognised this decision as among the most important of his life, not because of the game’s outcome, but because it established a boundary he would never have to renegotiate: holding to a principle 100% of the time proved easier than holding to it 98% of the time.

This reflects research on what psychologists call the “arrival fallacy”-the discovery that achieving long-sought goals often fails to generate the anticipated happiness. Christensen argues this occurs because achievement-focused individuals have internalised the wrong metric. They measure success by what they accomplish, when they should measure it by who they’ve helped become.

Implications for Leadership and Mentorship

For leaders and managers, Christensen’s framework suggests a radical reorientation of purpose. Rather than viewing your role primarily through the lens of organisational performance, financial results, or strategic objectives, you might ask: which individuals have I genuinely helped develop? Have I created conditions where they’ve grown in capability, confidence, and character? This doesn’t negate the importance of business results-Christensen emphasises that career provides stability and resources to give to others. But it reorders priorities.

This perspective aligns with contemporary research on authentic leadership and servant leadership, which emphasises that leaders generate the greatest impact, both organisational and personal, when they prioritise the development of those they lead. Research by scholars like James Kouzes and Barry Posner demonstrates that leaders remembered as transformational are those who invested in developing others, not merely those who achieved impressive financial results.

The Long View

Christensen’s metric requires patience and a long temporal horizon. You won’t know if you’ve raised a good son or daughter until twenty years after the bulk of your parenting work. You won’t know if you have true friends until they call to console you during genuine hardship. You won’t know if you’ve built an enduring marriage until you’ve navigated the challenges that cause many relationships to fracture. This stands in sharp contrast to the quarterly earnings reports, annual performance reviews, and immediate feedback loops that dominate modern professional life.

Yet this long view, Christensen argues, is precisely what liberates us from marginal thinking. When you recognise that the true measure of your life will be assessed across decades, the temptation to compromise your principles “just this once” loses its power. The small decision to help someone develop, made consistently over years, compounds into a life of genuine impact. Conversely, the small decision to prioritise marginal professional gain over relational investment, repeated across years, compounds into a life of hollow achievement.

Christensen’s insight ultimately suggests that the question “How will you measure your life?” is not merely philosophical but profoundly practical. It shapes daily decisions about where you invest your time, energy, and integrity. And those daily decisions, accumulated across a lifetime, determine not just your happiness but the legacy you leave: the individuals who became better people because you were present in their lives.

Term: AI scaffolding

“Scaffolding refers to the structured architecture and instructional techniques built around an AI model to enhance its reasoning, reliability, and capability.” – AI scaffolding

AI scaffolding is the structured architecture and tooling built around a large language model (LLM) to enable it to perform complex, goal-driven tasks with enhanced reasoning, reliability, and capability.1 Rather than relying on a single prompt or query, scaffolding places an LLM within a control loop that includes memory systems, external tools, decision logic, and feedback mechanisms, allowing the model to observe its environment, call APIs or code, update its context, and iterate until goals are achieved.1

In essence, scaffolding bridges the critical gap between the capabilities of base models and production-ready systems. A standalone LLM lacks the architectural support needed to reliably complete multi-step tasks, interface with business systems, or adapt to domain-specific requirements.1 Scaffolding augments the model’s bare capabilities by providing access to tools, domain data, and structured workflows that guide and extend its behaviour.
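
The control loop described above can be sketched as follows. This is a hedged illustration: the `llm`, `tools`, and `passes_policy` arguments stand in for whatever model client, tool registry, and guardrail policy a real deployment would supply, and the plan/act/evaluate prompts are simplified.

```python
def scaffolded_run(goal: str, llm, tools: dict, passes_policy, max_iters: int = 5) -> str:
    """Plan, act (optionally via a tool), observe, self-evaluate; iterate until the goal is met."""
    context = [f"Goal: {goal}"]
    for _ in range(max_iters):
        plan = llm("Plan the next step.\n" + "\n".join(context))
        context.append(f"Plan: {plan}")

        draft = llm("Carry out the plan. Reply TOOL:<name>:<argument> if a tool is needed.\n" + "\n".join(context))
        if draft.startswith("TOOL:"):                        # decision logic: the scaffold executes tool calls
            _, name, arg = draft.split(":", 2)
            draft = tools[name](arg)
        context.append(f"Observation: {draft}")               # memory: context is updated each iteration

        verdict = llm("Does the latest observation satisfy the goal? Answer DONE or CONTINUE.\n" + "\n".join(context))
        if verdict.strip().upper().startswith("DONE"):
            answer = llm("Write the final answer.\n" + "\n".join(context))
            return answer if passes_policy(answer) else "Blocked by guardrail; escalated for human review."
    return "Iteration budget exhausted; escalated for human review."
```

Keeping the control flow outside the model call is what makes each step loggable, testable, and subject to policy checks.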

Core Components of AI Scaffolding

Effective scaffolding operates through several interconnected layers:

  • Planning and reasoning: Agents operate through defined reasoning and evaluation steps. Rather than acting immediately, scaffolding may prompt the model to plan or reflect before taking action, and to self-critique its outputs. Research demonstrates that allowing agents to plan and self-evaluate significantly improves problem-solving accuracy compared to action-only approaches.1
  • Tool integration: The LLM is wrapped in code that interprets its outputs as tool calls. When the model determines it needs external resources-such as a calculator, database query, API call, or web search-the scaffold safely executes that tool and returns results to the model for the next reasoning step.1
  • Memory systems: Scaffolding includes mechanisms for the agent to maintain and update context across multiple interactions, enabling it to build upon previous observations and decisions.1
  • Feedback and control: Robust agents include feedback loops and safeguards such as self-evaluation steps, human-in-the-loop checks, and policy enforcement. In enterprise settings, scaffolding adds logging, testing suites, and guardrails like content filters to ensure outputs remain controlled and auditable.1

Types of AI Scaffolding Techniques

AI scaffolding encompasses several distinct approaches, which can be combined to enhance model performance:

  • Tool access scaffolding: Granting models access to external tools such as code editors, web browsers, or specialised software significantly expands their problem-solving capabilities. For example, LLMs initially trained on finite datasets with fixed cut-off dates became substantially more capable when granted internet access.2
  • Agent loop scaffolding: This technique automates multi-step task completion by placing AI models in a loop with access to their own observations and actions, enabling them to self-generate each prompt needed to finish complex tasks. Systems like AutoGPT exemplify this approach.2
  • Multi-agent scaffolding: Multiple AI models collaborate on complex problems through dialogue, division of labour, or critique mechanisms. Research shows that extended networks of up to a thousand agents can coordinate to outperform individual models, with capability scaling predictably as networks grow larger.2
  • Procedural scaffolding: This approach builds a structured process in which the model generates outputs, checks them, and revises them iteratively, enforcing process discipline rather than relying on raw prompts alone.3
  • Semantic scaffolding: Using ontological frameworks and domain rules to validate outputs against formal relations, preventing deeper misunderstandings and moving AI closer to auditable, trustworthy reasoning.3

Practical Applications and Enterprise Use

Scaffolding is essential for operationalising LLMs in enterprise environments. Whether an agent is expected to generate structured outputs, interact with APIs, or solve problems through planning and iteration, its effectiveness depends on the scaffold that guides and extends its behaviour.1 In sectors such as customer service, risk analysis, logistics, healthcare, and finance, scaffolding enables AI systems to maintain reliability and auditability in high-stakes contexts.3

A key advantage of scaffolding is that it improves accuracy whilst making AI reasoning more transparent. When a system reaches a conclusion, leaders can trace it back to formal relations in an ontology rather than relying solely on statistical inference, making the system trustworthy for critical applications.3
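
As a toy illustration of that traceability, the sketch below accepts a model’s structured claims only when each one matches a relation in a small domain ontology, so every accepted or rejected conclusion points to an explicit rule; the triple format and ontology contents are invented for the example.

```python
# Tiny domain ontology: the (subject, relation, object) triples the system treats as formally valid.
ONTOLOGY = {
    ("aspirin", "treats", "headache"),
    ("aspirin", "interacts_with", "warfarin"),
}

def validate_claims(claims):
    """Check each claimed triple against the ontology and report why it was accepted or rejected."""
    return [
        (triple, "supported by ontology" if triple in ONTOLOGY else "rejected: no formal relation")
        for triple in claims
    ]

model_output = [("aspirin", "treats", "headache"), ("aspirin", "treats", "warfarin")]
for triple, verdict in validate_claims(model_output):
    print(triple, "->", verdict)
```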

Scaffolding versus Model Scale

An important principle in modern AI development is that scaffolding often matters more than raw model scale. The future of AI, whether in homeland security, finance, healthcare, or other domains, will be defined not by the size of models but by the quality of the architectural frameworks surrounding them.3 Hybrid architectures that embed statistical models within well-designed scaffolded systems deliver superior performance and reliability compared to simply scaling larger models without structural support.

Key Theorist: Stuart Russell and the Alignment Research Tradition

The conceptual foundations of AI scaffolding are deeply rooted in the work of Stuart Russell, a leading figure in artificial intelligence safety and alignment research. Russell, who holds the Smith-Zadeh Chair in Engineering at the University of California, Berkeley, and co-authored the seminal textbook Artificial Intelligence: A Modern Approach, has been instrumental in developing frameworks for ensuring AI systems remain controllable and aligned with human values as they become more capable.

Russell’s contributions to scaffolding theory emerge from his broader research agenda on AI safety and the control problem. In the early 2000s, as machine learning systems began to demonstrate increasing autonomy, Russell recognised that simply building more powerful models without corresponding advances in control architecture would create dangerous misalignment between AI capabilities and human oversight. His work emphasised that the architecture surrounding an AI system-not merely the model itself-determines whether that system can be safely deployed in high-stakes environments.

Closely connected to Russell’s safety agenda is iterated amplification, a form of scaffolding developed by alignment researchers at OpenAI and other institutions, which uses multi-AI collaborations to solve increasingly complex problems whilst maintaining human oversight at each stage. In this approach, humans decompose complex tasks into simpler subtasks that AI systems solve, then humans review and synthesise these solutions. Over time, humans operate at progressively higher levels of abstraction whilst AI systems assume responsibility for more of the process. This iterative cycle improves model capabilities whilst preserving human auditability and control, a principle directly aligned with scaffolding’s core objective.2

Russell’s broader philosophical stance is that AI safety and capability enhancement are not opposing forces but complementary objectives. Scaffolding embodies this principle: by building structured architectures around models, developers simultaneously enhance capability (through tool access, planning, and feedback loops) and improve safety (through auditability, human-in-the-loop checks, and formal validation against domain rules). Russell’s insistence that AI systems must remain interpretable and auditable has directly influenced how modern scaffolding frameworks incorporate semantic validation, ontological constraints, and transparent reasoning pathways.

Throughout his career, Russell has advocated for what he terms “beneficial AI”-systems designed from inception to be controllable, transparent, and aligned with human values. Scaffolding represents a practical instantiation of this vision. Rather than hoping that larger models will somehow become more trustworthy, Russell’s framework suggests that intentional architectural design-the very essence of scaffolding-is the path to AI systems that are simultaneously more capable and more reliable.

Russell’s influence extends beyond theoretical work. His research group at Berkeley has contributed to developing practical frameworks for AI governance, model evaluation, and safety testing that directly inform how organisations implement scaffolding in production environments. His emphasis on formal methods, constraint satisfaction, and human-AI collaboration has shaped industry standards for building enterprise-grade AI systems.

References

1. https://zbrain.ai/agent-scaffolding/

2. https://blog.bluedot.org/p/what-is-ai-scaffolding

3. https://www.cio.com/article/4076515/beyond-ai-prompts-why-scaffolding-matters-more-than-scale.html

4. https://www.godofprompt.ai/blog/what-is-prompt-scaffolding

5. https://kpcrossacademy.ua.edu/scaffolding-ai-as-a-learning-collaborator-integrating-artificial-intelligence-in-college-classes/

6. https://www.tandfonline.com/doi/full/10.1080/10494820.2025.2470319

"Scaffolding refers to the structured architecture and instructional techniques built around an AI model to enhance its reasoning, reliability, and capability." - Term: AI scaffolding

read more
Quote: Clayton Christensen

“I had thought the destination was what was important, but it turned out it was the journey.” – Clayton Christensen – Author

Clayton M. Christensen, the renowned Harvard Business School professor and author, encapsulated a profound shift in perspective with this reflection from his seminal work How Will You Measure Your Life? Published in 2012, the book draws on his business theories to offer timeless guidance on personal fulfilment, urging readers to prioritise meaningful processes over mere endpoints in life and career.1,2

Who Was Clayton Christensen?

Born in 1952 in Salt Lake City, Utah, Christensen rose from humble beginnings to become one of the most influential thinkers in modern business. A devout member of The Church of Jesus Christ of Latter-day Saints, he integrated his faith with rigorous scholarship. He earned a BA from Brigham Young University, an MPhil from Oxford as a Rhodes Scholar, and both an MBA and DBA from Harvard Business School.

Christensen’s breakthrough came with The Innovator’s Dilemma (1997), introducing disruptive innovation – the theory that established companies often fail by focusing on high-end customers, allowing nimble entrants to dominate lower markets and eventually upscale.3 This framework reshaped industries like technology and healthcare. He authored over a dozen books, consulted for global firms, and taught at Harvard for decades until his death in January 2020 from complications of leukemia.

Despite professional acclaim, Christensen’s later years emphasised personal integrity. He famously resisted ‘just this once’ compromises, a principle he credited for his life’s direction: ‘Resisting the temptation whose logic was ‘In this extenuating circumstance, just this once, it’s OK’ has proven to be one of the most important decisions of my life.’3,6

Context of the Quote in How Will You Measure Your Life?

The book stems from Christensen’s 2010 Harvard MBA commencement address, expanded into chapters blending business strategy with life lessons. He warns against common traps: chasing rewards that scream loudest, neglecting family for career, or measuring success by wealth alone. Instead, he advocates allocating resources – time, energy, talent – towards aspirations.4,5,6

This quote emerges in discussions of motivation and growth. Christensen reflects that true satisfaction arises not from arriving at goals, but from the daily pursuit of meaningful work, learning, and relationships. He writes: ‘In order to really find happiness, you need to continue looking for opportunities that you believe are meaningful, in which you will be able to learn new things, to succeed, and be given more and more responsibility to shoulder.’3,4 The journey, rich with motivators like progress and teamwork, forges character and joy.

Leading Theorists on Life Priorities and the Journey Metaphor

Christensen’s insight echoes ancient and modern thinkers who elevate process over outcome.

  • Aristotle (384-322 BC): In Nicomachean Ethics, he defined eudaimonia (flourishing) as a life of virtuous activity, not transient pleasures. Habits formed in daily practice, not endpoints, cultivate excellence.
  • Lao Tzu (6th century BC): The Tao Te Ching states, ‘A journey of a thousand miles begins with a single step.’ Taoist philosophy prizes harmonious flow (wu wei) over forced achievement.
  • Viktor Frankl (1905-1997): Holocaust survivor and Man’s Search for Meaning author argued meaning emerges through attitude amid suffering. Logotherapy posits purpose in every moment’s choices, prioritising inner journey.
  • Mihaly Csikszentmihalyi (1934-2021): Pioneer of flow theory in Flow: The Psychology of Optimal Experience (1990). Peak experiences occur in immersive tasks matching skill and challenge – the essence of valuing journey.
  • Daniel Kahneman (1934-2024): Nobel-winning psychologist distinguished ‘experiencing self’ (moment-to-moment) from ‘remembering self’ (end results). In Thinking, Fast and Slow, he showed people often overvalue peaks and endpoints, neglecting the journey’s sum.

These theorists converge on Christensen’s message: life’s value lies in intentional, principle-driven paths. As he noted, ‘The only metrics that will truly matter to my life are the individuals whom I have been able to help, one by one, to become better people.’3,5

Enduring Relevance

In an era of hustle culture and metric-driven success, Christensen’s words challenge us to recalibrate. His life exemplified this: battling illness while mentoring students, he measured legacy by impact, not accolades. This quote invites reflection – are we journeying with purpose, or merely racing to destinations that may disappoint?

References

1. https://quotefancy.com/quote/1849082/Clayton-M-Christensen-I-had-thought-the-destination-was-what-was-important-but-it-turned

2. https://www.goodreads.com/quotes/6847238-i-had-thought-the-destination-was-what-was-important-but

3. https://www.toolshero.com/toolsheroes/clayton-christensen/

4. https://www.club255.com/p/book-byte-98-how-will-you-measure

5. https://rochemamabolo.wordpress.com/2017/11/26/book-review-how-will-you-measure-your-life-by-clayton-christensen/

6. https://www.goodreads.com/author/quotes/1792.Clayton_M_Christensen

7. https://www.claudioperfetti.com/all/how-will-you-measure-your-life/

8. https://quirky-quests.com/ls-clayton-christensen/

Quote: Brian Moynihan – Bank of America CEO

“You can see upwards of $6 trillion in deposits flow off the liabilities of a banking system… into the stablecoin environment… they’re either not going to be able to loan or they’re going to have to get wholesale funding and that wholesale funding will come at a cost that will increase the cost of borrowing.” – Brian Moynihan – Bank of America CEO

In the rapidly evolving landscape of digital finance, Brian Moynihan, CEO of Bank of America, issued a stark warning during the bank’s Q4 2025 earnings call on 15 January 2026. He highlighted the potential for up to $6 trillion in deposits – roughly 30% to 35% of total US commercial bank deposits – to shift from traditional banking liabilities into the stablecoin ecosystem if regulators permit stablecoin issuers to pay interest.1,2

Context of the Quote

Moynihan’s comments arose amid intense legislative debates over stablecoin regulation in the United States. With US commercial bank deposits standing at $18.61 trillion in January 2026 and the stablecoin market capitalisation at just $315 billion, the scale of this projected outflow underscores a profound threat to the fractional reserve banking model.1 Banks rely on low-cost customer deposits to fund loans to households and businesses, especially small and mid-sized enterprises. A mass migration to interest-bearing stablecoins would cripple lending capacity or force reliance on pricier wholesale funding, thereby elevating borrowing costs across the economy.1,2

This concern echoes broader industry pushback. Executives from JPMorgan and Bank of America have criticised proposals allowing stablecoin yields or rewards, viewing them as direct competition. A US Senate bill aimed at formalising cryptocurrency regulation has stalled amid lobbying from the American Bankers Association, which seeks to prohibit interest on stablecoins. Meanwhile, the GENIUS Act, signed by President Donald Trump in July 2025, marked the first explicit crypto legislation, spurring financial institutions to enter the space while intensifying turf wars as crypto firms pursue banking charters.3

Who is Brian Moynihan?

Brian Moynihan has led Bank of America since January 2010, steering the institution through post-financial crisis recovery, digital transformation, and now the crypto challenge. A Notre Dame Law School graduate with a prior stint at FleetBoston Financial, Moynihan expanded BofA’s wealth management and consumer banking arms, growing assets to over $3 trillion. His tenure has emphasised regulatory compliance and innovation, yet he remains vocal on threats like stablecoins that could disrupt deposit stability.1,2

Backstory on Leading Theorists in Stablecoins and Banking Disruption

The stablecoin phenomenon builds on foundational ideas from monetary theorists and crypto pioneers who envisioned programmable money challenging centralised banking.

  • Satoshi Nakamoto: The pseudonymous creator of Bitcoin in 2008 laid the groundwork by introducing decentralised digital currency, free from central bank control. Bitcoin’s volatility spurred stablecoins as a bridge to everyday use.1
  • Vitalik Buterin: Ethereum’s co-founder (2015) enabled smart contracts, powering algorithmic stablecoins like DAI. Buterin’s vision of decentralised finance (DeFi) posits stablecoins as superior stores of value with yields from on-chain protocols, bypassing banks.3
  • Milton Friedman: The Nobel laureate’s 1969 proposal for a computer-based money system with fixed supply prefigured stablecoins. Friedman argued such systems could curb inflation better than fiat, influencing modern dollar-pegged tokens like USDT and USDC.1
  • Hayek and Free Banking Theorists: Friedrich Hayek’s Denationalisation of Money (1976) advocated competing private currencies, a concept realised in stablecoins issued by firms like Tether and Circle. This challenges the state’s monopoly on money issuance.3
  • Crypto Economists like Jeremy Allaire (Circle CEO): Allaire champions stablecoins as ‘internet-native money’ for payments and remittances, arguing they offer efficiency banks cannot match. His firm issues USDC, now integral to global transfers.1,3

These thinkers collectively argue that stablecoins democratise finance, offering transparency, yield, and borderless access. Yet banking leaders like Moynihan counter that without safeguards, this shift risks systemic instability by eroding the deposit base that fuels economic growth.2

Implications for Finance

Moynihan’s forecast spotlights a pivotal regulatory crossroads. Permitting interest on stablecoins could accelerate adoption, potentially reshaping payments, lending, and funding markets. Banks lobby for restrictions to preserve their model, while crypto advocates push for innovation. As frameworks like the GENIUS Act evolve, the battle over $6 trillion in deposits will define the interplay between traditional finance and blockchain.1,3

References

1. https://www.binance.com/sv/square/post/35227018044185

2. https://www.idnfinancials.com/news/60480/bofa-ceo-stablecoins-pay-interest-us6tn-in-bank-deposits-at-risk

3. https://www.emarketer.com/content/stablecoin-rules-jpmorgan-bofa-interest

"You can see upwards of $6 trillion in deposits flow off the liabilities of a banking system... into the stablecoin environment... they're either not going to be able to loan or they're going to have to get wholesale funding and that wholesale funding will come at a cost that will increase the cost of borrowing." - Quote: Brian Moynihan - Bank of America CEO

read more
Term: Right to Win

“The ‘Right to Win’ (RTW) is a company’s unique, sustainable ability to succeed in a specific market by leveraging superior capabilities, products, and a differentiated ‘way to play’ that outperform competitors, giving them a better-than-even chance of creating value and growth.” – Right to Win

A company’s right to win is the recognition that it is better prepared than its competitors to attract and keep the customers it cares about, grounded in a sustainable competitive advantage that extends beyond short-term market positioning.1 This concept represents more than simply having superior resources; it is the ability to engage in any competitive market with a better-than-even chance of success consistently over time.3 The right to win emerges when a company aligns three interlocking strategic elements: a differentiated way to play, a robust capabilities system, and product and service fit that work together coherently.1

The Three Pillars of Right to Win

The foundation of a right to win rests on understanding what your company can do better than anyone else. Rather than pursuing growth indiscriminately across multiple areas, successful organisations focus on identifying three to six differentiating capabilities: the interconnected people, knowledge, systems, tools and processes that create distinctive value for customers.1,5 These capabilities differ fundamentally from assets; whilst assets such as facilities, machinery, and supplier connections can be replicated by competitors, capabilities cannot.1 The critical question becomes: “What do we do well to deliver value?”1

A well-developed way to play represents a chosen position in a market, grounded in understanding your capabilities and where the market is heading.1 This positioning must fulfil four essential criteria: there must be a market that values your approach; it must be differentiated from competitors’ ways to play; it must remain relevant given expected industry changes; and it must be supported by your capabilities system, making it feasible.1 Finally, the product and service fit ensures that offerings are directly aligned with the capabilities system, delivering superior returns to shareholders.1

Coherence acts as the binding agent across these three elements.1 Achieving alignment with one or even two elements proves insufficient; only when all three synchronise with one another and with the right market conditions can a company truly claim a sustainable right to win.1

Building and Sustaining Competitive Advantage

The right to win is not inherited; it is earned through strategic alignment and disciplined execution.2 This requires an in-depth understanding of the competitive landscape, customer expectations, and team capabilities.2 A strategy that leverages unique assets or insights creates a competitive moat, making it challenging for competitors to catch up, though execution remains where many organisations falter.2

Innovation and adaptability prove essential to sustaining this advantage.2 Organisations that continuously evolve, anticipate market shifts, and adapt their goods and services accordingly are more likely to maintain their competitive edge.2 This does not mean chasing every new trend but rather maintaining a keen sense of which innovations align with core competencies and long-term vision.2 Building a culture of excellence (attracting and nurturing top talent, fostering continuous improvement, and encouraging innovation) represents an often-overlooked yet significant asset in securing the right to win.2

Strategic Applications and Growth Pathways

Right-to-win strategies fall into four categories: customer-driven, capability-driven, value-chain-based, and those building on disruptive business models or technologies.4 The most utilised approach involves fulfilling unmet needs for existing customers that the core business does not currently address.4 However, the strategy delivering the biggest revenue gains involves leveraging core business capabilities-such as patents, technological know-how, or brand equity-to expand into adjacent and breakout businesses.4 Companies successfully utilising two or more right-to-win strategies to move into adjacent markets delivered 12 percentage points higher excess total shareholder return versus their subindustry peers.4
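
As a worked illustration of the excess total shareholder return (TSR) measure cited above, the sketch below uses entirely hypothetical figures and assumes the benchmark is a simple median of subindustry peers; the McKinsey study’s own methodology may differ.

```python
# Illustrative excess TSR arithmetic (all figures are hypothetical).
# Excess TSR = a company's TSR minus a benchmark for its subindustry peers.
import statistics

company_tsr = 0.21                               # 21% annualised TSR (assumed)
peer_tsrs = [0.08, 0.11, 0.09, 0.13, 0.10]       # subindustry peers (assumed)

peer_benchmark = statistics.median(peer_tsrs)    # 0.10
excess_tsr = company_tsr - peer_benchmark

print(f"Excess TSR: {excess_tsr:.0%}")           # 11%, i.e. 11 percentage points
```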

Assessing Your Right to Win

Organisations can evaluate their right to win through systematic analysis. This involves identifying the two most relevant competitors, determining three to six differentiating capabilities required for success, listing key assets and table-stakes activities, and rating performance across these dimensions.5 Differentiating capabilities should be specific and interconnected rather than merely listing functions or organisational units.5 For example, one of Apple’s differentiating capabilities is “innovation around customer interfaces to create better communications and entertainment experiences.”5 Assets, whilst less sustainable than capabilities, represent criteria important to the market and warrant inclusion in competitive assessment.5
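
A minimal sketch of the rating step described above is shown below. The capability names, competitor labels, and scores are hypothetical placeholders; in practice the exercise is run qualitatively in a workshop, and any numeric scoring scheme here is an assumption rather than part of the published method.

```python
# Hypothetical right-to-win rating matrix: score your firm and the two most
# relevant competitors on each differentiating capability (1 = weak, 5 = strong).

ratings = {
    "Customer-experience design": {"Us": 4, "Competitor A": 3, "Competitor B": 2},
    "Rapid product iteration":    {"Us": 5, "Competitor A": 4, "Competitor B": 3},
    "Data-driven pricing":        {"Us": 3, "Competitor A": 4, "Competitor B": 2},
    "Integrated supply planning": {"Us": 4, "Competitor A": 2, "Competitor B": 4},
}

totals = {}
for capability, scores in ratings.items():
    for player, score in scores.items():
        totals[player] = totals.get(player, 0) + score

# Rank players by total score out of the maximum possible.
for player, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{player}: {total} / {len(ratings) * 5}")
```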

Related Theorist: C.K. Prahalad and the Core Competence Framework

The concept of right to win draws significantly from the work of C.K. Prahalad (1941-2010), an influential Indian-American business theorist and consultant who fundamentally shaped modern strategic thinking through his development of the core competence framework. Prahalad’s seminal 1990 Harvard Business Review article, co-authored with Gary Hamel, “The Core Competence of the Corporation,” introduced the revolutionary idea that organisations should identify and leverage their unique, hard-to-imitate capabilities rather than pursuing diversification across unrelated business areas.1

Born in Coimbatore, India, Prahalad earned his undergraduate degree in physics before pursuing business education. He spent much of his career at the University of Michigan’s Ross School of Business, where he conducted extensive research on strategic management and organisational capability. His work challenged the prevailing strategic orthodoxy of the 1980s, which emphasised portfolio management and strategic business units. Instead, Prahalad argued that companies should view themselves as portfolios of core competencies-the collective learning and coordination of diverse production skills and technologies-rather than collections of discrete business units.

Prahalad’s framework directly underpins the right to win concept. He demonstrated that sustainable competitive advantage emerges not from owning assets but from developing distinctive capabilities that competitors cannot easily replicate. His research showed that companies like Sony, Honda, and 3M succeeded not because they possessed superior resources but because they had cultivated unique organisational capabilities in areas such as miniaturisation, engine design, or innovation processes. These capabilities enabled them to enter adjacent markets and create new products that competitors struggled to match.

Beyond core competence theory, Prahalad later developed the concept of the “bottom of the pyramid,” exploring how companies could create right-to-win strategies by serving low-income consumers in emerging markets through innovation and capability leverage. His work emphasised that strategic advantage comes from understanding what your organisation does distinctively well and then systematically building, protecting, and extending those capabilities across markets and customer segments.

Prahalad’s intellectual legacy remains central to contemporary strategic management. His insistence that capabilities-not assets-form the foundation of competitive advantage directly informs how modern organisations approach the right to win. His framework provides the theoretical scaffolding that explains why companies with seemingly fewer resources can outperform better-capitalised competitors: they possess superior, integrated capabilities that create distinctive value. This insight transformed strategic planning from a financial exercise into a capabilities-centred discipline, making Prahalad’s work indispensable to understanding the right to win in contemporary business strategy.

References

1. https://www.pwc.com/mt/en/publications/other/does-your-strategy-give-you-the-right-to-win.html

2. https://multifamilycollective.com/2024/02/strategy-how-do-we-define-our-right-to-win/

3. https://intrico.io/interview-best-practices/right-to-win

4. https://www.mckinsey.com/capabilities/growth-marketing-and-sales/our-insights/next-in-growth/adjacent-business-growth-making-the-most-of-your-right-to-win

5. https://www.strategyand.pwc.com/gx/en/unique-solutions/capabilities-driven-strategy/right-to-win-exercise.html

6. https://steemit.com/quality/@hefziba/the-right-to-play-and-the-right-to-win-and-how-to-design-quality-into-a-product

"The 'Right to Win' (RTW) is a company's unique, sustainable ability to succeed in a specific market by leveraging superior capabilities, products, and a differentiated 'way to play' that outperform competitors, giving them a better-than-even chance of creating value and growth." - Term: Right to Win

read more
Quote: Clayton Christensen

Quote: Clayton Christensen

“What’s important is to get out there and try stuff until you learn where your talents, interests, and priorities begin to pay off. When you find out what really works for you, then it’s time to flip from an emergent strategy to a deliberate one.” – Clayton Christensen – Author

This profound advice from Clayton Christensen encapsulates a timeless principle for personal and professional growth: the value of experimentation followed by focused commitment. Drawn from his bestselling book How Will You Measure Your Life?, the quote urges individuals to embrace trial and error in discovering their true strengths before committing to a structured path. Christensen, a renowned Harvard Business School professor, applies business strategy concepts to life’s big questions, advocating for an initial phase of exploration – termed ’emergent strategy’ – before shifting to a ‘deliberate strategy’ once clarity emerges.1,7

Who Was Clayton Christensen?

Clayton Magleby Christensen (1952-2020) was an American academic, author, and business consultant whose ideas reshaped management theory. Born in Salt Lake City, Utah, he earned a bachelor’s degree in economics from Brigham Young University, an MPhil in econometrics from Oxford as a Rhodes Scholar, and an MBA and DBA from Harvard Business School. Christensen joined the Harvard faculty in 1992, where he taught for nearly three decades, influencing generations of leaders.1,5

His seminal work, The Innovator’s Dilemma (1997), introduced the theory of disruptive innovation, explaining how established companies fail by focusing on sustaining innovations for current customers while overlooking simpler, cheaper alternatives that disrupt markets from below. This concept has been applied to industries from technology to healthcare, predicting successes like Netflix over Blockbuster. Christensen authored over a dozen books, including The Innovator’s Solution and How Will You Measure Your Life? (2012, co-authored with James Allworth and Karen Dillon), which blends business insights with personal reflections drawn from his Mormon faith, family life, and battle with leukemia.5,6,7

In How Will You Measure Your Life?, Christensen draws parallels between corporate pitfalls and personal missteps, warning against prioritising short-term gains over long-term fulfilment. The quoted passage appears in a chapter on career strategy, using emergent and deliberate strategies as metaphors for navigating life’s uncertainties.7

Context of the Quote: Emergent vs Deliberate Strategy

Christensen distinguishes two strategic approaches, rooted in his research on successful companies. A deliberate strategy stems from conscious planning, data analysis, and long-term goals – ideal for stable, mature organisations like Procter & Gamble, which refines products based on market data.1 It requires alignment across teams, where every member understands their role in the bigger picture. However, it risks blindness to peripheral opportunities, as rigid focus on the original plan can miss disruptions.1,2

Conversely, an emergent strategy arises organically from bottom-up initiatives, experiments, and adaptations – common in startups like early Walmart, which pivoted from small-town stores after unplanned successes. Christensen notes that over 90% of thriving new businesses succeed not through initial plans but by iterating on emergent learnings, retaining resources to pivot when needed.1,5,6

The quote applies this duality to personal development: start with emergent exploration – trying diverse roles, hobbies, and pursuits – to uncover what aligns talents, interests, and priorities. Once viable paths emerge, switch to deliberate focus for sustained progress. This mirrors Honda’s accidental US motorcycle success, where employees’ side experiments trumped the formal plan.6

Leading Theorists on Emergent and Deliberate Strategy

Christensen built on foundational work by Henry Mintzberg, a Canadian management scholar. In his 1987 paper ‘Crafting Strategy’ and book Strategy Safari, Mintzberg challenged top-down planning, arguing strategies often emerge from patterns in daily actions rather than deliberate designs. He identified strategy as a ‘continuous, diverse, and unruly process’, blending deliberate intent with emergent flexibility – ideas Christensen explicitly referenced.2

  • Henry Mintzberg: Pioneered the emergent strategy concept in the 1970s-80s, critiquing rigid corporate planning. His ’10 Schools of Strategy’ framework contrasts design (deliberate) with learning (emergent) schools.2
  • Michael Porter: Christensen’s contemporary at Harvard, Porter championed deliberate competitive strategy via frameworks like the Five Forces and value chain (1980s). While Porter focused on positioning for advantage, Christensen highlighted how such strategies falter against disruption.1
  • Robert Burgelman: Stanford professor whose research on ‘intraorganisational ecology’ influenced Christensen, showing how autonomous units drive emergent strategies within firms like Intel.5

These theorists collectively underscore strategy’s dual nature: deliberate for execution, emergent for innovation. Christensen uniquely extended this to personal life, making abstract theory accessible for leadership, coaching, and self-management.3,4

Christensen’s insights remain vital for leaders balancing adaptability with purpose, reminding us that true success – in business or life – demands knowing when to explore and when to commit.

References

1. https://online.hbs.edu/blog/post/emergent-vs-deliberate-strategy

2. https://onlydeadfish.co.uk/2014/08/28/emergent-and-deliberate-strategy/

3. https://blog.passle.net/post/102fytx/clayton-christensen-how-to-enjoy-business-and-life-more

4. https://www.azquotes.com/quote/1410310

5. https://www.goodreads.com/work/quotes/138639-the-innovator-s-solution-creating-and-sustaining-successful-growth

6. https://www.businessinsider.com/clay-christensen-theories-in-how-will-you-measure-your-life-2012-7

7. https://www.goodreads.com/author/quotes/1792.Clayton_M_Christensen?page=17

8. https://www.azquotes.com/author/2851-Clayton_Christensen/tag/strategy

9. https://www.mstone.dev/values-how-will-you-measure-your-life/

“What’s important is to get out there and try stuff until you learn where your talents, interests, and priorities begin to pay off. When you find out what really works for you, then it’s time to flip from an emergent strategy to a deliberate one.” - Quote: Clayton Christensen

read more
Quote: Jamie Dimon – JP Morgan Chase CEO

Quote: Jamie Dimon – JP Morgan Chase CEO

“I think the harder thing to measure has always been tech projects. That’s been true my whole life. It’s also been true my whole life, the tech is what changes everything, like everything.” – Jamie Dimon – JP Morgan Chase CEO

Jamie Dimon’s candid observation captures a fundamental tension at the heart of modern business strategy: the profound impact of technology juxtaposed against the persistent challenge of measuring its value. Delivered during JPMorgan Chase’s 2026 Investor Day on 24 February, this remark came amid revelations of the bank’s unprecedented $19.8 billion technology budget – a 10% increase from 2025, with significant allocations to artificial intelligence (AI) projects.1,2,4 As CEO of the world’s largest bank by market capitalisation, Dimon’s perspective is shaped by decades of navigating technological shifts, from the rise of digital banking to the current AI boom.

Jamie Dimon’s Career and Leadership at JPMorgan Chase

Born in 1956 in New York City into a family of Greek descent, Jamie Dimon began his career in finance at American Express in the 1980s, rising rapidly under the mentorship of Sandy Weill. He helped engineer the 1998 merger that created Citigroup but was forced out later that year after an acrimonious split with Weill. Dimon then transformed Bank One from near-collapse into a powerhouse, earning a reputation as a crisis manager. When JPMorgan Chase acquired Bank One in 2004, he joined as president and chief operating officer, becoming chief executive at the end of 2005, a role he has held for two decades.3

Under Dimon’s stewardship, JPMorgan has become a technology leader in banking. The firm employs over 300,000 people, with tens of thousands in tech roles, and invests billions annually in innovation. Dimon has long championed tech as a competitive moat, famously urging investors to ‘trust him’ on spending despite vague ROI metrics. In 2026, this commitment manifests in a tech budget swelled by $2 billion, driven by AI for customer service, personalised insights, and developer tools, amid rising hardware costs from AI chip demand.1,5 Dimon predicts JPMorgan will be a ‘winner’ in the AI race, leveraging its data assets and No. 1 ranking in AI maturity among banks.1,3

Context of the Quote: JPMorgan’s 2026 Strategic Framework

The quote emerged in a Q&A at the 24 February 2026 event, responding to analyst pressure on tech ROI. CFO Jeremy Barnum highlighted technology as a major expense driver, up $9 billion overall, with $1.2 billion in investments including AI. Dimon acknowledged time savings from tech as ‘too vague’ to measure precisely, echoing lifelong observations from mainframes to cloud computing.1,2 This aligns with broader warnings: AI will revolutionise operations but displace jobs, necessitating societal preparation like retraining and phased adoption to avoid shocks, such as mass unemployment from autonomous trucks.4

JPMorgan is aggressively deploying AI – its large language model serves 150,000 users weekly – while planning ‘huge redeployment’ for affected staff. Executives like Marianne Lake stress paranoia in competition, quoting ‘Only the paranoid survive’. Rivals like Bank of America ($14 billion tech spend) underscore the sector-wide arms race.1

Leading Theorists on Technology Measurement and Impact

Dimon’s views resonate with seminal thinkers on technology’s intangible returns. Peter Drucker, the father of modern management, argued from The Practice of Management (1954) onwards that knowledge work resists traditional metrics, prefiguring tech’s measurement woes. He later popularised the notion of the ‘knowledge economy’, emphasising innovation’s long-term value over short-term quantification.

Erik Brynjolfsson and Andrew McAfee, MIT economists, explore this in The Second Machine Age (2014), detailing how digital technologies yield ‘non-rival’ benefits – exponential productivity without proportional costs – hard to capture in GDP or ROI. Their ‘bounty vs. spread’ framework warns of uneven gains, mirroring Dimon’s job displacement concerns.4

Clayton Christensen’s The Innovator’s Dilemma (1997) explains why incumbents struggle with disruptive tech: metrics favour sustaining innovations, blinding firms to transformative ones. JPMorgan’s shift from infrastructure modernisation to AI-ready data exemplifies overcoming this.5

In AI specifically, Nick Bostrom’s Superintelligence (2014) and Stuart Russell’s Human Compatible (2019) address measurement beyond finance – aligning superintelligent systems with human values amid unpredictable impacts. Dimon’s pragmatic focus on phased integration echoes calls for cautious deployment.4

These theorists underscore Dimon’s point: technology’s true worth lies in reshaping ‘everything’, demanding faith in leadership over precise yardsticks. JPMorgan’s strategy embodies this, positioning the bank at the vanguard of finance’s technological frontier.

References

1. https://www.businessinsider.com/jpmorgan-tech-budget-ai-20-billion-jamie-dimon-2026-2

2. https://www.aol.com/articles/jpmorgan-spend-almost-20-billion-000403027.html

3. https://www.benzinga.com/markets/large-cap/26/02/50808191/jamie-dimon-predicts-jpmorgan-will-be-a-winner-in-ai-race-boosts-2026-tech-spend-to-nearly-20-billion

4. https://fortune.com/2026/02/25/jamie-dimon-society-prepare-ai-job-displacement/

5. https://finviz.com/news/321869/how-to-play-jpm-stock-as-tech-spend-ramps-in-2026-amid-ai-uncertainty

6. https://fintechmagazine.com/news/inside-jpmorgans-2026-stock-market-hopes-and-new-london-hq

"I think the harder thing to measure has always been tech projects. That's been true my whole life. It's also been true my whole life, the tech is what changes everything, like everything." - Quote: Jamie Dimon - JP Morgan Chase CEO

read more
Term: World model

Term: World model

“A world model is defined as a learned neural representation that simulates the dynamics of an environment, enabling an AI agent to predict future states and reason about the consequences of its actions.” – World model

A world model is an internal representation of the environment that an AI system creates to simulate the external world within itself. This learned neural representation enables an AI agent to predict future states, simulate the consequences of different actions before executing them in the real world, and reason about causal relationships, much like the human brain does when planning activities.1,3,6

At its core, a world model comprises key components:

  • Transition model: Predicts how the environment’s state changes based on the agent’s actions, such as a robot displacing an object by moving its hand.1
  • Observation model: Determines what the agent observes in each state, incorporating data from sensors, cameras, and other inputs.1
  • Reward model: In reinforcement learning contexts, forecasts rewards or penalties from actions in specific states.1

Unlike traditional machine learning, which maps inputs directly to outputs, world models foster a general understanding of environmental dynamics, enhancing performance in novel situations.1,4
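
To make the three components concrete, here is a minimal sketch of a world-model interface. It is illustrative only: the class and method names are assumptions, the ‘dynamics’ are hand-written toy rules, and real world models learn these mappings as neural networks from data.

```python
# Minimal world-model interface: transition, observation and reward models.
# Purely illustrative; real systems learn these mappings from data.
from dataclasses import dataclass
import random

@dataclass
class State:
    position: float  # toy latent state: a one-dimensional position

class WorldModel:
    def transition(self, state: State, action: float) -> State:
        # Transition model: predict how the state changes given an action.
        return State(position=state.position + action)

    def observe(self, state: State) -> float:
        # Observation model: what the agent's sensors would report (with noise).
        return state.position + random.gauss(0.0, 0.1)

    def reward(self, state: State, action: float) -> float:
        # Reward model: forecast the payoff of an action in a state
        # (here: get close to a goal position of 1.0, penalising effort).
        return -abs(state.position + action - 1.0) - 0.01 * abs(action)

wm = WorldModel()
next_state = wm.transition(State(position=0.0), action=0.5)
print(wm.observe(next_state), wm.reward(next_state, action=0.5))
```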

Key Capabilities and Advantages

World models empower AI with:

  • Causality understanding: Grasping why events occur, beyond mere statistical correlations seen in large language models (LLMs) like GPT.1,2
  • Planning and reasoning: Simulating scenarios internally to select optimal actions, akin to chain-of-thought reasoning.1,3
  • Efficient learning: Requiring fewer examples, similar to a child grasping gravity after minimal observations.1
  • Transfer learning and generalisation: Applying knowledge across domains, such as adapting object manipulation skills.1
  • Intuitive physics: Comprehending basic physical principles, essential for real-world interaction.1,4

Trained on diverse data like videos, photos, audio, and text, world models provide richer grounding in reality than LLMs, which focus on text patterns.2,4,6
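
The ‘planning and reasoning’ capability listed above amounts to rolling candidate action sequences forward inside the learned model and executing the first step of the best-scoring rollout. A minimal, self-contained sketch under toy assumptions (hand-written transition and reward functions standing in for learned models):

```python
# Illustrative planning-by-simulation: roll candidate action sequences forward
# through assumed transition and reward functions, pick the best first action.
import itertools

def transition(state: float, action: float) -> float:
    return state + action                   # assumed toy dynamics

def reward(state: float, action: float) -> float:
    return -abs(state + action - 1.0)       # assumed goal: reach position 1.0

def plan(start: float, horizon: int = 3) -> float:
    best_action, best_return = None, float("-inf")
    for seq in itertools.product([-0.5, 0.0, 0.5], repeat=horizon):
        state, total = start, 0.0
        for a in seq:
            total += reward(state, a)       # predicted payoff of this step
            state = transition(state, a)    # predicted next state
        if total > best_return:
            best_return, best_action = total, seq[0]
    return best_action                      # execute first step, then replan

print(plan(0.0))   # -> 0.5: step towards the goal
```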

Role in Achieving Artificial General Intelligence (AGI)

Prominent figures like Yann LeCun (Meta), Demis Hassabis (Google DeepMind), and Yoshua Bengio (Mila) view world models as crucial for AGI, enabling safe, scientific, and intelligent systems that plan ahead and simulate outcomes.3 Recent advancements, such as DeepMind’s Genie 3 (August 2025), generate diverse 3D environments from text prompts, simulating realistic physics for AI training.1 Runway’s GWM-1 further advances general-purpose simulation for robotics and discovery.5

Best Related Strategy Theorist: Yann LeCun

Yann LeCun, Chief AI Scientist at Meta and a pioneer of convolutional neural networks (CNNs), is the foremost theorist championing world models as foundational for intelligent AI. LeCun describes them as internal predictive models that simulate real-world dynamics, incorporating modules for perception, prediction, cost/reward evaluation, and planning. This allows AI to ‘imagine’ action consequences, vital for robotics, autonomous vehicles, and AGI.2,3

Born in 1960 in France, LeCun earned his PhD in computer science in 1987 from the Université Pierre et Marie Curie in Paris. After postdoctoral work with Geoffrey Hinton in Toronto and a long stint at Bell Labs, he popularised CNNs in the 1980s-1990s for handwriting recognition, helping to found the field of deep learning. He joined New York University as a professor in 2003 and helped establish the NYU Center for Data Science. In 2013, he became the founding director of Facebook AI Research (now Meta AI), whose open-source initiatives include PyTorch.

LeCun’s advocacy for world models stems from his critique of LLMs’ limitations in causal reasoning and physical simulation. He argues they enable ‘objective-driven AI’ with energy-based models for planning, positioning world models as the path beyond pattern-matching to human-like intelligence. A Turing Award winner (2018) with Bengio and Hinton, LeCun’s vision influences labs worldwide, emphasising world models for safe, efficient real-world AI.2,3

References

1. https://deepfa.ir/en/blog/world-model-ai-agi-future

2. https://www.youtube.com/watch?v=qulPOUiz-08

3. https://www.quantamagazine.org/world-models-an-old-idea-in-ai-mount-a-comeback-20250902/

4. https://www.turingpost.com/p/topic-35-what-are-world-models

5. https://runwayml.com/research/introducing-runway-gwm-1

6. https://techcrunch.com/2024/12/14/what-are-ai-world-models-and-why-do-they-matter/

"A world model is defined as a learned neural representation that simulates the dynamics of an environment, enabling an AI agent to predict future states and reason about the consequences of its actions." - Term: World model

read more
Term: AI Data Centre

Term: AI Data Centre

“An AI Data Center is a highly specialized, power-dense physical facility designed specifically to train, deploy, and run artificial intelligence (AI) models, machine learning (ML) algorithms, and generative AI applications.” – AI Data Centre

This specialised facility diverges significantly from traditional data centres, which handle mixed enterprise workloads, by prioritising accelerated compute, ultra-high-bandwidth networking, and advanced power and cooling systems to manage dense GPU clusters and continuous data pipelines for AI tasks like model training, fine-tuning, and inference.1,2,4

Central to its operation are high-performance computing resources such as Graphics Processing Units (GPUs) and Tensor Processing Units (TPUs). GPUs excel in parallel processing, enabling rapid handling of billions of data points essential for AI model training, while TPUs offer tailored efficiency for AI-specific tasks, reducing energy consumption.2,3,5
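
A minimal illustration of why accelerators dominate these facilities: the dense matrix multiplications at the heart of model training map naturally onto thousands of GPU cores. The sketch below assumes PyTorch is installed and falls back to the CPU when no CUDA device is present.

```python
# Minimal sketch: offload a large matrix multiplication (the core operation in
# neural-network training) to a GPU if one is available. Assumes PyTorch.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)

c = a @ b                 # dispatched across thousands of GPU cores in parallel
print(device, c.shape)    # torch.Size([4096, 4096])
```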

High-speed networking is critical, employing technologies like InfiniBand, 400 Gbps Ethernet, and optical interconnects to facilitate seamless data movement across thousands of servers, preventing bottlenecks in distributed AI workloads.2,4

Robust storage systems-including distributed file systems and object storage-ensure swift access to vast datasets, model weights, and real-time inference data, with scalability to accommodate ever-growing AI requirements.1,2,3

Addressing the immense power density, advanced cooling systems are vital, often accounting for 35-40% of energy use, incorporating liquid cooling and thermal zoning to maintain efficiency and low Power Usage Effectiveness (PUE) for sustainability.2,4
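
Power Usage Effectiveness is total facility energy divided by the energy delivered to IT equipment, so a lower value means less overhead for cooling and power distribution. A worked example with assumed figures, consistent with cooling at roughly 35% of total energy use:

```python
# Illustrative PUE calculation (figures are assumptions, not measured data).
it_load_mw = 6.0             # power delivered to servers, GPUs, storage, network
cooling_mw = 3.5             # cooling at ~35% of total facility energy
power_distribution_mw = 0.5  # UPS, switchgear, lighting and other overheads

total_facility_mw = it_load_mw + cooling_mw + power_distribution_mw  # 10.0 MW
pue = total_facility_mw / it_load_mw

print(f"PUE = {pue:.2f}")    # ~1.67; lower is better, and ~1.1-1.2 is a common target
```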

Additional features include data centre automation, network security, and energy-efficient designs, yielding benefits like enhanced performance, scalability, cost optimisation, and support for innovation in fields such as big data analytics, natural language processing, and computer vision.3,5

Key Theorist: Jensen Huang and the GPU Revolution

The foremost strategist linked to the evolution of AI data centres is Jensen Huang, co-founder, president, and CEO of NVIDIA Corporation. Huang’s vision has positioned NVIDIA’s GPUs as the cornerstone of modern AI infrastructure, directly shaping the architecture of these power-dense facilities.2

Born in 1963 in Taiwan, Huang immigrated to the United States as a child. He earned a bachelor’s degree in electrical engineering from Oregon State University and a master’s from Stanford University. In 1993, at age 30, he co-founded NVIDIA with Chris Malachowsky and Curtis Priem, initially targeting 3D graphics for gaming and PCs. Huang recognised the parallel processing power of GPUs, pivoting NVIDIA towards general-purpose computing on GPUs (CUDA platform, launched 2006), which unlocked their potential for scientific simulations, cryptography, and eventually AI.2

Huang’s prescient relationship to AI data centres stems from his early advocacy for GPU-accelerated computing in machine learning. By 2012, Alex Krizhevsky’s use of NVIDIA GPUs to win the ImageNet competition catalysed the deep learning boom, proving GPUs’ superiority over CPUs for neural networks. Under Huang’s leadership, NVIDIA developed AI-specific hardware like A100 and H100 GPUs, Blackwell architecture, and full-stack solutions including InfiniBand networking via Mellanox (acquired 2020). These innovations address AI data centre challenges: massive parallelism for training trillion-parameter models, high-bandwidth interconnects for multi-node scaling, and power-efficient designs for dense racks consuming up to 100kW each.2,4

Huang’s biography reflects relentless innovation; he famously wore a black leather jacket onstage, symbolising his contrarian style. NVIDIA’s market cap surged from under $20 billion in 2015 to over $3 trillion by 2024, propelled by AI demand. His strategic foresight-declaring in 2017 that “the era of AI has begun”-anticipated the hyperscale AI data centre boom, making NVIDIA indispensable to leaders like Microsoft, Google, and Meta. Huang’s influence extends to sustainability, pushing for efficient cooling and low-PUE designs amid AI’s energy demands.4

Today, virtually every major AI data centre relies on NVIDIA technology, underscoring Huang’s role as the architect of the AI infrastructure revolution.

References

1. https://www.aflhyperscale.com/articles/ai-data-center-infrastructure-essentials/

2. https://www.rcrwireless.com/20250407/fundamentals/ai-optimized-data-center

3. https://www.racksolutions.com/news/blog/what-is-an-ai-data-center/

4. https://www.f5.com/glossary/ai-data-center

5. https://www.lenovo.com/us/en/glossary/what-is-ai-data-center/

6. https://www.ibm.com/think/topics/ai-data-center

7. https://www.generativevalue.com/p/a-primer-on-ai-data-centers

8. https://www.sunbirddcim.com/glossary/data-center-components

"An AI Data Center is a highly specialized, power-dense physical facility designed specifically to train, deploy, and run artificial intelligence (AI) models, machine learning (ML) algorithms, and generative AI applications." - Term: AI Data Centre

read more
Quote: Clayton Christensen

Quote: Clayton Christensen

“Culture is a way of working together toward common goals that have been followed so frequently and so successfully that people don’t even think about trying to do things another way. If a culture has formed, people will autonomously do what they need to do to be successful.” – Clayton Christensen – Author

Clayton M. Christensen, the renowned Harvard Business School professor and author, offers a piercing definition of culture that underscores its invisible yet commanding influence on human behaviour. Drawn from his seminal 2012 book How Will You Measure Your Life?, this observation emerges from Christensen’s broader exploration of how personal and professional success hinges on aligning daily actions with enduring principles.1,2 The book, blending business acumen with life lessons, distils decades of research into practical wisdom for leaders, managers, and individuals navigating career and family demands.1,3

Christensen’s Life and Intellectual Journey

Born in 1952 in Salt Lake City, Utah, Christensen rose from humble roots to become one of the most influential thinkers in business strategy. A devout Mormon, he integrated faith with rigorous analysis, viewing truth in science and religion as harmonious.2,4 Educated at Brigham Young University, Oxford as a Rhodes Scholar, and Harvard Business School, he joined Harvard’s faculty in 1992. His breakthrough came with The Innovator’s Dilemma (1997), introducing disruptive innovation – the theory explaining how market-leading firms falter by ignoring low-end or new-market disruptions.5 This framework, applied across industries from steel to smartphones, earned him global acclaim and advisory roles with Intel, Kodak, and others.

Christensen’s later works, including How Will You Measure Your Life?, shift from corporate strategy to personal integrity. Co-authored with James Allworth and Karen Dillon, it warns against marginal compromises – ‘just this once’ temptations – that erode character over time.3 He argued management is ‘the most noble of professions’ when it fosters growth, motivation, and ethical behaviour.2,3 Stricken with cancer and passing in 2020, Christensen left a legacy of over 150,000 citations and millions of books sold, emphasising that true metrics of life lie in helping others become better people.2,4

The Context of the Quote in Christensen’s Philosophy

In How Will You Measure Your Life?, the quote illuminates how organisations – and lives – succeed through ingrained habits. Christensen posits that culture forms when proven paths to common goals become automatic, enabling autonomous action without constant oversight.1 This ties to his ‘resources, processes, priorities’ (RPP) framework: resources fuel action, processes habitualise it, and priorities direct it.2,4 A strong culture aligns these, creating ‘seamless webs of deserved trust’ that propel success, echoing his warnings against short-termism where leaders chase loud demands over lasting value.3

He contrasts virtuous cultures fostering positive-sum interactions and lucky breaks with toxic ones breeding zero-sum games and isolation.3 For leaders, cultivating culture means framing work to motivators – purpose, progress, relationships – so employees end days fulfilled, much like Christensen’s own ‘good day’ model.2

Leading Theorists on Organisational Culture

Christensen’s views build on foundational theorists who dissected culture’s role in management and leadership.

  • Edgar Schein (1928-2023): In Organizational Culture and Leadership (1985), Schein defined culture as ‘a pattern of shared basic assumptions’ learned through success, mirroring Christensen’s ‘frequently and successfully followed’ paths. Schein’s levels – artefacts, espoused values, basic assumptions – explain why entrenched cultures resist change, much like Christensen’s processes becoming ‘crushing liabilities’.5
  • Charles Handy (1932-2024): The Irish management guru’s Understanding Organizations (1976) classified cultures (power, role, task, person), influencing Christensen’s emphasis on autonomous success. Handy’s ‘gods of management’ archetypes underscore culture’s ritualistic hold.
  • Stephen Covey (1932-2012): In The 7 Habits of Highly Effective People (1989), Covey urged ‘keeping the main thing the main thing’ via principle-centred leadership, aligning with Christensen’s priorities and family-career balance.3
  • Peter Drucker (1909-2005): The ‘father of modern management’ is widely credited with the maxim ‘culture eats strategy for breakfast’, which Christensen echoed by prioritising cultural processes over mere resources.5
  • Charles Munger (1924-2023): Berkshire Hathaway’s vice chairman complemented Christensen, praising ‘the right culture’ as a ‘seamless web of deserved trust’ enabling weak ties and serendipity.3

These thinkers collectively affirm culture as the bedrock of sustained performance, where unconscious alignment trumps enforced compliance. Christensen’s insight, rooted in their legacy, equips leaders to build environments where success feels inevitable.

References

1. https://www.goodreads.com/quotes/7256080-culture-is-a-way-of-working-together-toward-common-goals

2. https://www.toolshero.com/toolsheroes/clayton-christensen/

3. https://www.skmurphy.com/blog/2020/02/16/clayton-christensen-on-how-will-you-measure-your-life/

4. https://quotefancy.com/clayton-m-christensen-quotes/page/2

5. https://www.azquotes.com/author/2851-Clayton_Christensen

6. https://memories.lifeweb360.com/clayton-christensen/a0d52888-de6d-4246-bce9-26d9aaee0aac

“Culture is a way of working together toward common goals that have been followed so frequently and so successfully that people don’t even think about trying to do things another way. If a culture has formed, people will autonomously do what they need to do to be successful.” - Quote: Clayton Christensen

read more
Quote: Jeremy Barnum – Executive VP and CFO of JP Morgan Chase

Quote: Jeremy Barnum – Executive VP and CFO of JP Morgan Chase

“We’re growing. We’re onboarding new clients. In many cases, I’m looking at some of my colleagues on the corporate and investment bank, the growth in new clients comes with lending. That lending is relatively low returning then you eventually get other business. So yes, that’s an example of an investment today that as it matures, has higher returns.” – Jeremy Barnum – Executive VP & CFO of JP Morgan Chase

Jeremy Barnum, Executive Vice President and Chief Financial Officer of JPMorgan Chase, shared this perspective during a strategic framework and firm overview executive Q&A on 24 February 2026. His remarks underscore a core tenet of modern banking: initial client acquisition often demands upfront investments in low-margin activities like lending, which pave the way for higher-return opportunities as relationships mature.

Barnum’s career trajectory exemplifies the blend of analytical rigour and strategic foresight essential for leading one of the world’s largest financial institutions. He has spent the bulk of his career at JPMorgan, rising through senior finance roles including chief financial officer of the corporate and investment bank and head of global research. In 2021, he was appointed firmwide CFO, succeeding Jennifer Piepszak, who later became co-CEO of the commercial and investment bank. Under Barnum’s stewardship, JPMorgan has navigated volatile markets, including the acquisition of Goldman Sachs’ Apple Card portfolio, which contributed to a $2.2 billion pre-tax credit reserve build in Q4 2025, even as net income reached $13 billion and revenue climbed 7% to $46.8 billion.1

In the broader context of this quote, Barnum was addressing investor concerns about growth dynamics in the corporate and investment banking (CIB) division. New client onboarding frequently begins with lending – a relatively low-return activity due to compressed margins and credit risks – but evolves into a fuller ecosystem of services, including advisory, trading, and capital markets activities that deliver superior profitability over time. This ‘investment today for returns tomorrow’ model aligns with JPMorgan’s 2026 expense projections of $105 billion, driven by ‘structural optimism’ and the imperative to invest in technology, AI, and competitive positioning against fintech challengers like Revolut and SoFi, as well as traditional rivals like Charles Schwab.1
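
The ‘investment today for returns tomorrow’ logic can be made concrete with a stylised client-lifetime calculation. All figures below are hypothetical assumptions for illustration, not JPMorgan data: thin lending margins in the first two years, followed by higher-margin advisory, markets, and treasury revenue, discounted back to today.

```python
# Stylised client-relationship NPV: low-return lending up front, higher-return
# business as the relationship matures. All figures are hypothetical ($m).

discount_rate = 0.10
# Years 1-2: thin lending spreads; years 3-5: advisory, markets, treasury fees.
cash_flows = [0.5, 0.8, 3.0, 4.0, 5.0]

npv = sum(cf / (1 + discount_rate) ** (year + 1)
          for year, cf in enumerate(cash_flows))

print(f"Relationship NPV: ${npv:.1f}m")  # roughly $9.2m on these assumptions
```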

The discussion occurred against a backdrop of heightened competitive and regulatory pressures. Just weeks earlier, in January 2026, Barnum warned of the perils of President Donald Trump’s proposed 10% cap on credit card interest rates, arguing it would curtail credit access for higher-risk borrowers – ‘the people who need it the most’ – and force lenders to scale back operations in a fiercely competitive landscape.2,3 Consumer and community banking revenue rose 6% year-over-year to $19.4 billion, bolstered by 7% growth in card services, yet such policies threaten this momentum. JPMorgan’s tech budget is set to surge by $2 billion to $19.8 billion in 2026, emphasising investments to maintain primacy.5

Leading theorists on relationship banking and client lifecycle management provide intellectual foundations for Barnum’s approach. Jay R. Ritter, a pioneer in IPO and capital-raising research at the University of Florida, has long documented how issuers leave money on the table through underpriced initial public offerings yet gain access to deeper capital markets over time – a parallel to banking’s lending-to-ecosystem progression. Similarly, Arnoud W.A. Boot, a professor at the University of Amsterdam, argues in work such as ‘Relationship Banking: What Do We Know?’ (2000) that banks derive sustained value from proprietary, borrower-specific information built through ongoing relationships, transforming low-margin entry points into high-return, sticky business.

Robert M. Townsend, the MIT economist behind foundational models of costly state verification and financial contracting, extends this through work showing how repeated interactions help lenders mitigate asymmetric information, justifying upfront lending as a commitment device for future profitability. More contemporarily, Viral V. Acharya of NYU Stern emphasises in IMF and BIS papers the ‘credit ecosystem’ where initial low-yield loans signal credibility, unlocking cross-selling in a post-2008 regulatory environment marked by Basel III capital constraints. These frameworks validate JPMorgan’s strategy: lending as the ‘hook’ in a maturing client portfolio amid rising competition and policy risks.

Barnum’s comments, delivered mere hours before this analysis was written on 25 February 2026, reflect real-time strategic clarity. As JPMorgan projects resilience in consumer and small business segments, this philosophy positions the firm to convert today’s investments into enduring leadership.1,4

References

1. https://fortune.com/2026/01/14/jpmorgan-ceo-cfo-staying-competitive-requires-investment/

2. https://www.businessinsider.com/jpmorgan-warning-on-credit-card-cap-interest-2026-1

3. https://neworleanscitybusiness.com/blog/2026/01/13/jpmorgan-credit-card-rate-cap-warning/

4. https://www.marketscreener.com/news/jpmorgan-cfo-jeremy-barnum-speaks-at-investor-update-ce7e5dd3db8ff425

5. https://www.aol.com/news/jpmorgan-spend-almost-20-billion-000403027.html

"We're growing. We're onboarding new clients. In many cases, I'm looking at some of my colleagues on the corporate and investment bank, the growth in new clients comes with lending. That lending is relatively low returning then you eventually get other business. So yes, that's an example of an investment today that as it matures, has higher returns." - Quote: Jeremy Barnum - Executive VP & CFO of JP Morgan Chase

read more
Term: Edge devices

Term: Edge devices

“Edge devices are physical computing devices located at the ‘edge’ of a network, close to where data is generated or consumed, that run AI algorithms and models locally rather than relying exclusively on a centralised cloud or data center.” – Edge devices

Edge devices integrate edge computing with artificial intelligence, enabling real-time data processing on interconnected hardware such as sensors, Internet of Things (IoT) devices, smartphones, cameras, and industrial equipment. This local execution reduces latency to milliseconds, enhances privacy by retaining data on-device, and alleviates network bandwidth strain from constant cloud transmission.1,4,5

Unlike traditional cloud-based AI, where data travels to remote servers for computation, edge devices perform tasks like predictive analytics, anomaly detection, speech recognition, and machine vision directly at the source. This supports applications in autonomous vehicles, smart factories, healthcare monitoring, retail systems, and wearable technology.2,3,6

Key Characteristics and Benefits

  • Low Latency: Processes data in real time without cloud round-trips, critical for time-sensitive scenarios like defect detection in manufacturing.3,4
  • Bandwidth Efficiency: Reduces data transfer volumes by analysing locally and sending only aggregated insights to the cloud.1,5
  • Enhanced Privacy and Security: Keeps sensitive data on-device, mitigating breach risks during transmission.5,6
  • Offline Capability: Operates without constant internet connectivity, ideal for remote or unreliable networks.6,8
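
A minimal sketch of the local-processing pattern described above: run inference on-device, keep raw readings local, and transmit only an aggregated summary upstream. The threshold, scoring rule, and payload shape are hypothetical; a real deployment would use an optimised on-device runtime rather than a hand-written scoring function.

```python
# Illustrative edge pattern: score each sensor reading locally and upload only
# an aggregated summary to the cloud. All names and thresholds are hypothetical.
import statistics

def local_anomaly_score(reading: float) -> float:
    # Stand-in for an on-device model (e.g. a small quantised neural network).
    return abs(reading - 20.0) / 20.0

def summarise(readings: list) -> dict:
    scores = [local_anomaly_score(r) for r in readings]
    return {
        "samples": len(readings),
        "mean_score": round(statistics.mean(scores), 3),
        "max_score": round(max(scores), 3),
        "alert": max(scores) > 0.5,   # flag only when something looks anomalous
    }

readings = [19.8, 20.1, 20.4, 35.2, 20.0]   # raw data never leaves the device
print(summarise(readings))                  # only this summary is uploaded
```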

Best Related Strategy Theorist: Dr. Andrew Chi-Chih Yao

Dr. Andrew Chi-Chih Yao, a pioneering computer scientist, stands as the most relevant strategy theorist linked to edge devices through his foundational contributions to distributed computing and efficient algorithms, which underpin modern edge AI architectures. Born in Shanghai, China, in 1946, Yao earned a PhD in physics from Harvard University in 1972 and a second PhD in computer science from the University of Illinois in 1975. He held faculty positions at MIT, Princeton, and Stanford before joining Tsinghua University in 2004 as Director of the Institute for Interdisciplinary Information Sciences (IIIS), widely regarded as a springboard for Chinese talent in computer science.

Yao’s relationship to edge devices stems from his seminal theoretical work. The Yao minimax principle (1977) relates the performance of randomised algorithms to that of deterministic algorithms on worst-case inputs, a cornerstone of analysing computation under resource constraints, while his introduction of communication complexity (1979) studies the minimum data exchange required between parties computing a joint function. Minimising communication between nodes mirrors edge devices’ strategy of local inference to cut cloud dependency – a core tenet echoed in edge AI literature.1,7

A Turing Award winner (2000) for contributions to the theory of computation, Yao’s strategic vision emphasises scalable, efficient computing at the periphery, shaping industries from IoT to AI. His mentorship of generations of computer scientists through Tsinghua’s elite ‘Yao Class’ further extends his influence on practical deployments of edge technologies.

References

1. https://www.ibm.com/think/topics/edge-ai

2. https://www.micron.com/about/micron-glossary/edge-ai

3. https://zededa.com/glossary/edge-ai-computing/

4. https://www.flexential.com/resources/blog/beginners-guide-ai-edge-computing

5. https://www.splunk.com/en_us/blog/learn/edge-ai.html

6. https://www.f5.com/glossary/what-is-edge-ai

7. https://www.cisco.com/site/us/en/learn/topics/artificial-intelligence/what-is-edge-ai.html

8. https://blogs.nvidia.com/blog/what-is-edge-ai/

"Edge devices are physical computing devices located at the 'edge. of a network, close to where data is generated or consumed, that run AI algorithms and models locally rather than relying exclusively on a centralised cloud or data center." - Term: Edge devices

read more
Quote: David Viscott – Psychiatrist

Quote: David Viscott – Psychiatrist

“The purpose of life is to discover your gift. The work of life is to develop it. The meaning of life is to give your gift away.” – David Viscott – Psychiatrist

David Steven Viscott (1938-1996) was an American psychiatrist whose career fundamentally reshaped how mental health advice reached the general public. Born in Boston and educated at Dartmouth College and Tufts Medical School, Viscott emerged as one of the most influential figures in the history of therapeutic broadcasting, pioneering a distinctive approach to psychological counselling that prioritised speed, clarity and direct confrontation with uncomfortable truths.

The Revolutionary Radio Therapist

In 1980, Viscott made a pivotal decision that would define his legacy: he became one of the first psychiatrists with a medical degree to launch a full-time call-in radio show. Broadcasting from KABC-AM in Los Angeles, he transformed late-night radio into a therapeutic space where thousands of listeners could eavesdrop on-and learn from-the real struggles of callers seeking guidance. From 1980 until April 1993, Viscott became what his business partner Matt Small described as “everyone’s drive-time friend for years,” diagnosing callers’ emotional difficulties within minutes of hearing their problems and dispensing what became known as “tough love” therapy.

What distinguished Viscott from his contemporaries was his methodical approach. He called his technique the “Viscott Method,” a framework built on three foundational pillars: speed, simplicity and relentless pursuit of truth. Viscott held an unshakeable conviction that without confronting reality head-on, no individual could adequately address their underlying difficulties. This philosophy wasn’t merely rhetorical-it was operationalised through his therapeutic centres. In 1984, he established the Viscott Institute, which expanded into a chain of three Viscott Centers for Natural Therapy across Southern California, where trained therapists applied his methods in short-term interventions. The model was radical for its time: four sessions maximum, and clients departed with cassette recordings of their therapy and workbooks designed to facilitate self-discovery.

The Philosophy of Purpose and Gift

The quote attributed to Viscott-“The purpose of life is to discover your gift. The work of life is to develop it. The meaning of life is to give your gift away”-encapsulates the philosophical core of his therapeutic vision. This formulation appeared in his 1993 work Finding Your Strength in Difficult Times, a text that synthesised decades of clinical observation and radio counselling into actionable wisdom for readers navigating personal crises.

Viscott’s tripartite framework reflects a humanistic psychology tradition that emphasises self-actualisation and purposeful living. The concept of discovering one’s “gift”-one’s unique capacities and reason for existing-became central to his therapeutic brand. He believed that psychological distress often stemmed from individuals failing to recognise or develop their inherent talents, and that genuine healing required not merely symptom relief but existential clarity. The progression from discovery to development to generosity represents a maturation of consciousness: from self-awareness through disciplined growth to transcendent contribution.

This philosophy resonated powerfully with 1980s and 1990s audiences seeking meaning beyond material accumulation. Viscott positioned psychological work as inseparable from spiritual purpose, offering listeners a secular yet profound answer to questions of meaning that had traditionally belonged to religious or philosophical domains.

Intellectual Lineage and Theoretical Context

Viscott’s thinking emerged from and contributed to several significant currents in twentieth-century psychology and psychiatry. His emphasis on rapid diagnosis and direct intervention reflected the influence of brief therapy models that gained prominence in the 1960s and 1970s, particularly the work of Albert Ellis and his Rational Emotive Behaviour Therapy (REBT), which similarly prioritised identifying core beliefs and challenging them directly.

The humanistic psychology movement, championed by figures such as Carl Rogers and Abraham Maslow, profoundly shaped Viscott’s conception of the therapeutic relationship and human potential. Maslow’s hierarchy of needs and his concept of self-actualisation-the realisation of one’s full potential-provided theoretical scaffolding for Viscott’s insistence that discovering and developing one’s gift represented not a luxury but a psychological necessity. Where Maslow theorised that self-actualisation was the pinnacle of human motivation, Viscott operationalised this insight through accessible therapeutic techniques and media platforms.

Viscott also drew from existential psychology, particularly the work of Viktor Frankl, whose Man’s Search for Meaning (1946) argued that the primary human motivation was the search for meaning rather than pleasure or power. Frankl’s assertion that individuals could find purpose even in suffering aligned closely with Viscott’s therapeutic stance. The notion that meaning emerges through contribution-through “giving your gift away”-echoes Frankl’s emphasis on transcendence through service and creative expression.

Additionally, Viscott’s work reflected the broader cultural moment of the 1970s and 1980s, when self-help literature and therapeutic culture began permeating mainstream consciousness. Psychologist Joyce Brothers had pioneered radio psychology in the 1950s, discussing previously taboo topics such as sexual dysfunction. However, it was psychologist Toni Grant who, in the 1970s, revolutionised the format by taking live calls on air in Los Angeles-a model Viscott adopted and refined. Viscott’s innovation was to combine psychiatric training with McDonald’s-like efficiency, creating a scalable therapeutic model that democratised access to professional psychological guidance.

The Author and His Works

Viscott’s prolific authorship complemented his broadcasting career. His autobiography, The Making of a Psychiatrist (1973), became a bestseller, earned selection as a Book of the Month Club Main Selection, and received nomination for the Pulitzer Prize. The work offered readers an intimate account of psychiatric training whilst questioning professional orthodoxies-a dual achievement that established Viscott as both insider and critic of his discipline.

His subsequent publications-including The Language of Feelings (1975), Risking (1976), I Love You, Let’s Work It Out, The Viscott Method, and Emotional Resilience (1993)-consistently emphasised self-examination, emotional literacy and purposeful living. These works translated his radio methodology into literary form, allowing readers to apply his techniques independently. Finding Your Strength in Difficult Times (1993), which contains the gift-centred philosophy quoted above, represented a culmination of his thinking, offering guidance for individuals confronting life’s most challenging moments.

Legacy and Paradox

Viscott’s career embodied a profound paradox. The psychiatrist who authored Emotional Resilience and built a therapeutic empire around rapid problem-solving proved unable to resolve his own deepest difficulties. He died in October 1996, alone and financially depleted, apparently from heart disease. Friends and colleagues noted that despite his public confidence and therapeutic acumen, Viscott struggled with significant personal insecurities rooted in childhood experiences-his father’s emotional distance, anxieties about his physical appearance and stature, and an ego that, whilst driving his professional ambitions, simultaneously alienated those closest to him.

Yet this contradiction does not diminish his contribution. Viscott’s greatest achievement was recognising that psychological healing and personal meaning were not luxuries reserved for the wealthy or the analytically inclined, but fundamental human needs that could be addressed through accessible, direct intervention. His radio shows reached hundreds of thousands of listeners who might never have entered a therapist’s office. His books provided frameworks for self-understanding that transcended clinical jargon. His philosophy-that life’s purpose centres on discovering, developing and sharing one’s unique gifts-offered a secular yet spiritually resonant answer to existential questions that continue to preoccupy contemporary audiences.

The quote itself endures because it captures something essential: the conviction that human flourishing requires not merely the absence of suffering but the active pursuit of purpose, the disciplined cultivation of talent, and the generous contribution of one’s capacities to the world. In an era of increasing psychological fragmentation and meaning-seeking, Viscott’s tripartite formula remains a compelling articulation of what a purposeful life might entail.

References

1. https://en.wikipedia.org/wiki/David_Viscott

2. https://www.dorchesteratheneum.org/project/david-viscott-1938-1996/

3. https://www.latimes.com/archives/la-xpm-1996-10-15-me-54130-story.html

4. https://www.latimes.com/archives/la-xpm-1997-01-26-tm-22135-story.html

5. https://www.goodreads.com/book/show/1215412.The_Making_of_a_Psychiatrist

6. https://books.google.com/books/about/The_Making_of_a_Psychiatrist.html?id=93uZzobqDhwC

7. https://www.thriftbooks.com/w/the-making-of-a-psychiatrist_david-viscott/588808/

"The purpose of life is to discover your gift. The work of life is to develop it. The meaning of life is to give your gift away" - Quote: David Viscott

read more
Quote: Troy Rohrbaugh – Co-CEO of JP Morgan Chase Commercial and Investment Bank

Quote: Troy Rohrbaugh – Co-CEO of JP Morgan Chase Commercial and Investment Bank

“We’re doing a lot of lending. We’re not doing it to develop assets, like that’s not what we do. We’re doing it to be in the ecosystem to create a halo effect with our clients and create velocity in our portfolios.” – Troy Rohrbaugh – Co-CEO of JP Morgan Chase Commercial & Investment Bank

Troy Rohrbaugh’s statement encapsulates a fundamental shift in how leading investment banks approach credit deployment in the modern financial ecosystem. Rather than pursuing direct lending as a standalone profit centre-a strategy that has increasingly exposed competitors to concentration risk and late-cycle credit deterioration-JPMorgan’s Co-CEO of the Commercial & Investment Bank articulates a relationship-centric model that treats lending as a strategic tool for deepening client engagement and accelerating capital velocity across the firm’s broader platform.

The Context: A Decade of Market Evolution

Rohrbaugh’s remarks arrive at a critical inflection point in capital markets. The past decade has witnessed the proliferation of specialised direct lending vehicles, private credit funds, and non-bank lenders that have fundamentally altered the competitive landscape for traditional investment banks. What began as a niche alternative to syndicated lending has evolved into a multi-trillion-dollar asset class, with some estimates suggesting global private credit markets now exceed $2 trillion in assets under management.

This expansion has created both opportunity and peril. Whilst direct lending has provided crucial capital to mid-market companies and sponsors during periods of traditional bank retrenchment, it has also incentivised a race-to-the-bottom mentality amongst certain participants. Asset aggregators-firms whose primary objective is to accumulate loans for fee generation rather than client service-have increasingly dominated deal flow, often accepting looser covenants, higher leverage multiples, and weaker documentation standards in pursuit of volume.

JPMorgan’s strategic positioning directly challenges this paradigm. By explicitly rejecting the asset-accumulation model, Rohrbaugh signals that the bank views direct lending not as a destination but as a waypoint within a comprehensive client relationship architecture.

The Strategic Rationale: Ecosystem Integration

The concept of the “halo effect” that Rohrbaugh references deserves particular attention. In organisational behaviour and marketing theory, the halo effect describes the cognitive bias whereby positive impressions in one domain influence perceptions across other domains. Applied to investment banking, this principle suggests that a bank’s willingness to provide flexible, relationship-oriented credit solutions-even at modest spreads-generates disproportionate downstream value through increased advisory mandates, capital markets activity, and treasury services.

This approach reflects a maturation in how sophisticated financial institutions conceptualise competitive advantage. Rather than optimising for individual transaction profitability, JPMorgan is optimising for relationship depth and cross-selling velocity. A client receiving direct lending support during a period when traditional bank credit is constrained develops institutional loyalty that translates into preferred status for subsequent M&A advisory, equity capital markets mandates, and treasury services.

The “velocity in our portfolios” component of Rohrbaugh’s statement refers to the acceleration of capital deployment and redeployment across JPMorgan’s various business lines. By maintaining direct lending capacity, the bank ensures it can respond rapidly to client needs, thereby increasing the frequency and volume of client interactions and transactions.

Theoretical Foundations: Relationship Banking and Stakeholder Capitalism

Rohrbaugh’s philosophy aligns with contemporary academic and practitioner discourse on relationship banking – a model that emphasises long-term client partnerships over transactional efficiency. This approach has deep historical roots in European banking traditions, particularly in Germany and Switzerland, where universal banks have long maintained comprehensive client relationships spanning lending, advisory, and capital markets services.

The intellectual architecture supporting this strategy draws from several theoretical traditions. First, the resource-based view of competitive advantage, articulated by strategist Jay Barney and others, suggests that sustainable competitive advantage derives not from individual transactions but from difficult-to-replicate relationship assets and institutional knowledge. JPMorgan’s direct lending capability, when deployed through a relationship lens, becomes precisely such an asset – difficult for pure-play asset managers to replicate because it requires deep industry expertise, credit judgment, and client intimacy.

Second, stakeholder capitalism theory – increasingly influential amongst institutional investors and regulators – posits that long-term firm value creation requires balancing the interests of multiple stakeholders: clients, employees, shareholders, and communities. By positioning direct lending as a client service rather than a profit centre, JPMorgan implicitly adopts a stakeholder framework that prioritises client outcomes alongside shareholder returns. This positioning has become strategically valuable as institutional investors increasingly scrutinise governance and stakeholder alignment.

Third, the concept of “solution-agnostic” banking – which JPMorgan executives have explicitly articulated – reflects principles from systems thinking and complexity theory. Rather than constraining clients to a predetermined menu of products, solution-agnostic banking treats each client situation as unique and selects from the full array of available tools. This requires organisational flexibility, deep expertise across multiple domains, and a culture that rewards relationship managers for identifying optimal solutions rather than maximising individual product sales.

The Competitive Landscape: Distinguishing JPMorgan’s Approach

JPMorgan’s direct lending strategy, as articulated by Rohrbaugh, stands in sharp contrast to the approaches adopted by several competitors. Whilst some investment banks have pursued direct lending primarily as a capital deployment vehicle – seeking to generate attractive risk-adjusted returns through proprietary credit selection – JPMorgan has deliberately constrained its direct lending exposure to approximately $14 billion on its own balance sheet, with an announced capacity of up to $50 billion.

This measured approach reflects several strategic calculations. First, it acknowledges the late-cycle credit environment that prevailed in early 2026. Rohrbaugh himself noted that base market volatility remained significantly elevated compared to pre-COVID levels, creating conditions where credit risk was being systematically underpriced. By limiting direct lending exposure, JPMorgan reduced its vulnerability to the credit deterioration that subsequently materialised in certain segments of the private credit market.

Second, the emphasis on underwriting standards – Rohrbaugh noted that JPMorgan’s direct lending assets are underwritten using the same rigorous standards applied to its core commercial and industrial (C&I) lending book – reflects a commitment to through-the-cycle credit quality. This contrasts sharply with certain competitors who adopted more lenient underwriting standards to compete for market share in a crowded direct lending environment.

Third, the integration of direct lending within a broader relationship banking framework allows JPMorgan to maintain pricing discipline. Rather than competing on spread in a commoditised direct lending market, the bank can justify premium pricing by offering comprehensive solutions and relationship depth that pure-play lenders cannot replicate.

Intellectual Influences: Modern Banking Theory

The theoretical foundations underlying Rohrbaugh’s approach reflect the influence of several contemporary banking theorists and practitioners. Anat Admati and Martin Hellwig, in their influential work on bank regulation and systemic risk, have emphasised the importance of relationship banking in maintaining financial stability. Their research suggests that banks focused on long-term client relationships develop superior credit judgment and are less prone to the herding behaviour that characterises transaction-focused institutions.

Similarly, the work of Viral Acharya and others on the shadow banking system has highlighted the risks associated with non-bank lenders that lack the regulatory oversight and capital requirements imposed on traditional banks. By positioning JPMorgan’s direct lending within a regulated, capital-constrained framework, Rohrbaugh implicitly acknowledges these systemic considerations.

The concept of “ecosystem” that Rohrbaugh invokes also reflects contemporary thinking in platform economics and network effects. Scholars such as Geoffrey Parker, Marshall Van Alstyne, and Sangeet Paul Choudary have documented how platform businesses create value through network effects – the phenomenon whereby the value of a platform increases as more participants join. Applied to investment banking, JPMorgan’s ecosystem strategy suggests that the bank’s value proposition strengthens as it deepens its integration with clients across multiple service dimensions.

Practical Implementation: The 2026 Strategic Framework

Rohrbaugh’s philosophy translated into concrete strategic initiatives during 2026. JPMorgan announced a $1.5 trillion Sustainable and Responsible Investment (SRI) initiative, representing a 50 per cent increase from its historical $1 trillion deployment across technology, healthcare, and diversified industries. This initiative exemplifies the ecosystem approach: rather than treating sustainable finance as a separate product line, JPMorgan integrated it across its lending, advisory, and capital markets capabilities.

The bank’s expansion of its direct lending capacity to $50 billion, coupled with approximately $25 billion in partner capital, reflected a deliberate strategy to position itself as a comprehensive credit solutions provider without pursuing asset accumulation for its own sake. This positioning proved prescient, as the private credit market experienced significant stress in subsequent months, with certain non-bank lenders facing liquidity challenges and valuation pressures.

JPMorgan’s guidance for 2026 reflected confidence in this strategy. The bank projected mid-teens growth in investment banking fees and markets revenue, with potential for high-teens growth if market conditions remained constructive. Critically, this guidance was premised not on direct lending profitability but on the halo effects generated by comprehensive client service.

The Broader Implications: A Paradigm Shift in Investment Banking

Rohrbaugh’s articulation of JPMorgan’s direct lending philosophy signals a potential paradigm shift in how leading investment banks conceptualise their competitive positioning. Rather than pursuing specialisation and product-line optimisation – the dominant strategy of the 1990s and 2000s – the most sophisticated institutions are returning to relationship banking principles whilst leveraging technology and data analytics to enhance execution.

This shift reflects several underlying forces. First, the commoditisation of traditional investment banking services – driven by technology, regulatory standardisation, and increased competition – has compressed margins on individual transactions. This creates incentives for banks to increase transaction frequency and breadth rather than optimising individual transaction profitability.

Second, the rise of alternative asset managers and non-bank lenders has fragmented the financial ecosystem, creating opportunities for traditional banks to position themselves as integrators and orchestrators of diverse capital sources. JPMorgan’s direct lending strategy, viewed through this lens, represents an attempt to maintain relevance in an increasingly fragmented financial landscape.

Third, the increasing sophistication of institutional clients – particularly large sponsors and multinational corporations – has created demand for integrated solutions that transcend traditional product boundaries. Clients increasingly expect their primary financial advisors to provide seamless access to debt capital, equity capital, advisory services, and treasury solutions. Banks that can deliver this integration command premium valuations and client loyalty.

Risk Considerations and Market Validation

Rohrbaugh’s confidence in JPMorgan’s approach was validated by subsequent market developments. During the period immediately following his February 2026 remarks, the private credit market experienced significant stress, with certain non-bank lenders facing liquidity challenges and forced asset sales. JPMorgan’s measured approach to direct lending – constrained exposure, rigorous underwriting, and relationship focus – positioned the bank to capitalise on opportunities whilst avoiding the losses that befell more aggressive competitors.

The bank’s emphasis on underwriting standards proved particularly valuable. As credit conditions deteriorated, the superior credit quality of JPMorgan’s direct lending portfolio provided a competitive advantage, enabling the bank to maintain client relationships and expand market share amongst sponsors seeking reliable capital sources.

Rohrbaugh’s statement that he was “shocked that people are shocked” by private credit market stress reflected a sophisticated understanding of late-cycle dynamics. Rather than viewing credit deterioration as a surprise, JPMorgan’s leadership had anticipated elevated credit risk and positioned the firm accordingly.

Conclusion: A Sustainable Model for Modern Investment Banking

Troy Rohrbaugh’s articulation of JPMorgan’s direct lending philosophy – emphasising ecosystem integration, halo effects, and portfolio velocity over asset accumulation – represents a coherent strategic framework for navigating the complexities of modern investment banking. By explicitly rejecting the asset-aggregation model that characterises certain competitors, JPMorgan positions itself as a relationship-centric institution capable of delivering comprehensive solutions to sophisticated clients.

This approach reflects deep theoretical foundations in relationship banking, stakeholder capitalism, and platform economics, whilst remaining grounded in practical considerations of credit risk management and competitive positioning. As the financial services industry continues to evolve, Rohrbaugh’s philosophy offers a template for how traditional investment banks can maintain relevance and profitability in an increasingly fragmented and competitive landscape.

References

1. https://fintool.com/news/jpmorgan-ubs-conference-2026-capital-markets-outlook

2. https://www.investing.com/news/stock-market-news/jpmorgans-rohrbaugh-optimistic-on-2026-investment-banking-outlook-93CH-4497226

3. https://fintool.com/news/jpmorgan-private-credit-warning-q1-guidance

4. https://www.trustfinance.com/blog/jpmorgan-positive-2026-investment-banking-outlook

5. https://www.stocktitan.net/sec-filings/JPM/8-k-jpmorgan-chase-co-reports-material-event-3dab6edaae1a.html

6. https://www.morningstar.com/news/marketwatch/2026022425/im-shocked-that-people-are-shocked-says-jpmorgan-executive-about-private-credit-meltdown

"We're doing a lot of lending. We're not doing it to develop assets, like that's not what we do. We're doing it to be in the ecosystem to create a halo effect with our clients and create velocity in our portfolios." - Quote: Troy Rohrbaugh - Co-CEO of JP Morgan Chase Commercial & Investment Bank

read more
Term: Markov model

Term: Markov model

“A Markov model is a statistical tool for stochastic (random) processes where the future state depends only on the current state, not the entire past history-this is the Markov Property or “memoryless” property, making them useful for modeling systems like weather, finance, etc.” – Markov model

A Markov model is a statistical tool for stochastic (random) processes where the future state depends only on the current state, not the entire past history. This defining characteristic is known as the Markov property or “memoryless” property, rendering it highly effective for modelling systems such as weather patterns, financial markets, speech recognition, and chronic diseases in healthcare.1,2,4,5

Core Principles and Components

The simplest form is the Markov chain, which represents systems with fully observable states. It models transitions between states using a transition matrix, where rows denote current states and columns indicate next states, with each row’s probabilities summing to one. Graphically, states are circles connected by arrows labelled with transition probabilities.1,2,4

Formally, for a discrete-time Markov chain, the probability of transitioning from state i to state j is given by the transition matrix P, where P_ij = Pr(X_{t+1} = j | X_t = i). The distribution over states at time t then follows Pr(X_t = j) = Σ_i Pr(X_{t-1} = i) P_ij.4
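
The following minimal sketch illustrates these definitions in Python with NumPy, assuming a purely hypothetical two-state weather chain (the numbers are illustrative, not drawn from any source): it propagates a distribution over states through the transition matrix and simulates a trajectory in which each step depends only on the current state.

```python
import numpy as np

# Hypothetical two-state weather chain: state 0 = sunny, state 1 = rainy.
# Row i gives Pr(X_{t+1} = j | X_t = i); each row sums to one.
P = np.array([[0.8, 0.2],
              [0.4, 0.6]])

# Propagate a distribution over states: Pr(X_t) = Pr(X_{t-1}) @ P
dist = np.array([1.0, 0.0])            # start sunny with certainty
for _ in range(10):
    dist = dist @ P
print(dist)                            # converges towards the stationary distribution

# Simulate one trajectory using the memoryless property:
# the next state is drawn using only the current state.
rng = np.random.default_rng(0)
state, path = 0, [0]
for _ in range(20):
    state = int(rng.choice(2, p=P[state]))
    path.append(state)
print(path)
```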

Advanced variants include Markov decision processes (MDPs) for decision-making in stochastic environments, incorporating actions and rewards, and partially observable MDPs (POMDPs) where states are not fully visible. These extend to fields like AI, economics, and robotics.1,7
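
To make the decision-process extension concrete, the sketch below runs value iteration on a tiny, entirely hypothetical two-state, two-action MDP (the transition tensor, rewards, and discount factor are illustrative only), computing state values and a greedy policy.

```python
import numpy as np

# Hypothetical MDP: P[a, s, s'] = Pr(next state s' | state s, action a),
# R[s, a] = immediate reward for taking action a in state s.
P = np.array([[[0.9, 0.1],
               [0.5, 0.5]],
              [[0.2, 0.8],
               [0.1, 0.9]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
gamma = 0.95   # discount factor

# Value iteration: V(s) <- max_a [ R(s, a) + gamma * sum_s' P(s'|s, a) V(s') ]
V = np.zeros(2)
for _ in range(500):
    Q = R + gamma * np.einsum("ast,t->sa", P, V)
    V = Q.max(axis=1)
policy = Q.argmax(axis=1)
print(V, policy)   # optimal state values and the greedy action in each state
```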

Applications Across Domains

  • Finance: Predicting market crashes or stock price movements via transition probabilities estimated from historical data (see the estimation sketch after this list).1,5
  • Healthcare: Modelling disease progression for economic evaluations of interventions.6
  • Machine Learning: Markov chain Monte Carlo (MCMC) for Bayesian inference and sampling complex distributions.3,4
  • Other: Weather forecasting, search algorithms, fault-tolerant systems, and speech processing.1,4,8
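
Picking up the finance application above, the following sketch shows one simple way transition probabilities might be estimated by counting regime-to-regime moves in a hypothetical sequence of daily market regimes; the data and labels are illustrative only.

```python
import numpy as np

# Hypothetical sequence of daily market regimes; purely illustrative data.
states = ["up", "up", "down", "up", "down", "down", "up", "up", "up", "down"]
labels = sorted(set(states))                 # ["down", "up"]
index = {s: i for i, s in enumerate(labels)}

# Count observed transitions, then normalise each row into probabilities.
counts = np.zeros((len(labels), len(labels)))
for current, nxt in zip(states, states[1:]):
    counts[index[current], index[nxt]] += 1
P_hat = counts / counts.sum(axis=1, keepdims=True)

print(labels)
print(P_hat)   # estimated Pr(next regime | current regime); rows sum to one
```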

Key Theorist: Andrey Andreyevich Markov

The preeminent theorist behind the Markov model is Russian mathematician Andrey Andreyevich Markov (1856-1922), who formalised these concepts in probability theory. Born in Ryazan, Russia, Markov studied at St. Petersburg University under Pafnuty Chebyshev, a pioneer in probability. He earned his doctorate in 1884 and became a professor there, though academic rivalries with colleagues like Dmitri Mendeleev led to his resignation in 1905.5

Markov’s seminal work on chains began in 1906, when he extended results such as the law of large numbers to sequences of dependent variables satisfying the memoryless property, publishing key papers between 1906 and 1913. In 1913 he famously applied these chains to the letter sequences of Pushkin’s verse novel Eugene Onegin, demonstrating that such limit laws can hold even when successive observations are not independent – an analysis that remains a canonical early application of a Markov chain. His contributions underpin applications in statistics, physics, and computing, earning the adjective “Markovian”, and his rigorous mathematical framework proved invaluable for modelling real-world random systems, influencing fields from Monte Carlo simulations to AI.2,4,5

Despite personal hardships, including World War I and the Russian Revolution, Markov’s legacy endures through the foundational Markov chains that enable tractable predictions in otherwise intractable systems.2,4

References

1. https://www.techtarget.com/whatis/definition/Markov-model

2. https://en.wikipedia.org/wiki/Markov_model

3. https://www.publichealth.columbia.edu/research/population-health-methods/markov-chain-monte-carlo

4. https://en.wikipedia.org/wiki/Markov_chain

5. https://blog.quantinsti.com/markov-model/

6. https://pubmed.ncbi.nlm.nih.gov/10178664/

7. https://labelstud.io/blog/markov-models-chains-to-choices/

8. https://ntrs.nasa.gov/api/citations/20020050518/downloads/20020050518.pdf

9. https://taylorandfrancis.com/knowledge/Engineering_and_technology/Industrial_engineering_&_manufacturing/Markov_models/

10. https://www.youtube.com/watch?v=d0xgyDs4EBc

"A Markov model is a statistical tool for stochastic (random) processes where the future state depends only on the current state, not the entire past history—this is the Markov Property or "memoryless" property, making them useful for modeling systems like weather, finance, etc." - Term: Markov model

read more
Quote: Arthur Mensch – Mistral CEO

Quote: Arthur Mensch – Mistral CEO

“In real life, enterprises are complex systems, and you can’t solve that with a single abstraction like AGI. AGI, to a large extent, is a north star of ‘I’m going to make the system better over time.'” – Arthur Mensch – Mistral CEO

Arthur Mensch, CEO of Mistral AI, offers a grounded perspective on artificial general intelligence (AGI), emphasising its role as an aspirational guide rather than a practical fix for intricate business challenges. In a recent Big Technology Podcast interview with Alex Kantrowitz on 16 January 2026, Mensch highlighted how enterprises function as complex systems that defy singular abstractions like AGI, positioning it instead as a directional ‘north star’ for incremental system improvements. This view aligns with his longstanding scepticism towards AGI hype, rooted in his self-described strong atheism and belief that such rhetoric equates to ‘creating God’1,2,3,4.

Who is Arthur Mensch?

Born in Paris, Arthur Mensch, aged 31, is a French entrepreneur and AI researcher who co-founded Mistral AI in 2023 alongside former Meta engineers Timothée Lacroix and Guillaume Lample. Before Mistral, Mensch worked as an engineer at Google DeepMind’s Paris lab, gaining expertise in advanced AI models2,4. His venture quickly rose to prominence, positioning Europe as a contender in the AI landscape dominated by US giants. Mistral’s models, including open-weight offerings, have secured partnerships like one with Microsoft in early 2024, while attracting support from the French government and investors such as former digital minister Cédric O2,4. Mensch advocates for a ‘European champion’ in AI to counterbalance cultural influences from American tech firms, stressing that AI shapes global perceptions and values2. He warns against over-reliance on US competitors for AI standards, pushing for lighter European regulations to foster innovation4.

Context of the Quote

Mensch’s statement, made on a podcast discussing real-world AI applications just two days before this post, arrives amid intensifying debate over AGI. It reflects his consistent dismissal of AGI as an unattainable, quasi-religious pursuit, a stance he reiterated in a 2024 New York Times interview: ‘The whole AGI rhetoric is about creating God. I don’t believe in God. I’m a strong atheist. So I don’t believe in AGI’1,2,3,4. Unlike peers forecasting AGI’s imminent arrival, Mensch prioritises practical AI tools that enhance productivity, predicting rapid workforce retraining needs within two years rather than a decade4. He critiques Big Tech’s open-source strategies as competitive ploys and emphasises culturally attuned AI development1,2. This podcast remark builds on those themes, applying them to enterprise complexity where iterative progress trumps hypothetical superintelligence.

Leading Theorists on AGI and Complex Systems

The discourse around AGI and its limits in complex systems draws from pioneering theorists in AI, cybernetics, and systems theory.

  • Alan Turing (1912-1954): Laid AI foundations with his 1950 ‘Computing Machinery and Intelligence’ paper, proposing the Turing Test for machine intelligence. He envisioned machines mimicking human cognition but did not pursue god-like generality, focusing on computable problems.
  • Norbert Wiener (1894-1964): Founder of cybernetics, which studies control and communication in animals and machines. In Cybernetics (1948), Wiener described enterprises and societies as dynamic feedback systems resistant to simple models, prefiguring Mensch’s complexity argument.
  • John McCarthy (1927-2011): Coined ‘artificial intelligence’ in 1956 at the Dartmouth Conference, distinguishing narrow AI from general forms. He advocated high-level programming for generality but recognised real-world messiness.
  • Demis Hassabis: Google DeepMind CEO and Mensch’s former colleague, predicts AGI within years, viewing it as AI matching human versatility across tasks. Hassabis emphasises learning grounded in games and simulation, as with AlphaGo.4
  • Sam Altman and Elon Musk: OpenAI’s Altman warns of AGI risks like ‘subtle misalignments’ while pursuing it as transformative; Musk forecasts superhuman AI by late 2025 and sues OpenAI over profit shifts3,4. Both treat AGI as epochal, contrasting Mensch’s pragmatism.

These figures highlight a divide: early theorists like Wiener stressed systemic complexity, while modern leaders like Hassabis chase generality. Mensch bridges this by favouring commoditised, improvable AI over AGI mythology.

Implications for AI and Enterprise

Mensch’s philosophy underscores AI’s commoditisation, where models like Mistral’s drive efficiency without superintelligence. This resonates with Europe’s push for sovereign AI and the broader commoditisation of foundation models. As enterprises navigate complexity, his ‘north star’ metaphor encourages sustained progress over speculative leaps.

References

1. https://www.businessinsider.com/mistrals-ceo-said-obsession-with-agi-about-creating-god-2024-4

2. https://futurism.com/the-byte/mistral-ceo-agi-god

3. https://www.benzinga.com/news/24/04/38266018/mistral-ceo-shades-openais-sam-altman-says-obsession-with-reaching-agi-is-about-creating-god

4. https://fortune.com/europe/article/mistral-boss-tech-ceos-obsession-ai-outsmarting-humans-very-religious-fascination/

5. https://www.binance.com/en/square/post/6742502031714

6. https://www.christianpost.com/cartoon/musk-to-altman-what-are-tech-moguls-saying-about-ai-and-agi.html?page=5

"In real life, enterprises are complex systems, and you can’t solve that with a single abstraction like AGI. AGI, to a large extent, is a north star of 'I’m going to make the system better over time.'" - Quote: Arthur Mensch

read more
Quote: Andrej Karpathy – Previously Director of AI at Tesla, founding team at OpenAI

Quote: Andrej Karpathy – Previously Director of AI at Tesla, founding team at OpenAI

“Programming is becoming unrecognizable. You’re not typing computer code into an editor like the way things were since computers were invented, that era is over. You’re spinning up AI agents, giving them tasks in English and managing and reviewing their work in parallel.” – Andrej Karpathy – Previously Director of AI at Tesla, founding team at OpenAI

This statement captures a pivotal moment in the evolution of software development, where traditional coding practices are giving way to a new era dominated by AI agents. Spoken by Andrej Karpathy, a visionary in artificial intelligence, it reflects the rapid transformation driven by large language models (LLMs) and autonomous systems. Karpathy’s insight underscores how programming is shifting from manual code entry to orchestrating intelligent agents via natural language, marking the end of an era that began with the earliest computers.

About Andrej Karpathy

Andrej Karpathy is a leading figure in AI, renowned for his contributions to deep learning and computer vision. A founding member of OpenAI in 2015, he played a key role in pioneering advancements in generative models and neural networks. Later, as Director of AI at Tesla, he led the Autopilot vision team, developing autonomous driving technologies that pushed the boundaries of real-world AI deployment. Today, he is building Eureka Labs, an AI-native educational platform. His talks and writings, such as ‘Software Is Changing (Again),’ articulate the shift to ‘Software 3.0,’ where LLMs enable programming in natural language like English.1,2,3

Karpathy’s line struck a nerve because it didn’t describe a distant future. It sounded like a description of what many engineers were already starting to experience in early 2026. The shift he’s talking about is less about writing code and more about orchestrating work—breaking problems into pieces, describing them in plain language, and then supervising agents that actually execute them.

The February Leap: Codex 5.2 and Claude Code

What made this moment feel like a real inflection was the quality jump in early 2026. When tools like ChatGPT Codex 5.2 and Claude Code landed in February, they weren’t just “better autocomplete.” They could stay on task for long, multi-step workflows, recover from errors, and push through the kind of friction that used to send developers back to the keyboard.

Karpathy has described this himself: coding agents that “basically didn’t work before December and basically work since,” with noticeably higher quality, long-term coherence, and tenacity. The February releases crystallised that shift. What used to be a weekend project became something you could kick off, let the agent run for 20–30 minutes, and then review – all while thinking about the next layer of the system rather than the syntax of the current one.

A New Kind of Programming Workflow

The pattern Karpathy is describing is less “pair programming with an autocomplete” and more “manager-style delegation.” You frame a task in English, give the agent context, tools, and constraints, and then let it run multiple steps in parallel – installing dependencies, writing tests, debugging, and even documenting the outcome. You then review outputs, steer the next round, and gradually refine the agent’s instructions.

This isn’t a replacement for engineering judgment. It’s a layer on top: your job becomes decomposing work, defining what success looks like, and deciding which parts to hand off and which to keep close. The “productivity flywheel” turns faster when you can treat the agent as a high-leverage assistant that can keep going while you move up the stack.
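
As a rough sketch of this delegation pattern – using entirely hypothetical names such as Task, run_agent_step, and delegate rather than any particular product’s API – the loop below frames a goal in plain English, lets an agent-like step run several times under stated constraints, and hands the transcript back for human review.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    goal: str                       # the task, framed in plain English
    constraints: list[str]          # guardrails the agent must respect
    transcript: list[str] = field(default_factory=list)

def run_agent_step(task: Task) -> str:
    """Placeholder for a single model/tool step; a real implementation would
    call an LLM and its tools here, respecting task.constraints."""
    return f"worked on: {task.goal}"

def delegate(task: Task, max_steps: int = 5) -> Task:
    # Let the agent take several steps autonomously, then hand back for review.
    for _ in range(max_steps):
        task.transcript.append(run_agent_step(task))
    return task

reviewed = delegate(Task(goal="set up CI for the data pipeline",
                         constraints=["no production credentials", "tests must pass"]))
print(reviewed.transcript)          # the human reviews outputs and steers the next round
```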

Software 3.0, In Practice

Karpathy has long framed this as Software 3.0—the evolution of programming from:

  • Software 1.0: explicit code written in languages like C++ or Python, where the programmer spells out every step.

  • Software 2.0: neural networks trained on data, where the “program” is a dataset and training objective rather than a long list of rules.

  • Software 3.0: natural-language-driven agents that compose systems, debug problems, and manage long-running workflows, while still relying on 1.0 and 2.0 components underneath.

The February releases of Codex 5.2 and Claude Code made Software 3.0 feel tangible. It’s no longer a thought experiment; it’s something practitioners can use today for tasks that are well-specified and easy to verify – infrastructure setup, data pipelines, internal tooling, and boilerplate-heavy workflows.

What This Means for Practitioners

The implication isn’t that “everyone will be a programmer.” It’s that the nature of programming is changing. The most valuable skills are no longer just fluency in a language, but:

  • Decomposing complex work into agent-friendly tasks,

  • Designing interfaces and documentation that models can use effectively,

  • Building feedback loops and guardrails so agents can operate safely, and

  • Knowing when to lean in (complex, under-specified logic) and when to lean out (repetitive, well-structured work).

Karpathy’s point is that the default workflow is no longer “you write code line by line.” The era where the editor is the centre of the universe is ending. Programming is becoming less about keystrokes and more about direction, oversight, and iteration – with AI agents as the new layer of execution in between.

Leading Theorists and Influences

Karpathy’s views draw from pioneers in AI and agents. Ilya Sutskever, his OpenAI co-founder, advanced sequence models like GPT, enabling natural language programming. At Tesla, Ashok Elluswamy and the Autopilot team influenced his emphasis on human-AI loops and ‘autonomy sliders.’ Broader influences include Fei-Fei Li, who supervised Karpathy’s PhD at Stanford, Andrew Ng, whose courses popularised deep learning education, and Yann LeCun, whose convolutional networks underpin vision AI. Recent agentic work echoes Yohei Nakajima’s BabyAGI (2023), an early autonomous agent framework, and Microsoft’s AutoGen for multi-agent systems. Karpathy positions agents as a new ‘consumer of digital information,’ urging infrastructure redesign for LLM autonomy.1,2,3

Implications for the Future

This shift promises unprecedented productivity but demands new skills: fluency across paradigms, agent management, and ‘applied psychology of neural nets.’ As Karpathy notes, ‘everyone is now a programmer’ via English, yet professionals must build for agents – rewriting codebases and creating agent-friendly interfaces. With LLM capabilities surging by late 2025, 2026 heralds a ‘high energy’ phase of industry adaptation.1,4

 

References

1. https://www.businessinsider.com/agentic-engineering-andrej-karpathy-vibe-coding-2026-2

2. https://www.youtube.com/watch?v=LCEmiRjPEtQ

3. https://singjupost.com/andrej-karpathy-software-is-changing-again/

4. https://paweldubiel.com/42l1%E2%81%9D–Andrej-Karpathy-quote-26-Jan-2026-

5. https://www.christopherspenn.com/2024/07/mind-readings-generative-ai-as-a-programming-language/

6. https://www.ycombinator.com/library/MW-andrej-karpathy-software-is-changing-again

7. https://karpathy.ai/tweets.html

 

"Programming is becoming unrecognizable. You’re not typing computer code into an editor like the way things were since computers were invented, that era is over. You're spinning up AI agents, giving them tasks in English and managing and reviewing their work in parallel." - Quote: Andrej Karpathy - Previously Director of AI at Tesla, founding team at OpenAI

read more
