
News and Tools

Business News Select

 

A daily bite-size selection of top business content.

Quote: Abraham Lincoln – American president

“I’m a success today because I had a friend who believed in me and I didn’t have the heart to let him down” – Abraham Lincoln – American president

Abraham Lincoln’s reflection on success reveals a fundamentally relational understanding of achievement – one that stands in stark contrast to the individualistic narratives that often dominate discussions of personal accomplishment. By attributing his success not to his own talents or efforts, but to a friend’s belief in him, Lincoln articulates a philosophy that places human connection and moral accountability at the centre of meaningful achievement.1

The Context of Lincoln’s Philosophy

Lincoln’s words carry particular weight when considered against the trajectory of his own life. Born on 12 February 1809 in a log cabin in Kentucky, he emerged from profound poverty with minimal formal education.1 His early years were marked by repeated failures and setbacks – experiences that might have extinguished ambition in lesser individuals. Yet Lincoln persisted, working as a postmaster, surveyor, shopkeeper, and eventually lawyer, roles that kept him intimately connected to ordinary people and their struggles.1 This grounding in common experience proved formative to his character and his understanding of what success truly meant.

When Lincoln rose to the presidency in 1861, he inherited a nation fractured by the slavery question and on the precipice of civil war. The crucible of the American Civil War would test his definition of success in the most severe manner imaginable. In this context, success could not be measured by personal acclaim or political victory alone. Instead, it demanded the preservation of the Union, the abolition of slavery, and the maintenance of democratic principles – objectives that required extraordinary moral courage and an unwavering commitment to principles despite immense personal and political cost.1

The Philosophy Behind the Quote

Lincoln’s statement reveals several interconnected philosophical commitments. First, it emphasises the role of encouragement and moral support in sustaining perseverance through hardship.1 The friend who believed in him functioned not merely as a cheerleader, but as a source of validation that made continued effort possible when circumstances might otherwise have counselled surrender.

Second, the phrase “I didn’t have the heart to let him down” points to something deeper than mere gratitude. It speaks to accountability, loyalty, and character as the true drivers of achievement.1 For Lincoln, success was not primarily about personal gain or self-realisation; it was about honouring the trust that others had placed in him. This transforms success from an individual metric into a shared responsibility – a covenant between the person striving and those who have invested belief in their potential.

Third, Lincoln’s formulation suggests that success is fundamentally a shared journey, built on belief, responsibility, and the strength drawn from knowing someone stood by you when it mattered most.1 This perspective inverts the typical hierarchy of achievement. Rather than the successful individual standing alone at the summit, Lincoln positions himself as part of a web of mutual obligation and interdependence.

Intellectual Foundations and Related Thought

Lincoln’s philosophy of relational success anticipated themes that would become central to later philosophical and psychological inquiry. His emphasis on the role of belief and encouragement in human development prefigures contemporary research in social psychology and developmental theory, which has consistently demonstrated that external validation and social support are crucial factors in determining whether individuals persist through challenges or abandon their aspirations.

The concept of accountability to others as a motivating force also resonates with virtue ethics traditions, which emphasise character development through relationships and community. Rather than viewing morality and achievement as matters of individual will or rational calculation, virtue ethics – rooted in Aristotelian philosophy – understands human flourishing as inherently social, developed through habituation within communities of practice and mutual accountability.

Lincoln’s thinking also aligns with what later thinkers would call the “relational self” – the understanding that identity and capability are not fixed, autonomous properties but are continually constituted through relationships with others. This stands in contrast to the Enlightenment emphasis on the autonomous, rational individual that dominated much nineteenth-century thought.

The Broader Context of Lincoln’s Thought on Character

This quote sits within a larger body of Lincoln’s reflections on character, responsibility, and human nature. His statement that “Character is like a tree and reputation its shadow” suggests a similar philosophy: what matters is the inner reality of one’s character, not the external appearance of success.6 His observation that “Nearly all men can stand adversity, but if you want to test a man’s character, give him power” reveals his conviction that true character is revealed not in comfortable circumstances but in how one exercises authority and influence.4

Lincoln’s emphasis on the moral dimensions of success also appears in his assertion that “You cannot escape the responsibility of tomorrow by evading it today.”4 This captures his understanding that success requires not merely present effort but a sustained commitment to future obligations-a temporal extension of the accountability he emphasises in the quote about his friend.

The Enduring Relevance

Lincoln’s philosophy of success remains profoundly relevant in contemporary contexts that often celebrate individual achievement and self-made narratives. His insistence that success is relational – that it depends fundamentally on the belief and support of others – offers a corrective to narratives that obscure the social foundations of individual accomplishment. In doing so, it invites reflection on the networks of support, privilege, and mutual obligation that enable any individual’s rise, and on the reciprocal responsibilities that success entails.

The quote also speaks to the question of motivation and meaning. In a culture that often measures success by external markers – wealth, status, power – Lincoln’s definition redirects attention to internal measures: the integrity of honouring trust, the dignity of loyalty, and the satisfaction of living up to the belief others have placed in you. This reframing suggests that the deepest forms of success are those that align personal achievement with relational responsibility.

References

1. https://economictimes.com/us/news/quote-of-the-day-by-abraham-lincoln-im-a-success-today-because-i-had-a-friend-who-believed-in-me-and-i-didnt-have-the-heart-to-let-him-down/articleshow/126639131.cms

2. https://quotefancy.com/quote/2126/Abraham-Lincoln-I-m-a-success-today-because-I-had-a-friend-who-believed-in-me-and-I-didn

3. https://www.goodreads.com/quotes/28587-i-m-a-success-today-because-i-had-a-friend-who

4. https://quotes.lifehack.org/quotes/abraham_lincoln_58626

5. https://mitchmatthews.com/take-a-lesson-from-abraham-lincoln-and-help-someone-else-to-dream-big-and-achieve-more/

6. https://www.nextlevel.coach/blog/abraham-lincoln-quotes-on-leadership

Term: Recursive Language Model (RLM)

“A Recursive Language Model (RLM) is an AI inference strategy where a large language model (LLM) is granted the ability to programmatically interact with and recursively call itself or smaller helper models to solve complex tasks and process extremely long inputs.” – Recursive Language Model (RLM)

A **Recursive Language Model (RLM)** is an innovative inference strategy that empowers large language models (LLMs) to treat input contexts not as static strings but as dynamic environments they can actively explore, decompose, and recursively process.1,3,4 This approach fundamentally shifts AI from passive text processing to active problem-solving, enabling the handling of extremely long inputs, complex reasoning tasks, and structured outputs without being constrained by traditional context window limits.1,6

At its core, an RLM operates within a Python Read-Eval-Print Loop (REPL) environment where the input context is stored as a programmable variable.1,2,3 The model begins with exploration and inspection, using tools like string slicing, regular expressions, and keyword searches to scan and understand the data structure actively rather than passively reading it.1 It then performs task decomposition, breaking the problem into smaller subtasks that fit within standard context windows, with the model deciding the splits based on its discoveries.1,3
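
The exploration step can be sketched in a few lines of Python. The log contents and the search target below are invented for illustration; the point is that the model probes the context variable with cheap string operations instead of ingesting it whole.

```python
import re

# The long input is held as an ordinary Python variable, never fed to the
# model in one piece.
context = "Q1 revenue rose 4%.\n" * 1000 + "ERROR: ledger mismatch in Q3.\n"

# Instead of "reading" everything, the model emits small probes like these:
print(len(context))        # inspect the overall size first
print(context[:80])        # peek at the head via string slicing
hits = [m.start() for m in re.finditer(r"ERROR", context)]  # keyword scan
print(hits[:5])            # jump straight to the relevant region
```

Because each probe returns only a short string, the model can locate the relevant region of a very long context without ever placing most of it in its window.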

The hallmark is recursive self-calls, where the model invokes itself (or smaller helper models) on each subtask, forming a tree of reasoning that aggregates partial results into variables within the REPL.1,4 This is followed by aggregation and synthesis, combining outputs programmatically into lists, tables, or documents, and verification and self-checking through re-runs or cross-checks for reliability.1 Unlike traditional LLMs that process a single forward pass on tokenised input, RLMs grant the model ‘hands and eyes’ to query itself programmatically, such as result = rlm_query(sub_prompt), transforming context from ‘input’ to ‘environment’.1,3
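
A minimal sketch of the recursive pattern, with a toy stand-in for the model: `llm_call` below simply counts lines containing a word, but it occupies the position a real model API would. Everything here is illustrative, not the paper’s implementation.

```python
# Toy recursive decomposition. `llm_call` is a stand-in for a real model API;
# here it counts the lines of a prompt that contain the word "error".
def llm_call(prompt: str) -> int:
    return sum("error" in line for line in prompt.splitlines())

def rlm_query(lines: list, chunk_size: int = 10) -> int:
    # Base case: small enough to hand to the model in one normal-sized call.
    if len(lines) <= chunk_size:
        return llm_call("\n".join(lines))
    # Recursive case: split the context, self-call on each half, aggregate.
    mid = len(lines) // 2
    return rlm_query(lines[:mid], chunk_size) + rlm_query(lines[mid:], chunk_size)

log = ["ok", "error: disk full", "ok"] * 40   # 120 lines, 40 containing "error"
print(rlm_query(log))                         # aggregated answer: 40
```

Splitting on line boundaries keeps each subtask self-contained, and the partial counts aggregate to exactly what a single oversized call would have returned.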

RLMs address key limitations like ‘context rot’ – the degradation of model performance as contexts grow long – and scale to effectively unlimited lengths (over 10 million tokens tested), outperforming baselines by up to 114% on benchmarks without fine-tuning, via prompt engineering alone.2,3,6 They differ from agentic systems in that they decompose the context adaptively rather than executing predefined tasks, and from reasoning models in that they scale through recursive decomposition.6

Key Theorist: Alex L. Zhang and the MIT Origins

The primary theorist behind RLMs is **Alex L. Zhang**, a researcher affiliated with MIT, who co-authored the seminal work proposing RLMs as a general inference framework.3,4,8 In his detailed blog and the arXiv paper ‘Recursive Language Models’ (published around late 2025), Zhang articulates the vision: enabling LLMs to ‘recursively call themselves or other LLMs’ to process unbounded contexts and mitigate degradation.3,4 His implementation uses GPT-5 or GPT-5-mini in a Python REPL, allowing adaptive chunking and recursion at test time.3

Alex L. Zhang’s biography reflects a deep expertise in AI scaling and inference innovations. Active in 2025 through platforms like his GitHub blog (alexzhang13.github.io), he focuses on practical advancements in language model capabilities, particularly long-context handling.3 While specific early career details are sparse in available sources, his work builds on MIT’s disruptive ethos – echoed in proposals like ‘why not let the model read itself?’ – positioning him as a key figure in the 2026 paradigm shift towards recursive AI architectures.1,8 Zhang’s contributions emphasise test-time compute scaling, distinguishing RLMs from mere architectural changes by framing them as a ‘thin wrapper’ around standard LLMs that reframes them as stateful programmes.5

Experimental validations in Zhang’s framework demonstrate RLMs’ superiority, such as dramatically improved accuracy on pairwise comparison tasks (from near-zero to over 58%) and spam classification in massive prompts.2,4 His ideas have sparked widespread discussion, with sources hailing RLMs as ‘the ultimate evolution of AI’ and a ‘game-changer for 2026’.1,2,7

References

1. https://gaodalie.substack.com/p/rlm-the-ultimate-evolution-of-ai

2. https://www.oreateai.com/blog/the-rise-of-recursive-language-models-a-game-changer-for-2026/0fee0de5cdd99689fca9e499f6333681

3. https://alexzhang13.github.io/blog/2025/rlm/

4. https://arxiv.org/html/2512.24601v1

5. https://datasciencedojo.com/blog/what-are-recursive-language-models/

6. https://www.getmaxim.ai/blog/breaking-the-context-window-how-recursive-language-models-handle-infinite-input/

7. https://www.primeintellect.ai/blog/rlm

8. https://www.theneuron.ai/explainer-articles/recursive-language-models-rlms-the-clever-hack-that-gives-ai-infinite-memory

Quote: George Bernard Shaw

“The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore, all progress depends on the unreasonable man.” – George Bernard Shaw – Irish playwright

George Bernard Shaw (1856–1950), the Irish playwright, critic, and Nobel laureate, originated this quote in his 1903 play Man and Superman, specifically in the section “Maxims for Revolutionists.”1,3 Shaw, born in Dublin to a Protestant family amid economic hardship, moved to London in 1876, where he became a leading figure in the Fabian Society—a socialist group advocating gradual reform over revolution—and penned over 60 plays blending wit, philosophy, and social critique.3

Context of the Quote

The line appears in Man and Superman, a philosophical comedy subtitled “A Comedy and a Philosophy,” which explores themes of human evolution, will, and societal progress through the character of John Tanner, a revolutionary dreamer pursuing (and fleeing) the spirited Ann Whitefield.1 In “Maxims for Revolutionists,” Shaw distills provocative ideas on human nature, arguing that progress requires challenging the status quo rather than conforming to it. The “reasonable man” accepts the world as is, ensuring stability but stagnation; the “unreasonable man” imposes his vision, driving innovation despite resistance.1,2,3 Shaw, a Fabian socialist who favored incremental change via education and agitation, used the maxim to celebrate disruptive persistence as essential to societal advancement, echoing his belief in remolding the world “nearer to the heart’s desire.”4

This idea resonated widely: it inspired sales leaders viewing “unreasonableness” as bold action against excuses2; marketers urging challenge over compromise amid populism4; and even Hacker News debates contrasting revolution with evolution5. It also titled John Elkington and Pamela Hartigan’s 2008 book The Power of Unreasonable People, profiling social and environmental entrepreneurs who create markets for change.6

Shaw’s Backstory

Shaw rejected conventional jobs, surviving as a music and theater critic under pseudonyms like “Corno di Bassetto” while writing novels that flopped. His breakthrough came with plays like Mrs. Warren’s Profession (1893), censored for exposing prostitution’s economic roots, and Pygmalion (1913), later adapted into My Fair Lady. A vegetarian, teetotaler, and spelling reformer, Shaw won the 1925 Nobel Prize in Literature but donated the prize money to fund English translations of Swedish literature, notably the works of August Strindberg. Politically, he supported women’s suffrage, Irish Home Rule, and eugenics (later controversial), and endorsed Soviet experiments while critiquing capitalism. At 94, he broke his hip falling from a ladder while pruning a tree, dying soon after. His works, blending Shavian wit with Nietzschean vitality, remain staples for dissecting power, class, and human drive.3,4

Leading Theorists on Unreasonableness, Progress, and Adaptation

Shaw’s maxim draws from and influenced thinkers on innovation, disruption, and social change. Key figures include:

  • Fabian Society Influentials (Shaw’s Circle): Shaw co-founded this gradualist socialist group in 1884, named after Roman general Quintus Fabius Maximus Verrucosus (the “Delayer”), who used attrition over direct battle. Sidney and Beatrice Webb advanced “permeation”—infiltrating elites for reform—while Annie Besant agitated for labor rights. Their motto, “educate, agitate, organize,” embodied reasoned persistence against orthodoxy, mirroring Shaw’s “unreasonable” drive within structured evolution.4

  • Friedrich Nietzsche (1844–1900): The German philosopher’s concepts of the Übermensch (overman) and will to power prefigure Shaw’s rebel, urging transcendence of herd morality. In Thus Spoke Zarathustra (1883–1885), Nietzsche celebrates creators who affirm life against nihilistic conformity, influencing Shaw’s evolutionary Superman.3 (Inferred link via shared themes in Shaw’s play.)

  • Social Entrepreneurs (Modern Applications): Elkington and Hartigan highlight “unreasonable” innovators like Muhammad Yunus (Grameen Bank microfinance) and Wendy Kopp (Teach For America), who built markets defying poverty and education norms. Their 2008 book frames Shaw’s idea as a blueprint for systemic change via audacious markets.6

  • Critics and Counter-Theorists: Hacker News commenter “vph” argues the quote overstates revolution, crediting evolution—incremental, “reasonable” adaptation—for true progress, citing Darwinian biology over rupture.5 Jim Carroll contrasts it with Fabian delay tactics, warning prudence yields modest fruit while unreasonableness risks chaos.4

Shaw’s maxim endures as a rallying cry for visionaries, underscoring that all progress depends on the unreasonable man by forcing adaptation on a resistant world.1,2

References

1. https://www.goodreads.com/quotes/536961-the-reasonable-man-adapts-himself-to-the-world-the-unreasonable

2. https://thesalesmaster.wordpress.com/the-unreasonable-man/

3. https://www.quotationspage.com/quote/692.html

4. https://www.jimcarrollsblog.com/blog/2017/1/4/all-progress-depends-on-the-unreasonable-man-george-bernard-shaws-lessons-on-change

5. https://news.ycombinator.com/item?id=5071748

6. https://en.wikipedia.org/wiki/The_Power_of_Unreasonable_People

Quote: Jensen Huang – Nvidia CEO

“OpenClaw is probably the single most important release of software, probably ever. If you look at… the adoption of it, Linux took some 30 years to reach this level. OpenClaw has now surpassed Linux. It is now the single most downloaded open source software in history, and it took 3 weeks.” – Jensen Huang – Nvidia CEO

In a striking declaration at the Morgan Stanley Technology, Media and Telecom Conference in San Francisco, Nvidia CEO Jensen Huang positioned OpenClaw as a revolutionary force in open source software, outpacing even the legendary Linux kernel in adoption speed and scale.5 This remark underscores Huang’s vision for AI agents – autonomous systems capable of continuous operation and complex tasks – as the next frontier in artificial intelligence, with OpenClaw serving as their foundational framework.5

Context of the Quote

Delivered on 4 March 2026, Huang’s comments came amid discussions on Nvidia’s strategic investments in AI leaders like OpenAI and Anthropic, where he noted that recent deals, including a $30 billion stake in OpenAI, might represent the company’s final major private investments before these firms pursue initial public offerings.1,2,3,5,6 Amid this, Huang pivoted to OpenClaw’s meteoric rise, contrasting its three-week dominance in downloads against Linux’s three-decade journey to similar prominence.5 He highlighted its ‘vertical’ growth on semi-log charts, attributing this to the insatiable demand for AI agents that process a million times more tokens and run perpetually in enterprise environments.5

Who is Jensen Huang?

Jensen Huang co-founded Nvidia in 1993 alongside Chris Malachowsky and Curtis Priem, initially focusing on graphics processing units (GPUs) for gaming and visualisation.4 Under his leadership, Nvidia pivoted decisively to AI and high-performance computing, with breakthroughs like CUDA – a parallel computing platform that locks in developers through its ecosystem of software, interconnects like NVLink, and rack-scale systems.4 Huang’s prescience in positioning GPUs as indispensable for AI training and inference has propelled Nvidia to market leadership, with hyperscalers committing over $660 billion in AI spending for 2026 alone.4 His conference appearances, including this one, blend investment insights with technological evangelism, reinforcing Nvidia’s moat in the AI stack.1,3,4,5

What is OpenClaw?

OpenClaw is an open source framework tailored for AI agents – intelligent, persistent programmes that autonomously handle tasks such as software development, tool creation, and data processing.5 Unlike traditional software, these agents operate continuously, consuming vast token volumes (a measure of computational language processing) and integrating seamlessly into workflows.5 Huang’s team deploys numerous OpenClaw instances internally, automating coding and innovation, which explains the explosive download figures: surpassing Linux – the cornerstone of servers, supercomputers, and embedded systems – in just three weeks.5 This positions OpenClaw not merely as code, but as infrastructure for the agentic AI era, where autonomy scales intelligence.

Backstory: Linux’s Enduring Legacy

To grasp OpenClaw’s feat, consider Linux’s trajectory. Initiated in 1991 by Linus Torvalds as a hobby project, Linux evolved into the world’s most ubiquitous operating system kernel, powering 96% of the top supercomputers, most cloud infrastructure, and Android devices.5 Its adoption spanned three decades, driven by open source principles, community contributions, and enterprise embrace from IBM to Google. Yet, as Huang noted, even this benchmark took 30 years to cement Linux as a download and deployment juggernaut.5 OpenClaw’s subversion of this timeline signals a paradigm shift: AI-driven tools now accelerate adoption via immediate utility in high-stakes domains like enterprise AI.

Leading Theorists in AI Agents and Open Source AI

  • Linus Torvalds: Architect of Linux, Torvalds pioneered collaborative open source development via Git, influencing every major software ecosystem. His ‘benevolent dictator’ governance model ensured Linux’s stability and growth, principles echoed in modern AI repositories.5
  • Ilya Sutskever: Co-founder and former chief scientist of OpenAI, Sutskever was central to scaling the transformer models that underpin modern agents; his work on scaling laws demonstrated how compute and data yield emergent intelligence, paving the way for agentic systems like those powered by OpenClaw.
  • Andrej Karpathy: Former OpenAI and Tesla AI director, Karpathy advanced accessible AI through nanoGPT and LLM training tutorials, theorising agent swarms – multi-agent collaborations – that align with Huang’s vision of continuous, token-hungry OpenClaw deployments.
  • Yohei Nakajima: Creator of BabyAGI, an early agent framework, Nakajima theorised task decomposition and self-improvement loops, concepts central to OpenClaw’s real-world utility in software engineering and beyond.
  • Sam Altman: OpenAI CEO, Altman champions ‘agentic AI’ as the post-ChatGPT phase, where models act independently. Despite tensions in Nvidia partnerships, his firm’s trajectory validates Huang’s infrastructure bets.1,2,3

Huang’s endorsement frames OpenClaw as the synthesis of these ideas: open source velocity meets agentic scale, challenging developers to harness AI’s full potential.

Implications for AI and Open Source

OpenClaw’s ascent heralds a compression of innovation cycles, where AI agents bootstrap their own ecosystems faster than human-led projects like Linux.5 For investors and technologists, it reinforces Nvidia’s centrality: not just in hardware, but in software that cements lock-in.4 As agents proliferate – writing code, optimising systems, and driving revenue – Huang’s words invite scrutiny of whether this marks the true democratisation of AI, or Nvidia’s deepening dominance in the field.1,4,5

References

1. https://www.mexc.com/news/855185

2. https://finviz.com/news/330373/jensen-huang-says-nvidias-30-billion-openai-investment-might-be-the-last-before-ipo

3. https://techcrunch.com/2026/03/04/jensen-huang-says-nvidia-is-pulling-back-from-openai-and-anthropic-but-his-explanation-raises-more-questions-than-it-answers/

4. https://www.thestreet.com/investing/morgan-stanley-changes-its-nvidia-position-for-the-rest-of-2026

5. https://ng.investing.com/news/transcripts/nvidia-at-morgan-stanley-conference-ai-leadership-and-strategic-growth-93CH-2375443

6. https://ppam.com.au/nvidia-ceo-huang-says-30-billion-openai-investment-might-be-the-last/

7. https://www.tmtbreakout.com/p/ms-tmt-conf-nvidias-jensen-nvda-microsofts

Term: Mixture of Experts (MoE)

“Mixture of Experts (MoE) is an efficient neural network architecture that uses multiple specialised sub-models (experts) and a gating network (router) to dynamically select and activate only the most relevant experts for a given input.” – Mixture of Experts (MoE)

This architectural approach divides a large artificial intelligence model into separate sub-networks, each specialising in processing specific types of input data. Rather than activating the entire network for every task, MoE models employ a gating mechanism – often called a router – that intelligently selects which experts should process each input. This selective activation introduces sparsity into the network, meaning only a fraction of the model’s total parameters are used for any given computation.1,3

Core Architecture and Components

The fundamental structure of MoE consists of two essential elements:4

  • Expert networks: Multiple specialised sub-networks, typically implemented as feed-forward neural networks (FFNs), each with its own set of learnable parameters. These experts become skilled at handling specific patterns or types of data during training.1
  • Gating network (router): A trainable mechanism that evaluates each input and determines which expert or combination of experts is best suited to process it. This routing function is computationally efficient, enabling the model to make rapid decisions about expert selection.1,3

In practical implementations, such as the Mixtral 8x7B language model, each layer contains multiple experts – for instance, eight separate feed-forward blocks with 7 billion parameters each. For every token processed, the router selects only a subset of these experts (in Mixtral’s case, two out of eight) to perform the computation, then combines their outputs before passing the result to the next layer.3
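
The routing step can be illustrated with a toy example in plain Python. The scalar “experts” stand in for full feed-forward blocks, and the router logits are made up; this is a sketch of the mechanism, not any production implementation.

```python
import math

# Toy top-2 gating over 8 "experts". Each expert is a scalar function here,
# standing in for a full feed-forward block.
experts = [lambda x, s=s: s * x for s in range(1, 9)]

def route(x, router_logits, k=2):
    # Softmax over the router's logits gives one weight per expert.
    exps = [math.exp(l) for l in router_logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Keep only the top-k experts; the rest stay inactive (sparsity).
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    # Renormalise the surviving weights and mix the selected experts' outputs.
    denom = sum(probs[i] for i in top)
    return sum((probs[i] / denom) * experts[i](x) for i in top)

print(route(1.0, [0.1, 2.0, 0.0, 0.0, 1.5, 0.0, 0.0, 0.0]))
```

Only two of the eight experts run for this token; the other six contribute nothing to the computation or the output, which is exactly the sparsity the architecture trades on.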

How MoE Achieves Efficiency

MoE models leverage conditional computation to reduce computational burden without sacrificing model capacity.3 This approach enables several efficiency gains:

  • Models can scale to billions of parameters whilst maintaining manageable inference costs, since not all parameters are activated for every input.1,3
  • Training can occur with significantly less compute, allowing researchers to either reduce training time or expand model and dataset sizes.4
  • Experts can be distributed across multiple devices through expert parallelism, enabling efficient large-scale deployments.1

The gating mechanism ensures that frequently selected experts receive continuous updates during training, improving their performance, whilst load balancing mechanisms attempt to distribute computational work evenly across experts to prevent bottlenecks.1
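
One widely used balancing device (from the Switch Transformer line of work, not something specified in the sources above) is an auxiliary loss that penalises routing collapse. A pure-Python sketch:

```python
# Auxiliary load-balancing loss: for each expert, multiply the fraction of
# tokens routed to it by its mean router probability, sum over experts, and
# scale by the expert count. The loss is minimised when routing is uniform.
def load_balance_loss(assignments, router_probs, num_experts):
    n = len(assignments)
    loss = 0.0
    for e in range(num_experts):
        frac_tokens = sum(1 for a in assignments if a == e) / n
        mean_prob = sum(p[e] for p in router_probs) / n
        loss += frac_tokens * mean_prob
    return num_experts * loss

# Balanced: 4 tokens spread over 4 experts with uniform router probabilities.
uniform = [[0.25] * 4 for _ in range(4)]
print(load_balance_loss([0, 1, 2, 3], uniform, 4))   # 1.0, the minimum

# Collapsed: every token (and most probability mass) goes to expert 0.
skewed = [[0.7, 0.1, 0.1, 0.1] for _ in range(4)]
print(load_balance_loss([0, 0, 0, 0], skewed, 4))    # 2.8, penalising collapse
```

Adding a small multiple of this term to the training objective nudges the router towards spreading tokens evenly, addressing exactly the bottleneck described above.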

Historical Development and Key Theorist: Noam Shazeer

Noam Shazeer stands as the primary architect of modern MoE systems in deep learning. In 2017, Shazeer and colleagues – including Geoffrey Hinton and Google’s Jeff Dean – introduced the Sparsely-Gated Mixture-of-Experts Layer for recurrent neural language models.1,4 This seminal work fundamentally transformed how researchers approached scaling neural networks.

Shazeer’s contribution was revolutionary because it reintroduced the mixture of experts concept, which had existed in earlier machine learning literature, into the deep learning era. His team scaled this architecture to a 137-billion-parameter LSTM model, demonstrating that sparsity could maintain very fast inference even at massive scale.4 Although this initial work focused on machine translation and encountered challenges such as high communication costs and training instabilities, it established the theoretical and practical foundation for all subsequent MoE research.4

Shazeer’s background as a researcher at Google positioned him at the intersection of theoretical machine learning and practical systems engineering. His work exemplified a crucial insight: that not all parameters in a neural network need to be active simultaneously. This principle has since become foundational to modern large language model design, influencing architectures used by leading AI organisations worldwide. The Sparsely-Gated Mixture-of-Experts Layer introduced the trainable gating network concept that remains central to MoE implementations today, enabling conditional computation that balances model expressiveness with computational efficiency.1

Applications and Performance

MoE architectures have demonstrated faster training and comparable or superior performance to dense language models on many benchmarks, particularly in multi-domain tasks where different experts can specialise in different knowledge areas.1 Applications span natural language processing, computer vision, and recommendation systems.2

Challenges and Considerations

Despite their advantages, MoE systems present implementation challenges. Load balancing remains critical-when experts are distributed across multiple devices, uneven expert selection can create memory and computational bottlenecks, with some experts handling significantly more tokens than others.1 Additionally, distributed training complexity and the need for careful tuning to maintain stability and efficiency require sophisticated engineering approaches.1

References

1. https://neptune.ai/blog/mixture-of-experts-llms

2. https://www.datacamp.com/blog/mixture-of-experts-moe

3. https://www.ibm.com/think/topics/mixture-of-experts

4. https://huggingface.co/blog/moe

5. https://newsletter.maartengrootendorst.com/p/a-visual-guide-to-mixture-of-experts

6. https://www.youtube.com/watch?v=sYDlVVyJYn4

7. https://arxiv.org/html/2503.07137v1

8. https://cameronrwolfe.substack.com/p/moe-llms

Term: AI harness

“A harness (often called an agent harness or agentic harness) is an external software framework that wraps around a Large Language Model (LLM) to make it functional, durable, and capable of taking actions in the real world.” – AI harness

An AI harness is the external software framework that wraps around a Large Language Model (LLM) to extend its capabilities beyond text generation, enabling it to function as a persistent, tool-using agent capable of taking real-world actions. Without a harness, an LLM operates in isolation – processing a single prompt and generating a response with no memory of previous interactions and no ability to interact with external systems. The harness solves this fundamental limitation by providing the infrastructure necessary for autonomous, multi-step reasoning and execution.

Core Functions and Architecture

An AI harness performs several critical functions that transform a static language model into a dynamic agent. Memory management addresses one of the most significant constraints of raw LLMs: their fixed context windows and lack of persistent memory. Standard language models begin each session with no recollection of previous interactions, forcing them to operate without historical context. The harness implements memory systems (persistent context logs, summaries, and external knowledge stores) that carry information across sessions, enabling the agent to learn from past experiences and maintain continuity across multiple interactions.

Tool execution and external action represents another essential function. Language models alone can only produce text; they cannot browse the web, execute code, query databases, or generate images. The harness monitors the model’s output for special tool-call commands and executes those operations on the model’s behalf. When a tool call is detected, the harness pauses text generation, executes the requested operation in the external environment (such as performing a web search or running code in a sandbox), and feeds the results back into the model’s context. This mechanism effectively gives the model “hands and eyes,” transforming textual intentions into tangible real-world actions.
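The tool-call loop described above can be sketched in a few lines. The `TOOL:`/`OBSERVATION:` message format, the tool registry, and the stub model below are hypothetical stand-ins; real harnesses use richer protocols, proper sandboxing, and an actual LLM.

```python
import json

# Hypothetical tool registry; real harnesses expose web search, code sandboxes, etc.
# (eval here is a toy calculator only; production code would sandbox execution.)
TOOLS = {"calculator": lambda expr: str(eval(expr, {"__builtins__": {}}))}

def run_harness(model, prompt, max_steps=5):
    """Minimal agent loop: generate, detect a TOOL:{...} line, execute it,
    feed the observation back into the context, repeat until a final answer."""
    context = prompt
    for _ in range(max_steps):
        output = model(context)
        if output.startswith("TOOL:"):                    # model requested an action
            call = json.loads(output[len("TOOL:"):])
            result = TOOLS[call["name"]](call["input"])   # harness acts on its behalf
            context += f"\n{output}\nOBSERVATION: {result}"  # close the loop
        else:
            return output                                 # plain text = final answer
    return "max steps exceeded"

# Stub model: asks for the calculator once, then answers from the observation.
def stub_model(context):
    if "OBSERVATION:" not in context:
        return 'TOOL:{"name": "calculator", "input": "17 * 23"}'
    return "The product is " + context.rsplit("OBSERVATION: ", 1)[1]

print(run_harness(stub_model, "What is 17 x 23?"))   # → The product is 391
```

The essential pattern is that text generation pauses, the harness performs the side effect, and the result re-enters the model's context, exactly the "hands and eyes" mechanism described above.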

Context management and orchestration ensure that information flows efficiently between the model and its environment. The harness determines what information is provided to the model at each step, managing the transient prompt whilst maintaining a persistent task log separate from the model’s immediate context. This separation is crucial for long-running projects: even if an AI agent instance stops and a new one begins later with no memory in the raw LLM, the project itself retains memory through files and logs maintained by the harness.

Modular Design and Components

Contemporary harness architectures increasingly adopt modular designs that decompose agent functionality into interchangeable components. Research from ICML 2025 on “General Modular Harness for LLM Agents in Multi-Turn Gaming Environments” demonstrates this approach through three core modules: perception, which processes both low-resolution grid environments and visually complex images; memory, which stores recent trajectories and synthesises self-reflection signals enabling agents to critique past moves and adjust future plans; and reasoning, which integrates perceptual embeddings and memory traces to produce sequential decisions. This modular structure allows developers to toggle components on and off, systematically analysing each module’s contribution to overall performance.

Performance Impact and Practical Benefits

The empirical benefits of harness implementation are substantial. Models operating within a harness achieve significantly higher task success rates than un-harnessed baselines. In gaming environments, an AI with a memory and perception harness wins more games than the same AI without one. In coding tasks, an AI with a harness that runs and debugs its own code completes programming tasks that a standalone LLM would fail due to runtime errors. The harness essentially compensates for the model’s inherent weaknesses (lack of persistence, inability to access external knowledge, and a propensity for errors), resulting in markedly improved real-world performance.

Perhaps most significantly, harnesses extend what an AI can accomplish without requiring model retraining. Want an LLM to handle images? Integrate a vision module or image captioning API into the harness. Need mathematical reasoning or complex logic? Add the appropriate tool or module. This extensibility makes harnesses economically valuable: two products using identical underlying LLMs can deliver vastly different user experiences based on the quality and sophistication of their respective harnesses.

Evolution and Strategic Importance

As AI capabilities have advanced, harness design has become increasingly critical to product success. The harness landscape is dynamic and evolving: popular agents like Manus have undergone five complete re-architectures since March 2024, and even Anthropic continuously refines Claude Code’s agent harness as underlying models improve. This reflects a fundamental principle: as models become more capable, harnesses must be continually simplified, stripping away scaffolding and crutches that are no longer necessary.

The distinction between orchestration and harness is worth noting. Orchestration serves as the “brain” of an AI system-determining the overall workflow and decision logic-whilst the harness functions as the “hands and infrastructure,” executing those decisions and managing the technical details. Both are critical for complex AI agents, and improvements in either dimension can dramatically enhance real-world performance.

Related Theorist: Allen Newell and Cognitive Architecture

Allen Newell (1927-1992) was an American cognitive scientist and computer scientist whose theoretical framework profoundly influences contemporary harness design. Newell’s “Unified Theories of Cognition” (UTC), published in 1990, proposed that human cognition operates through integrated systems of perception, memory, and reasoning-three faculties that work in concert to enable intelligent behaviour. This theoretical foundation directly inspired the modular harness architectures now prevalent in AI research.

Newell’s career spanned the emergence of cognitive science as a discipline. Working initially at the RAND Corporation and later at Carnegie Mellon University, he collaborated with Herbert Simon to develop the “Physical Symbol System Hypothesis,” which posited that physical symbol systems (such as computers) could exhibit intelligent behaviour through the manipulation of symbols according to rules. This work earned Newell and Simon the Turing Award in 1975, recognising their foundational contributions to artificial intelligence.

Newell’s UTC represented his mature synthesis of decades of research into human problem-solving, learning, and memory. Rather than treating perception, memory, and reasoning as separate cognitive modules, Newell argued they must be understood as deeply integrated systems operating within a unified cognitive architecture. This insight proved prescient: modern AI harnesses implement precisely this integration, with perception modules processing environmental information, memory modules storing and retrieving relevant context, and reasoning modules synthesising these inputs into coherent action sequences.

The connection between Newell’s theoretical work and contemporary harness design is not merely coincidental. Researchers explicitly cite Newell’s framework when justifying modular harness architectures, recognising that his cognitive science insights provide a principled foundation for engineering AI systems. In this sense, Newell’s work from the 1980s and early 1990s anticipated the architectural requirements that AI engineers would discover empirically decades later when attempting to build capable, persistent, tool-using agents.

References

1. https://parallel.ai/articles/what-is-an-agent-harness

2. https://developer.harness.io/docs/platform/harness-aida/aida-overview

3. https://arxiv.org/html/2507.11633v1

4. https://hugobowne.substack.com/p/ai-agent-harness-3-principles-for

5. https://dxwand.com/boost-business-ai-harness-llms-nlp-nlu/

6. https://www.anthropic.com/engineering/effective-harnesses-for-long-running-agents

"A harness (often called an agent harness or agentic harness) is an external software framework that wraps around a Large Language Model (LLM) to make it functional, durable, and capable of taking actions in the real world." - Term: AI harness

read more
Quote: Clayton Christensen

Quote: Clayton Christensen

“When I have my interview with God, our conversation will focus on the individuals whose self-esteem I was able to strengthen, whose faith I was able to reinforce, and whose discomfort I was able to assuage – a doer of good, regardless of what assignment I had. These are the metrics that matter in measuring my life.” – Clayton Christensen – Author

Clayton M. Christensen, the renowned Harvard Business School professor and author, encapsulated a lifetime of reflection in this poignant statement on true success. Drawn from his book How Will You Measure Your Life?, published in 2012, the quote emerges from Christensen’s classroom exercise in which he challenged students to confront life’s deepest questions: How can I ensure happiness in my career? How can I nurture enduring family relationships? And how can I avoid moral pitfalls that lead to downfall?1,2,3

Christensen’s Life and Intellectual Journey

Born in 1952 in Salt Lake City, Utah, Christensen rose from humble roots to become one of the most influential management thinkers of his generation. A devout member of The Church of Jesus Christ of Latter-day Saints, he infused his work with ethical considerations, often drawing parallels between business strategy and personal integrity. He earned a DBA from Harvard Business School in 1992, where he later became the Kim B. Clark Professor of Business Administration.3,7

Christensen’s breakthrough came with The Innovator’s Dilemma (1997), which introduced the theory of disruptive innovation – the idea that established companies often fail by focusing on high-margin customers while upstarts target overlooked markets, eventually upending incumbents. This concept, praised by Steve Jobs as deeply influential, transformed how leaders view competition and change.2 His ideas permeated industries, from technology to healthcare, earning him accolades like the Economist Innovation Award.

Tragedy struck in 2010 when Christensen was diagnosed with follicular lymphoma, prompting deeper introspection. Amid treatments, he expanded his final HBS class into How Will You Measure Your Life?, co-authored with James Allworth and Karen Dillon. The book applies rigorous business theories – like marginal cost analysis and resource allocation – to life’s choices, warning against ‘just this once’ compromises that erode integrity over time.3,7 Christensen passed away in 2020, but his emphasis on relationships over achievements endures.

Context of the Quote in ‘How Will You Measure Your Life?’

The quote anchors the book’s core thesis: conventional metrics like wealth or status pale against the impact on others’ lives. Christensen recounted posing these questions to ambitious MBAs, urging them to invest deliberately in relationships, as career peaks fade but personal bonds provide lasting happiness.1,4 He illustrated pitfalls through cases like Nick Leeson, whose minor ethical lapse at Barings Bank spiralled into fraud and ruin, underscoring that 100% adherence to principles is easier than 98%.3

In sections on career and relationships, Christensen advised balancing ambition with family time, using ‘jobs to be done’ theory: people ‘hire’ you for specific roles, like parents modelling values or partners providing support. At life’s end, he argued, success lies in friends who console you, children embodying your values, and a resilient marriage – not accolades.4,5

Leading Theorists on Life Priorities and Fulfilment

Christensen built on a lineage of thinkers prioritising inner metrics over external gains:

  • Viktor Frankl, Holocaust survivor and author of Man’s Search for Meaning (1946), posited that fulfilment stems from purpose and love, not pleasure – influencing Christensen’s focus on meaningful impact.3
  • Abraham Maslow‘s hierarchy of needs culminates in self-actualisation, where self-esteem and relationships foster peak experiences, aligning with Christensen’s relational emphasis.4
  • Martin Seligman, father of positive psychology, advocated measuring life via PERMA (Positive Emotion, Engagement, Relationships, Meaning, Accomplishment), reinforcing that relationships yield the highest wellbeing.2
  • Daniel Kahneman, Nobel laureate, distinguished ‘experiencing self’ (daily highs) from ‘remembering self’ (enduring memories), cautioning that peak achievements matter less retrospectively than sustained bonds.3

These theorists converge on a truth Christensen championed: true leadership – in business or life – measures by upliftment of others, not personal ascent. His framework equips readers to audit priorities, ensuring actions align with eternal metrics of good.1,7

References

1. https://www.ricklindquist.com/notes/how-will-you-measure-your-life

2. https://www.porchlightbooks.com/products/how-will-you-measure-your-life-clayton-m-christensen-9780062102416

3. https://www.library.hbs.edu/working-knowledge/clayton-christensens-how-will-you-measure-your-life

4. https://www.youtube.com/watch?v=qCX6vAvglAI

5. https://chools.in/wp-content/uploads/2021/03/HOW-WILL-YOU-MEASURE-YOUR-LIFE.pdf

6. https://www.deseretbook.com/product/5083635.html

7. https://hbr.org/2010/07/how-will-you-measure-your-life

8. https://www.barnesandnoble.com/w/how-will-you-measure-your-life-clayton-m-christensen/1111558923

Term: Loss function

“A loss function, also known as a cost function, is a mathematical function that quantifies the difference between a model’s predicted output and the actual ‘ground truth’ value for a given input.” – Loss function

A loss function is a mathematical function that quantifies the discrepancy between a model’s predicted output and the actual ground truth value for a given input. Also referred to as an error function or cost function, it serves as the objective function that machine learning and artificial intelligence algorithms seek to minimise during training.

Core Purpose and Function

The loss function operates as a feedback mechanism within machine learning systems. When a model makes a prediction, the loss function calculates a numerical value representing the prediction error: the gap between what the model predicted and what actually occurred. This error quantification is fundamental to the learning process. During training, algorithms such as backpropagation use the gradient of the loss function with respect to the model’s parameters to iteratively adjust weights and biases, progressively reducing the loss and improving predictive accuracy.

The relationship between loss function and cost function warrants clarification: whilst these terms are often used interchangeably, a loss function technically applies to a single training example, whereas a cost function typically represents the average loss across an entire dataset or batch. Both, however, serve the same essential purpose of guiding model optimisation.
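A minimal illustration of that distinction, using squared error (an illustrative sketch, not from the cited sources): the loss scores one example, and the cost averages that loss over a batch.

```python
def squared_loss(y_true, y_pred):
    """Loss: the error for a single training example."""
    return (y_true - y_pred) ** 2

def cost(y_true, y_pred):
    """Cost: the same loss averaged over a whole batch or dataset."""
    return sum(squared_loss(t, p) for t, p in zip(y_true, y_pred)) / len(y_true)

assert squared_loss(3.0, 2.5) == 0.25          # one example
assert cost([3.0, 1.0], [2.5, 0.0]) == 0.625   # (0.25 + 1.0) / 2
```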

Key Roles in Machine Learning

Loss functions fulfil several critical functions within machine learning systems:

  • Performance measurement: Loss functions provide a quantitative metric to evaluate how well a model’s predictions align with actual results, enabling objective assessment of model effectiveness.
  • Optimisation guidance: By calculating prediction error, loss functions direct the learning algorithm to adjust parameters iteratively, creating a clear path toward improved predictions.
  • Bias-variance balance: Effective loss functions help balance model bias (oversimplification) and variance (overfitting), essential for generalisation to new, unseen data.
  • Training signal: The gradient of the loss function provides the signal by which learning algorithms update model weights during backpropagation.

Common Loss Function Types

Different machine learning tasks require different loss functions. For regression problems involving continuous numerical predictions, Mean Squared Error (MSE) and Mean Absolute Error (MAE) are widely employed. The MAE formula is:

\text{MAE} = \frac{1}{n} \sum_{i=1}^{n} \left| y_i - \hat{y}_i \right|

For classification tasks dealing with categorical data, Binary Cross-Entropy (also called Log Loss) is commonly used for binary classification problems. The formula is:

L(y, f(x)) = -[y \cdot \log(f(x)) + (1 - y) \cdot \log(1 - f(x))]

where y represents the true binary label (0 or 1) and f(x) is the predicted probability of the positive class.
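A quick numerical check of both formulas with numpy (the example values are illustrative only):

```python
import numpy as np

def mae(y, y_hat):
    # Mean Absolute Error: average magnitude of the residuals
    return np.mean(np.abs(y - y_hat))

def binary_cross_entropy(y, p):
    # Log loss for true labels y in {0, 1} and predicted probabilities p
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

y_reg = np.array([3.0, -0.5, 2.0])
pred = np.array([2.5, 0.0, 2.0])
print(mae(y_reg, pred))                 # → 0.333... (residuals 0.5, 0.5, 0.0)

y_cls = np.array([1.0, 0.0, 1.0])
p = np.array([0.9, 0.1, 0.8])
print(binary_cross_entropy(y_cls, p))   # ≈ 0.145: confident, correct predictions give low loss
```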

For multi-class classification, Categorical Cross-Entropy extends this concept. Additionally, Hinge Loss is particularly useful in binary classification where clear separation between classes is desired:

L(y, f(x)) = \max(0, 1 - y \cdot f(x))

The Huber Loss function provides robustness to outliers by combining quadratic and linear components, switching between them based on a threshold parameter delta (δ).
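Both the margin-based and the robust loss are short to state in code. A sketch of hinge loss and Huber loss (with δ defaulting to 1; illustrative, not from the cited sources):

```python
import numpy as np

def hinge(y, f):
    # y in {-1, +1}; zero loss once the margin y * f reaches 1
    return np.maximum(0.0, 1.0 - y * f)

def huber(residual, delta=1.0):
    # Quadratic near zero, linear beyond the threshold delta: robust to outliers
    a = np.abs(residual)
    return np.where(a <= delta, 0.5 * a**2, delta * (a - 0.5 * delta))

assert hinge(1, 2.0) == 0.0       # correct and beyond the margin: no penalty
assert hinge(1, 0.5) == 0.5       # correct but inside the margin: still penalised
assert huber(0.5) == 0.125        # small residual: quadratic, 0.5 * 0.5**2
assert huber(10.0) == 9.5         # large residual: linear, grows slower than squared error
```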

Related Strategy Theorist: Vladimir Vapnik

Vladimir Naumovich Vapnik (born 1936) stands as a foundational figure in the theoretical underpinnings of loss functions and machine learning optimisation. A Soviet and later American computer scientist, Vapnik’s work on Statistical Learning Theory and Support Vector Machines (SVMs) fundamentally shaped how the machine learning community understands loss functions and their role in model generalisation.

Vapnik’s most significant contribution to loss function theory came through his development of Support Vector Machines in the 1990s, where he introduced the hinge loss function, a loss specifically designed to maximise the margin between classes. This represented a paradigm shift in thinking about loss functions: rather than simply minimising prediction error, Vapnik’s approach emphasised confidence and margin, ensuring models were not merely correct but confidently correct by a specified distance.

Born in the Soviet Union, Vapnik studied mathematics at Uzbek State University in Samarkand before joining the Institute of Control Sciences in Moscow, where he conducted groundbreaking research on learning theory. His theoretical framework, Vapnik-Chervonenkis (VC) theory, provided mathematical foundations for understanding how models generalise from training data to unseen examples, a concept intimately connected to loss function design and selection.

Vapnik’s insight that different loss functions encode different assumptions about what constitutes “good” model behaviour proved revolutionary. His work demonstrated that the choice of loss function directly influences not just training efficiency but the model’s ability to generalise. This principle remains central to modern machine learning: data scientists select loss functions strategically to encode domain knowledge and desired model properties, whether robustness to outliers, confidence in predictions, or balanced handling of imbalanced datasets.

Vapnik’s career spanned decades of innovation, including his later work on transductive learning and learning using privileged information. His theoretical contributions earned him numerous accolades and established him as one of the most influential figures in machine learning science. His emphasis on understanding the mathematical foundations of learning, particularly through the lens of loss functions and generalisation bounds, continues to guide contemporary research in deep learning and artificial intelligence.

Practical Significance

The selection of an appropriate loss function significantly impacts model performance and training efficiency. Data scientists carefully consider different loss functions to achieve specific objectives: reducing sensitivity to outliers, better handling noisy data, minimising overfitting, or improving performance on imbalanced datasets. The loss function thus represents not merely a technical component but a strategic choice that encodes domain expertise and learning objectives into the machine learning system itself.

References

1. https://www.datacamp.com/tutorial/loss-function-in-machine-learning

2. https://h2o.ai/wiki/loss-function/

3. https://c3.ai/introduction-what-is-machine-learning/loss-functions/

4. https://www.geeksforgeeks.org/machine-learning/ml-common-loss-functions/

5. https://arxiv.org/html/2504.04242v1

6. https://www.youtube.com/watch?v=v_ueBW_5dLg

7. https://www.ibm.com/think/topics/loss-function

8. https://en.wikipedia.org/wiki/Loss_function

9. https://www.datarobot.com/blog/introduction-to-loss-functions/

"A loss function, also known as a cost function, is a mathematical function that quantifies the difference between a model's predicted output and the actual 'ground truth' value for a given input." - Term: Loss function

read more
Quote: Clayton Christensen

Quote: Clayton Christensen

“The only metrics that will truly matter to my life are the individuals whom I have been able to help, one by one, to become better people.” – Clayton Christensen – Author

Clayton Christensen’s assertion that personal impact, measured through the individuals we help develop, represents the truest metric of a life well-lived stands as a profound counterpoint to the achievement-obsessed culture that dominates modern professional life. This reflection emerges not from abstract philosophy but from decades of observing how talented, ambitious people construct meaning, and from Christensen’s own wrestling with what constitutes genuine success.

The Context: A Harvard Professor’s Reckoning

Christensen, the Kim B. Clark Professor of Business Administration at Harvard Business School and author of the seminal work The Innovator’s Dilemma, developed this perspective through direct engagement with some of the world’s most driven individuals: MBA students at one of the planet’s most competitive institutions. Each year, he posed three deceptively simple questions to his students on the final day of class: How can I be sure I’ll be happy in my career? How can I be sure my relationships with family become an enduring source of happiness? How can I be sure I’ll stay out of jail?

These questions, which form the foundation of his 2012 book How Will You Measure Your Life? (co-authored with James Allworth and Karen Dillon), reveal Christensen’s conviction that conventional metrics of success (wealth, title, achievement) systematically mislead us about what actually generates lasting fulfilment. The book, published by Harper Business, synthesises decades of academic research with personal narrative to argue that well-tested theories from business and psychology can illuminate the path to a meaningful life.

The Danger of Marginal Thinking

Central to Christensen’s argument is his critique of how marginal-cost analysis, a cornerstone of business decision-making, infiltrates personal life with corrosive consequences. He illustrates this through the cautionary tale of Nick Leeson, the trader whose “just this once” decisions ultimately destroyed Barings Bank, a 233-year-old institution, and landed him in prison. Leeson’s descent began with a single small error, hidden in a little-scrutinised trading account. Each subsequent deception seemed a marginal step, yet the cumulative effect was catastrophic.

Christensen argues that we unconsciously apply this same logic to our personal and moral lives. A voice whispers: “I know most people shouldn’t do this, but in this particular extenuating circumstance, just this once, it’s okay.” The price appears alluringly low. Yet life, Christensen observes, presents an endless stream of extenuating circumstances. Once we justify crossing a boundary once, nothing prevents us from crossing it again. The boundary itself, our personal moral line, loses its power.

This insight directly connects to his central claim about measuring life through human development. If we measure success by quarterly results, promotions, or wealth accumulation, we unconsciously permit ourselves small moral compromises that seem justified by marginal analysis. But if we measure success by the individuals we’ve genuinely helped become better people, our decision-making framework shifts entirely. Helping someone develop requires consistency, integrity, and long-term commitment-qualities incompatible with marginal thinking.

The Theoretical Foundations

Christensen’s perspective draws on several streams of organisational and psychological theory. His work on innovation theory, developed through The Innovator’s Dilemma (which Steve Jobs described as “deeply influencing” Apple’s strategy), emphasises how organisations often fail by optimising for present circumstances rather than building capabilities for future challenges. This same principle applies to personal development: we often optimise for immediate achievement rather than building the relational and moral capabilities that sustain meaning across decades.

The book also engages with motivation theory, particularly the distinction between intrinsic and extrinsic motivators. Research in psychology, notably the work of Edward Deci and Richard Ryan on self-determination theory, demonstrates that extrinsic rewards (money, status, recognition) provide temporary satisfaction but rarely generate enduring happiness. Intrinsic motivators (autonomy, mastery, and purpose) create deeper engagement and fulfilment. Christensen argues that helping others develop satisfies all three intrinsic motivators: you exercise agency in how you mentor, you develop mastery in your field, and you connect to a purpose beyond yourself.

Additionally, Christensen draws on research in positive psychology and life satisfaction studies. Longitudinal research, including the Harvard Study of Adult Development (which tracked individuals across decades), consistently demonstrates that the quality of relationships, not career achievement or wealth, predicts life satisfaction and longevity. Christensen synthesises this research with business theory to argue that the mechanism through which relationships generate happiness is precisely through the mutual development of the individuals involved.

The Concept of Being “Hired”

A distinctive element of Christensen’s framework is his concept of being “hired” to do a job in someone’s life. Rather than viewing relationships as passive connections, he suggests we should understand them as ongoing engagements where others, implicitly or explicitly, hire us to fulfil specific roles: mentor, example, confidant, supporter. This reframing transforms how we approach relationships. If your child has hired you to be an example of integrity, your daily choices take on different weight. If your colleague has hired you to help them develop their capabilities, your mentoring becomes a central measure of your professional contribution.

This concept echoes the work of Clayton Alderfer and other organisational psychologists who emphasise the importance of role clarity and psychological contracts in generating satisfaction. But Christensen extends it beyond the workplace into all human relationships, suggesting that clarity about what role we’re playing-and commitment to excellence in that role-generates both happiness for ourselves and genuine development for others.

The Paradox of Achievement

Christensen acknowledges a subtle paradox: those with strong achievement drives (precisely the individuals most likely to attend Harvard Business School) face particular risk. Their ambition, which drives professional success, can simultaneously blind them to what generates lasting happiness. He recounts a formative moment from his own youth: his university basketball team reached a championship final that was scheduled for a Sunday, and, having committed never to play on the Sabbath, he told his coach he could not play, even though the team needed him. They won anyway without him. Yet he later recognised this decision as among the most important of his life: not because of the game’s outcome, but because it established a boundary, proving to himself that holding to a principle 100% of the time is easier than holding to it 98% of the time.

This reflects research on what psychologists call the “arrival fallacy”: the discovery that achieving long-sought goals often fails to generate the anticipated happiness. Christensen argues this occurs because achievement-focused individuals have internalised the wrong metric. They measure success by what they accomplish, when they should measure it by who they’ve helped become.

Implications for Leadership and Mentorship

For leaders and managers, Christensen’s framework suggests a radical reorientation of purpose. Rather than viewing your role primarily through the lens of organisational performance, financial results, or strategic objectives, you might ask: which individuals have I genuinely helped develop? Have I created conditions where they’ve grown in capability, confidence, and character? This doesn’t negate the importance of business results; Christensen emphasises that a career provides stability and resources to give to others. But it reorders priorities.

This perspective aligns with contemporary research on authentic leadership and servant leadership, which emphasises that leaders generate the greatest impact, both organisational and personal, when they prioritise the development of those they lead. Research by scholars like James Kouzes and Barry Posner demonstrates that leaders remembered as transformational are those who invested in developing others, not merely those who achieved impressive financial results.

The Long View

Christensen’s metric requires patience and a long temporal horizon. You won’t know if you’ve raised a good son or daughter until twenty years after the bulk of your parenting work. You won’t know if you have true friends until they call to console you during genuine hardship. You won’t know if you’ve built an enduring marriage until you’ve navigated the challenges that cause many relationships to fracture. This stands in sharp contrast to the quarterly earnings reports, annual performance reviews, and immediate feedback loops that dominate modern professional life.

Yet this long view, Christensen argues, is precisely what liberates us from marginal thinking. When you recognise that the true measure of your life will be assessed across decades, the temptation to compromise your principles “just this once” loses its power. The small decision to help someone develop, made consistently over years, compounds into a life of genuine impact. Conversely, the small decision to prioritise marginal professional gain over relational investment, repeated across years, compounds into a life of hollow achievement.

Christensen’s insight ultimately suggests that the question “How will you measure your life?” is not merely philosophical but profoundly practical. It shapes daily decisions about where you invest your time, energy, and integrity. And those daily decisions, accumulated across a lifetime, determine not just your happiness but the legacy you leave: the individuals who became better people because you were present in their lives.

References

1. https://www.ricklindquist.com/notes/how-will-you-measure-your-life

2. https://www.porchlightbooks.com/products/how-will-you-measure-your-life-clayton-m-christensen-9780062102416

3. https://www.library.hbs.edu/working-knowledge/clayton-christensens-how-will-you-measure-your-life

4. https://www.youtube.com/watch?v=qCX6vAvglAI

5. https://chools.in/wp-content/uploads/2021/03/HOW-WILL-YOU-MEASURE-YOUR-LIFE.pdf

6. https://www.deseretbook.com/product/5083635.html

7. https://hbr.org/2010/07/how-will-you-measure-your-life

8. https://www.barnesandnoble.com/w/how-will-you-measure-your-life-clayton-m-christensen/1111558923

“The only metrics that will truly matter to my life are the individuals whom I have been able to help, one by one, to become better people.” - Quote: Clayton Christensen

read more
Term: AI scaffolding


“Scaffolding refers to the structured architecture and instructional techniques built around an AI model to enhance its reasoning, reliability, and capability.” – AI scaffolding

AI scaffolding is the structured architecture and tooling built around a large language model (LLM) to enable it to perform complex, goal-driven tasks with enhanced reasoning, reliability, and capability.1 Rather than relying on a single prompt or query, scaffolding places an LLM within a control loop that includes memory systems, external tools, decision logic, and feedback mechanisms, allowing the model to observe its environment, call APIs or code, update its context, and iterate until goals are achieved.1

In essence, scaffolding bridges the critical gap between the capabilities of base models and production-ready systems. A standalone LLM lacks the architectural support needed to reliably complete multi-step tasks, interface with business systems, or adapt to domain-specific requirements.1 Scaffolding augments the model’s bare capabilities by providing access to tools, domain data, and structured workflows that guide and extend its behaviour.

Core Components of AI Scaffolding

Effective scaffolding operates through several interconnected layers:

  • Planning and reasoning: Agents operate through defined reasoning and evaluation steps. Rather than acting immediately, scaffolding may prompt the model to plan or reflect before taking action, and to self-critique its outputs. Research demonstrates that allowing agents to plan and self-evaluate significantly improves problem-solving accuracy compared to action-only approaches.1
  • Tool integration: The LLM is wrapped in code that interprets its outputs as tool calls. When the model determines it needs external resources – such as a calculator, database query, API call, or web search – the scaffold safely executes that tool and returns results to the model for the next reasoning step.1
  • Memory systems: Scaffolding includes mechanisms for the agent to maintain and update context across multiple interactions, enabling it to build upon previous observations and decisions.1
  • Feedback and control: Robust agents include feedback loops and safeguards such as self-evaluation steps, human-in-the-loop checks, and policy enforcement. In enterprise settings, scaffolding adds logging, testing suites, and guardrails like content filters to ensure outputs remain controlled and auditable.1
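The layered loop described above can be sketched in a few lines of Python. This is a minimal illustration rather than a real framework: `fake_model` stands in for an LLM call, and the `TOOL:`/`FINAL:` protocol, `run_agent`, and the calculator tool are all invented for the example.

```python
def calculator(expression: str) -> str:
    """A tool the scaffold can execute on the model's behalf."""
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def fake_model(context: list[str]) -> str:
    """Stands in for an LLM call: first requests a tool, then answers."""
    if not any(line.startswith("OBSERVATION:") for line in context):
        return "TOOL:calculator:6*7"
    return "FINAL:42"

def run_agent(goal: str, max_steps: int = 5) -> str:
    memory = [f"GOAL: {goal}"]            # memory layer: running context
    for _ in range(max_steps):            # feedback/control: bounded loop
        action = fake_model(memory)       # planning/reasoning step
        if action.startswith("TOOL:"):    # tool integration layer
            _, name, arg = action.split(":", 2)
            result = TOOLS[name](arg)     # scaffold executes the tool
            memory.append(f"OBSERVATION: {result}")
        elif action.startswith("FINAL:"):
            return action.removeprefix("FINAL:")
    return "Step budget exhausted"

print(run_agent("What is 6 * 7?"))
```

The key structural point is that the loop, not the model, owns control flow: the model only emits text, and the scaffold decides whether that text triggers a tool, updates memory, or terminates the run.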

Types of AI Scaffolding Techniques

AI scaffolding encompasses several distinct approaches, which can be combined to enhance model performance:

  • Tool access scaffolding: Granting models access to external tools such as code editors, web browsers, or specialised software significantly expands their problem-solving capabilities. For example, LLMs initially trained on finite datasets with fixed cut-off dates became substantially more capable when granted internet access.2
  • Agent loop scaffolding: This technique automates multi-step task completion by placing AI models in a loop with access to their own observations and actions, enabling them to self-generate each prompt needed to finish complex tasks. Systems like AutoGPT exemplify this approach.2
  • Multi-agent scaffolding: Multiple AI models collaborate on complex problems through dialogue, division of labour, or critique mechanisms. Research shows that extended networks of up to a thousand agents can coordinate to outperform individual models, with capability scaling predictably as networks grow larger.2
  • Procedural scaffolding: This approach builds a structured process in which the model generates outputs, checks them, and revises them iteratively, enforcing process discipline rather than relying on raw prompts alone.3
  • Semantic scaffolding: Using ontological frameworks and domain rules to validate outputs against formal relations, preventing deeper misunderstandings and moving AI closer to auditable, trustworthy reasoning.3
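The generate-check-revise discipline behind procedural scaffolding can be sketched as a bounded loop. The following illustrative Python uses hypothetical stand-in callables; `draft_fn`, `check_fn`, and `revise_fn` are placeholders for model calls and validators, not any real API.

```python
def procedural_scaffold(draft_fn, check_fn, revise_fn, max_rounds=3):
    """Generate a draft, check it, and revise until checks pass."""
    draft = draft_fn()
    for _ in range(max_rounds):
        problems = check_fn(draft)           # validation step
        if not problems:
            return draft                     # passed all checks
        draft = revise_fn(draft, problems)   # targeted revision
    raise RuntimeError("draft failed checks after revision budget")

# Toy example: the check requires the output to carry a citation marker.
draft_fn = lambda: "The model improves accuracy."
check_fn = lambda d: [] if "[1]" in d else ["missing citation"]
revise_fn = lambda d, probs: d + " [1]"

print(procedural_scaffold(draft_fn, check_fn, revise_fn))
```

The scaffold enforces the process (every output must survive the checker) regardless of how capable or unreliable the underlying generator is, which is the sense in which it imposes discipline beyond raw prompting.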

Practical Applications and Enterprise Use

Scaffolding is essential for operationalising LLMs in enterprise environments. Whether an agent is expected to generate structured outputs, interact with APIs, or solve problems through planning and iteration, its effectiveness depends on the scaffold that guides and extends its behaviour.1 In sectors such as customer service, risk analysis, logistics, healthcare, and finance, scaffolding enables AI systems to maintain reliability and auditability in high-stakes contexts.3

A key advantage of scaffolding is that it improves accuracy whilst making AI reasoning more transparent. When a system reaches a conclusion, leaders can trace it back to formal relations in an ontology rather than relying solely on statistical inference, which makes the system trustworthy enough for critical applications.3
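The idea of tracing a conclusion back to formal relations can be illustrated with a toy semantic check. Everything here is invented for the example – the ontology, entity types, and `validate_triple` – and a production system would use a real ontology language with far richer rules; but the pattern of rejecting model outputs that violate the schema is the essence of semantic scaffolding.

```python
ONTOLOGY = {
    # relation: (required subject type, required object type)
    "treats":    ("Drug", "Disease"),
    "interacts": ("Drug", "Drug"),
}
ENTITY_TYPES = {"aspirin": "Drug", "ibuprofen": "Drug", "fever": "Disease"}

def validate_triple(subject: str, relation: str, obj: str) -> bool:
    """Accept a model-produced fact only if it conforms to the ontology."""
    if relation not in ONTOLOGY:
        return False                       # unknown relation: reject
    subj_type, obj_type = ONTOLOGY[relation]
    return (ENTITY_TYPES.get(subject) == subj_type
            and ENTITY_TYPES.get(obj) == obj_type)

print(validate_triple("aspirin", "treats", "fever"))   # well-formed
print(validate_triple("fever", "treats", "aspirin"))   # type mismatch: rejected
```

Because every accepted fact maps to an explicit relation and type constraint, a reviewer can audit exactly why an output passed or failed, rather than appealing to the model's statistics.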

Scaffolding versus Model Scale

An important principle in modern AI development is that scaffolding often matters more than raw model scale. The future of AI – whether in homeland security, finance, healthcare, or other domains – will be defined not by the size of models but by the quality of the architectural frameworks surrounding them.3 Hybrid architectures that embed statistical models within well-designed scaffolded systems deliver superior performance and reliability compared to simply scaling larger models without structural support.

Key Theorist: Stuart Russell and the Alignment Research Tradition

The conceptual foundations of AI scaffolding are deeply rooted in the work of Stuart Russell, a leading figure in artificial intelligence safety and alignment research. Russell, holder of the Smith-Zadeh Chair in Engineering at the University of California, Berkeley, and co-author of the seminal textbook Artificial Intelligence: A Modern Approach, has been instrumental in developing frameworks for ensuring AI systems remain controllable and aligned with human values as they become more capable.

Russell’s contributions to scaffolding theory emerge from his broader research agenda on AI safety and the control problem. As machine learning systems began to demonstrate increasing autonomy, Russell argued that simply building more powerful models without corresponding advances in control architecture would create dangerous misalignment between AI capabilities and human oversight. His work emphasised that the architecture surrounding an AI system – not merely the model itself – determines whether that system can be safely deployed in high-stakes environments.

One influential technique often grouped under scaffolding is iterated amplification, developed primarily by Paul Christiano and colleagues at OpenAI within the alignment research tradition that Russell helped establish. Iterated amplification is a form of scaffolding that uses multi-AI collaborations to solve increasingly complex problems whilst maintaining human oversight at each stage. In this approach, humans decompose complex tasks into simpler subtasks that AI systems solve, then humans review and synthesise these solutions. Over time, humans operate at progressively higher levels of abstraction whilst AI systems assume responsibility for more of the process. This iterative cycle improves model capabilities whilst preserving human auditability and control – a principle directly aligned with scaffolding’s core objective.2

Russell’s broader philosophical stance is that AI safety and capability enhancement are not opposing forces but complementary objectives. Scaffolding embodies this principle: by building structured architectures around models, developers simultaneously enhance capability (through tool access, planning, and feedback loops) and improve safety (through auditability, human-in-the-loop checks, and formal validation against domain rules). Russell’s insistence that AI systems must remain interpretable and auditable has directly influenced how modern scaffolding frameworks incorporate semantic validation, ontological constraints, and transparent reasoning pathways.

Throughout his career, Russell has advocated for what he terms “beneficial AI” – systems designed from inception to be controllable, transparent, and aligned with human values. Scaffolding represents a practical instantiation of this vision. Rather than hoping that larger models will somehow become more trustworthy, Russell’s framework suggests that intentional architectural design – the very essence of scaffolding – is the path to AI systems that are simultaneously more capable and more reliable.

Russell’s influence extends beyond theoretical work. His research group at Berkeley has contributed to developing practical frameworks for AI governance, model evaluation, and safety testing that directly inform how organisations implement scaffolding in production environments. His emphasis on formal methods, constraint satisfaction, and human-AI collaboration has shaped industry standards for building enterprise-grade AI systems.

References

1. https://zbrain.ai/agent-scaffolding/

2. https://blog.bluedot.org/p/what-is-ai-scaffolding

3. https://www.cio.com/article/4076515/beyond-ai-prompts-why-scaffolding-matters-more-than-scale.html

4. https://www.godofprompt.ai/blog/what-is-prompt-scaffolding

5. https://kpcrossacademy.ua.edu/scaffolding-ai-as-a-learning-collaborator-integrating-artificial-intelligence-in-college-classes/

6. https://www.tandfonline.com/doi/full/10.1080/10494820.2025.2470319

Quote: Clayton Christensen


“I had thought the destination was what was important, but it turned out it was the journey.” – Clayton Christensen – Author

Clayton M. Christensen, the renowned Harvard Business School professor and author, encapsulated a profound shift in perspective with this reflection from his seminal work How Will You Measure Your Life? Published in 2012 and expanded from his 2010 Harvard Business Review article of the same name, the book draws on his business theories to offer timeless guidance on personal fulfilment, urging readers to prioritise meaningful processes over mere endpoints in life and career.1,2

Who Was Clayton Christensen?

Born in 1952 in Salt Lake City, Utah, Christensen rose from humble beginnings to become one of the most influential thinkers in modern business. A devout member of The Church of Jesus Christ of Latter-day Saints, he integrated his faith with rigorous scholarship. He earned a BA from Brigham Young University, an MPhil from Oxford as a Rhodes Scholar, and both an MBA and DBA from Harvard Business School.

Christensen’s breakthrough came with The Innovator’s Dilemma (1997), which introduced disruptive innovation – the theory that established companies often fail by focusing on high-end customers, allowing nimble entrants to dominate lower markets and eventually move upmarket.3 This framework reshaped industries like technology and healthcare. He authored over a dozen books, consulted for global firms, and taught at Harvard for decades until his death in January 2020 from complications of leukaemia.

Despite professional acclaim, Christensen’s later years emphasised personal integrity. He famously resisted ‘just this once’ compromises, a principle he credited for his life’s direction: ‘Resisting the temptation whose logic was “In this extenuating circumstance, just this once, it’s OK” has proven to be one of the most important decisions of my life.’3,6

Context of the Quote in How Will You Measure Your Life?

The book stems from Christensen’s 2010 Harvard MBA commencement address, expanded into chapters blending business strategy with life lessons. He warns against common traps: chasing rewards that scream loudest, neglecting family for career, or measuring success by wealth alone. Instead, he advocates allocating resources – time, energy, talent – towards aspirations.4,5,6

This quote emerges in discussions of motivation and growth. Christensen reflects that true satisfaction arises not from arriving at goals, but from the daily pursuit of meaningful work, learning, and relationships. He writes: ‘In order to really find happiness, you need to continue looking for opportunities that you believe are meaningful, in which you will be able to learn new things, to succeed, and be given more and more responsibility to shoulder.’3,4 The journey, rich with motivators like progress and teamwork, forges character and joy.

Leading Theorists on Life Priorities and the Journey Metaphor

Christensen’s insight echoes ancient and modern thinkers who elevate process over outcome.

  • Aristotle (384-322 BC): In Nicomachean Ethics, he defined eudaimonia (flourishing) as a life of virtuous activity, not transient pleasures. Habits formed in daily practice, not endpoints, cultivate excellence.
  • Lao Tzu (6th century BC): The Tao Te Ching states, ‘A journey of a thousand miles begins with a single step.’ Taoist philosophy prizes harmonious flow (wu wei) over forced achievement.
  • Viktor Frankl (1905-1997): Holocaust survivor and Man’s Search for Meaning author argued meaning emerges through attitude amid suffering. Logotherapy posits purpose in every moment’s choices, prioritising inner journey.
  • Mihaly Csikszentmihalyi (1934-2021): Pioneer of flow theory, set out in Flow: The Psychology of Optimal Experience (1990). Peak experiences occur in immersive tasks matching skill and challenge – the essence of valuing the journey.
  • Daniel Kahneman (1934-2024): Nobel-winning psychologist distinguished ‘experiencing self’ (moment-to-moment) from ‘remembering self’ (end results). In Thinking, Fast and Slow, he showed people often overvalue peaks and endpoints, neglecting the journey’s sum.

These theorists converge on Christensen’s message: life’s value lies in intentional, principle-driven paths. As he noted, ‘The only metrics that will truly matter to my life are the individuals whom I have been able to help, one by one, to become better people.’3,5

Enduring Relevance

In an era of hustle culture and metric-driven success, Christensen’s words challenge us to recalibrate. His life exemplified this: battling illness while mentoring students, he measured legacy by impact, not accolades. This quote invites reflection – are we journeying with purpose, or merely racing to destinations that may disappoint?

References

1. https://quotefancy.com/quote/1849082/Clayton-M-Christensen-I-had-thought-the-destination-was-what-was-important-but-it-turned

2. https://www.goodreads.com/quotes/6847238-i-had-thought-the-destination-was-what-was-important-but

3. https://www.toolshero.com/toolsheroes/clayton-christensen/

4. https://www.club255.com/p/book-byte-98-how-will-you-measure

5. https://rochemamabolo.wordpress.com/2017/11/26/book-review-how-will-you-measure-your-life-by-clayton-christensen/

6. https://www.goodreads.com/author/quotes/1792.Clayton_M_Christensen

7. https://www.claudioperfetti.com/all/how-will-you-measure-your-life/

8. https://quirky-quests.com/ls-clayton-christensen/

Quote: Brian Moynihan – Bank of America CEO


“You can see upwards of $6 trillion in deposits flow off the liabilities of a banking system… into the stablecoin environment… they’re either not going to be able to loan or they’re going to have to get wholesale funding and that wholesale funding will come at a cost that will increase the cost of borrowing.” – Brian Moynihan – Bank of America CEO

In the rapidly evolving landscape of digital finance, Brian Moynihan, CEO of Bank of America, issued a stark warning during the bank’s Q4 2025 earnings call on 15 January 2026. He highlighted the potential for up to $6 trillion in deposits – roughly 30% to 35% of total US commercial bank deposits – to shift from traditional banking liabilities into the stablecoin ecosystem if regulators permit stablecoin issuers to pay interest.1,2

Context of the Quote

Moynihan’s comments arose amid intense legislative debates over stablecoin regulation in the United States. With US commercial bank deposits standing at $18.61 trillion in January 2026 and the stablecoin market capitalisation at just $315 billion, the scale of this projected outflow underscores a profound threat to the fractional reserve banking model.1 Banks rely on low-cost customer deposits to fund loans to households and businesses, especially small and mid-sized enterprises. A mass migration to interest-bearing stablecoins would cripple lending capacity or force reliance on pricier wholesale funding, thereby elevating borrowing costs across the economy.1,2

This concern echoes broader industry pushback. Executives from JPMorgan and Bank of America have criticised proposals allowing stablecoin yields or rewards, viewing them as direct competition. A US Senate bill aimed at formalising cryptocurrency regulation has stalled amid lobbying from the American Bankers Association, which seeks to prohibit interest on stablecoins. Meanwhile, the GENIUS Act, signed by President Donald Trump in July 2025, marked the first explicit crypto legislation, spurring financial institutions to enter the space while intensifying turf wars as crypto firms pursue banking charters.3

Who is Brian Moynihan?

Brian Moynihan has led Bank of America since January 2010, steering the institution through post-financial crisis recovery, digital transformation, and now the crypto challenge. A Brown University graduate who earned his law degree at the University of Notre Dame, with a prior stint at FleetBoston Financial, Moynihan expanded BofA’s wealth management and consumer banking arms, growing assets to over $3 trillion. His tenure has emphasised regulatory compliance and innovation, yet he remains vocal on threats like stablecoins that could disrupt deposit stability.1,2

Backstory on Leading Theorists in Stablecoins and Banking Disruption

The stablecoin phenomenon builds on foundational ideas from monetary theorists and crypto pioneers who envisioned programmable money challenging centralised banking.

  • Satoshi Nakamoto: The pseudonymous creator of Bitcoin in 2008 laid the groundwork by introducing decentralised digital currency, free from central bank control. Bitcoin’s volatility spurred stablecoins as a bridge to everyday use.1
  • Vitalik Buterin: Ethereum’s co-founder (2015) enabled smart contracts, powering algorithmic stablecoins like DAI. Buterin’s vision of decentralised finance (DeFi) posits stablecoins as superior stores of value with yields from on-chain protocols, bypassing banks.3
  • Milton Friedman: The Nobel laureate’s 1969 proposal for a computer-based money system with fixed supply prefigured stablecoins. Friedman argued such systems could curb inflation better than fiat, influencing modern dollar-pegged tokens like USDT and USDC.1
  • Hayek and Free Banking Theorists: Friedrich Hayek’s Denationalisation of Money (1976) advocated competing private currencies, a concept realised in stablecoins issued by firms like Tether and Circle. This challenges the state’s monopoly on money issuance.3
  • Crypto Economists like Jeremy Allaire (Circle CEO): Allaire champions stablecoins as ‘internet-native money’ for payments and remittances, arguing they offer efficiency banks cannot match. His firm issues USDC, now integral to global transfers.1,3

These thinkers collectively argue that stablecoins democratise finance, offering transparency, yield, and borderless access. Yet banking leaders like Moynihan counter that without safeguards, this shift risks systemic instability by eroding the deposit base that fuels economic growth.2

Implications for Finance

Moynihan’s forecast spotlights a pivotal regulatory crossroads. Permitting interest on stablecoins could accelerate adoption, potentially reshaping payments, lending, and funding markets. Banks lobby for restrictions to preserve their model, while crypto advocates push for innovation. As frameworks like the GENIUS Act evolve, the battle over $6 trillion in deposits will define the interplay between traditional finance and blockchain.1,3

References

1. https://www.binance.com/sv/square/post/35227018044185

2. https://www.idnfinancials.com/news/60480/bofa-ceo-stablecoins-pay-interest-us6tn-in-bank-deposits-at-risk

3. https://www.emarketer.com/content/stablecoin-rules-jpmorgan-bofa-interest

Term: Right to Win


“The ‘Right to Win’ (RTW) is a company’s unique, sustainable ability to succeed in a specific market by leveraging superior capabilities, products, and a differentiated ‘way to play’ that outperform competitors, giving them a better-than-even chance of creating value and growth.” – Right to Win

A company’s right to win is the recognition that it is better prepared than its competitors to attract and keep the customers it cares about, grounded in a sustainable competitive advantage that extends beyond short-term market positioning.1 This concept represents more than simply having superior resources; it is the ability to engage in any competitive market with a better-than-even chance of success consistently over time.3 The right to win emerges when a company aligns three interlocking strategic elements: a differentiated way to play, a robust capabilities system, and product and service fit that work together coherently.1

The Three Pillars of Right to Win

The foundation of a right to win rests on understanding what your company can do better than anyone else. Rather than pursuing growth indiscriminately across multiple areas, successful organisations focus on identifying three to six differentiating capabilities – the interconnected people, knowledge, systems, tools and processes that create distinctive value to customers.1,5 These capabilities differ fundamentally from assets; whilst assets such as facilities, machinery, and supplier connections can be replicated by competitors, capabilities cannot.1 The critical question becomes: “What do we do well to deliver value?”1

A well-developed way to play represents a chosen position in a market, grounded in understanding your capabilities and where the market is heading.1 This positioning must fulfil four essential criteria: there must be a market that values your approach; it must be differentiated from competitors’ ways to play; it must remain relevant given expected industry changes; and it must be supported by your capabilities system, making it feasible.1 Finally, the product and service fit ensures that offerings are directly aligned with the capabilities system, delivering superior returns to shareholders.1

Coherence acts as the binding agent across these three elements.1 Achieving alignment with one or even two elements proves insufficient; only when all three synchronise with one another and with the right market conditions can a company truly claim a sustainable right to win.1

Building and Sustaining Competitive Advantage

The right to win is not inherited; it is earned through strategic alignment and disciplined execution.2 This requires an in-depth understanding of the competitive landscape, customer expectations, and team capabilities.2 A strategy that leverages unique assets or insights creates a competitive moat, making it challenging for competitors to catch up, though execution remains where many organisations falter.2

Innovation and adaptability prove essential to sustaining this advantage.2 Organisations that continuously evolve, anticipate market shifts, and adapt their goods and services accordingly are more likely to maintain their competitive edge.2 This does not mean chasing every new trend but rather maintaining a keen sense of which innovations align with core competencies and long-term vision.2 Building a culture of excellence – attracting and nurturing top talent, fostering continuous improvement, and encouraging innovation – represents an often-overlooked yet significant asset in securing the right to win.2

Strategic Applications and Growth Pathways

Right-to-win strategies fall into four categories: customer-driven, capability-driven, value-chain-based, and those building on disruptive business models or technologies.4 The most utilised approach involves fulfilling unmet needs for existing customers that the core business does not currently address.4 However, the strategy delivering the biggest revenue gains involves leveraging core business capabilities-such as patents, technological know-how, or brand equity-to expand into adjacent and breakout businesses.4 Companies successfully utilising two or more right-to-win strategies to move into adjacent markets delivered 12 percentage points higher excess total shareholder return versus their subindustry peers.4

Assessing Your Right to Win

Organisations can evaluate their right to win through systematic analysis. This involves identifying the two most relevant competitors, determining three to six differentiating capabilities required for success, listing key assets and table-stakes activities, and rating performance across these dimensions.5 Differentiating capabilities should be specific and interconnected rather than merely listing functions or organisational units.5 For example, one of Apple’s differentiating capabilities is “innovation around customer interfaces to create better communications and entertainment experiences.”5 Assets, whilst less sustainable than capabilities, represent criteria important to the market and warrant inclusion in competitive assessment.5

Related Theorist: C.K. Prahalad and the Core Competence Framework

The concept of right to win draws significantly from the work of C.K. Prahalad (1941-2010), an influential Indian-American business theorist and consultant who fundamentally shaped modern strategic thinking through his development of the core competence framework. Prahalad’s seminal 1990 Harvard Business Review article, co-authored with Gary Hamel, “The Core Competence of the Corporation,” introduced the revolutionary idea that organisations should identify and leverage their unique, hard-to-imitate capabilities rather than pursuing diversification across unrelated business areas.1

Born in Coimbatore, India, Prahalad earned his undergraduate degree in physics before pursuing business education. He spent much of his career at the University of Michigan’s Ross School of Business, where he conducted extensive research on strategic management and organisational capability. His work challenged the prevailing strategic orthodoxy of the 1980s, which emphasised portfolio management and strategic business units. Instead, Prahalad argued that companies should view themselves as portfolios of core competencies – the collective learning and coordination of diverse production skills and technologies – rather than collections of discrete business units.

Prahalad’s framework directly underpins the right to win concept. He demonstrated that sustainable competitive advantage emerges not from owning assets but from developing distinctive capabilities that competitors cannot easily replicate. His research showed that companies like Sony, Honda, and 3M succeeded not because they possessed superior resources but because they had cultivated unique organisational capabilities in areas such as miniaturisation, engine design, or innovation processes. These capabilities enabled them to enter adjacent markets and create new products that competitors struggled to match.

Beyond core competence theory, Prahalad later developed the concept of the “bottom of the pyramid,” exploring how companies could create right-to-win strategies by serving low-income consumers in emerging markets through innovation and capability leverage. His work emphasised that strategic advantage comes from understanding what your organisation does distinctively well and then systematically building, protecting, and extending those capabilities across markets and customer segments.

Prahalad’s intellectual legacy remains central to contemporary strategic management. His insistence that capabilities – not assets – form the foundation of competitive advantage directly informs how modern organisations approach the right to win. His framework provides the theoretical scaffolding that explains why companies with seemingly fewer resources can outperform better-capitalised competitors: they possess superior, integrated capabilities that create distinctive value. This insight transformed strategic planning from a financial exercise into a capabilities-centred discipline, making Prahalad’s work indispensable to understanding the right to win in contemporary business strategy.

References

1. https://www.pwc.com/mt/en/publications/other/does-your-strategy-give-you-the-right-to-win.html

2. https://multifamilycollective.com/2024/02/strategy-how-do-we-define-our-right-to-win/

3. https://intrico.io/interview-best-practices/right-to-win

4. https://www.mckinsey.com/capabilities/growth-marketing-and-sales/our-insights/next-in-growth/adjacent-business-growth-making-the-most-of-your-right-to-win

5. https://www.strategyand.pwc.com/gx/en/unique-solutions/capabilities-driven-strategy/right-to-win-exercise.html

6. https://steemit.com/quality/@hefziba/the-right-to-play-and-the-right-to-win-and-how-to-design-quality-into-a-product

Quote: Clayton Christensen


“What’s important is to get out there and try stuff until you learn where your talents, interests, and priorities begin to pay off. When you find out what really works for you, then it’s time to flip from an emergent strategy to a deliberate one.” – Clayton Christensen – Author

This profound advice from Clayton Christensen encapsulates a timeless principle for personal and professional growth: the value of experimentation followed by focused commitment. Drawn from his bestselling book How Will You Measure Your Life?, the quote urges individuals to embrace trial and error in discovering their true strengths before committing to a structured path. Christensen, a renowned Harvard Business School professor, applies business strategy concepts to life’s big questions, advocating for an initial phase of exploration – termed an ‘emergent strategy’ – before shifting to a ‘deliberate strategy’ once clarity emerges.1,7

Who Was Clayton Christensen?

Clayton Magleby Christensen (1952-2020) was an American academic, author, and business consultant whose ideas reshaped management theory. Born in Salt Lake City, Utah, he earned a bachelor’s degree in economics from Brigham Young University, an MPhil from Oxford as a Rhodes Scholar, and an MBA and DBA from Harvard Business School. Christensen joined the Harvard faculty in 1992, where he taught for nearly three decades, influencing generations of leaders.1,5

His seminal work, The Innovator’s Dilemma (1997), introduced the theory of disruptive innovation, explaining how established companies fail by focusing on sustaining innovations for current customers while overlooking simpler, cheaper alternatives that disrupt markets from below. This concept has been applied to industries from technology to healthcare, predicting successes like Netflix over Blockbuster. Christensen authored over a dozen books, including The Innovator’s Solution and How Will You Measure Your Life? (2012, co-authored with James Allworth and Karen Dillon), which blends business insights with personal reflections drawn from his Mormon faith, family life, and battle with leukaemia.5,6,7

In How Will You Measure Your Life?, Christensen draws parallels between corporate pitfalls and personal missteps, warning against prioritising short-term gains over long-term fulfilment. The quoted passage appears in a chapter on career strategy, using emergent and deliberate strategies as metaphors for navigating life’s uncertainties.7

Context of the Quote: Emergent vs Deliberate Strategy

Christensen distinguishes two strategic approaches, rooted in his research on successful companies. A deliberate strategy stems from conscious planning, data analysis, and long-term goals – ideal for stable, mature organisations like Procter & Gamble, which refines products based on market data.1 It requires alignment across teams, where every member understands their role in the bigger picture. However, it risks blindness to peripheral opportunities, as rigid focus on the original plan can miss disruptions.1,2

Conversely, an emergent strategy arises organically from bottom-up initiatives, experiments, and adaptations – common in startups like early Walmart, which pivoted from small-town stores after unplanned successes. Christensen notes that over 90% of thriving new businesses succeed not through initial plans but by iterating on emergent learnings, retaining resources to pivot when needed.1,5,6

The quote applies this duality to personal development: start with emergent exploration – trying diverse roles, hobbies, and pursuits – to uncover what aligns talents, interests, and priorities. Once viable paths emerge, switch to deliberate focus for sustained progress. This mirrors Honda’s accidental US motorcycle success, where employees’ side experiments trumped the formal plan.6

Leading Theorists on Emergent and Deliberate Strategy

Christensen built on foundational work by Henry Mintzberg, a Canadian management scholar. In his 1987 paper ‘Crafting Strategy’ and book Strategy Safari, Mintzberg challenged top-down planning, arguing strategies often emerge from patterns in daily actions rather than deliberate designs. He identified strategy as a ‘continuous, diverse, and unruly process’, blending deliberate intent with emergent flexibility – ideas Christensen explicitly referenced.2

  • Henry Mintzberg: Pioneered the emergent strategy concept in the 1970s-80s, critiquing rigid corporate planning. His ’10 Schools of Strategy’ framework contrasts design (deliberate) with learning (emergent) schools.2
  • Michael Porter: Christensen’s contemporary at Harvard, Porter championed deliberate competitive strategy via frameworks like the Five Forces and value chain (1980s). While Porter focused on positioning for advantage, Christensen highlighted how such strategies falter against disruption.1
  • Robert Burgelman: Stanford professor whose research on ‘intraorganisational ecology’ influenced Christensen, showing how autonomous units drive emergent strategies within firms like Intel.5

These theorists collectively underscore strategy’s dual nature: deliberate for execution, emergent for innovation. Christensen uniquely extended this to personal life, making abstract theory accessible for leadership, coaching, and self-management.3,4

Christensen’s insights remain vital for leaders balancing adaptability with purpose, reminding us that true success – in business or life – demands knowing when to explore and when to commit.

References

1. https://online.hbs.edu/blog/post/emergent-vs-deliberate-strategy

2. https://onlydeadfish.co.uk/2014/08/28/emergent-and-deliberate-strategy/

3. https://blog.passle.net/post/102fytx/clayton-christensen-how-to-enjoy-business-and-life-more

4. https://www.azquotes.com/quote/1410310

5. https://www.goodreads.com/work/quotes/138639-the-innovator-s-solution-creating-and-sustaining-successful-growth

6. https://www.businessinsider.com/clay-christensen-theories-in-how-will-you-measure-your-life-2012-7

7. https://www.goodreads.com/author/quotes/1792.Clayton_M_Christensen?page=17

8. https://www.azquotes.com/author/2851-Clayton_Christensen/tag/strategy

9. https://www.mstone.dev/values-how-will-you-measure-your-life/

“What’s important is to get out there and try stuff until you learn where your talents, interests, and priorities begin to pay off. When you find out what really works for you, then it’s time to flip from an emergent strategy to a deliberate one.” - Quote: Clayton Christensen

read more
Quote: Jamie Dimon – JP Morgan Chase CEO

Quote: Jamie Dimon – JP Morgan Chase CEO

“I think the harder thing to measure has always been tech projects. That’s been true my whole life. It’s also been true my whole life, the tech is what changes everything, like everything.” – Jamie Dimon – JP Morgan Chase CEO

Jamie Dimon’s candid observation captures a fundamental tension at the heart of modern business strategy: the profound impact of technology juxtaposed against the persistent challenge of measuring its value. Delivered during JPMorgan Chase’s 2026 Investor Day on 24 February, this remark came amid revelations of the bank’s unprecedented $19.8 billion technology budget – a 10% increase from 2025, with significant allocations to artificial intelligence (AI) projects.1,2,4 As CEO of the world’s largest bank by market capitalisation, Dimon’s perspective is shaped by decades of navigating technological shifts, from the rise of digital banking to the current AI boom.

Jamie Dimon’s Career and Leadership at JPMorgan Chase

Born in 1956 in New York City to Greek immigrant parents, Jamie Dimon began his career in finance at American Express in the 1980s, rising rapidly under the mentorship of Sandy Weill. He co-led the 1998 merger that created Citigroup but was ousted later that year after falling out with Weill. In 2000 he took the helm of Bank One, transforming it from near-collapse into a powerhouse and earning a reputation as a crisis manager. Following JPMorgan Chase’s acquisition of Bank One in 2004, he became president and chief operating officer, assuming the CEO role at the end of 2005 – a post he has now held for roughly two decades.3

Under Dimon’s stewardship, JPMorgan has become a technology leader in banking. The firm employs over 300,000 people, with tens of thousands in tech roles, and invests billions annually in innovation. Dimon has long championed tech as a competitive moat, famously urging investors to ‘trust him’ on spending despite vague ROI metrics. In 2026, this commitment manifests in a tech budget swelled by $2 billion, driven by AI for customer service, personalised insights, and developer tools, amid rising hardware costs from AI chip demand.1,5 Dimon predicts JPMorgan will be a ‘winner’ in the AI race, leveraging its data assets and No. 1 ranking in AI maturity among banks.1,3

Context of the Quote: JPMorgan’s 2026 Strategic Framework

The quote emerged in a Q&A at the 24 February 2026 event, responding to analyst pressure on tech ROI. CFO Jeremy Barnum highlighted technology as a major expense driver, up $9 billion overall, with $1.2 billion in investments including AI. Dimon acknowledged time savings from tech as ‘too vague’ to measure precisely, echoing lifelong observations from mainframes to cloud computing.1,2 This aligns with broader warnings: AI will revolutionise operations but displace jobs, necessitating societal preparation like retraining and phased adoption to avoid shocks, such as mass unemployment from autonomous trucks.4

JPMorgan is aggressively deploying AI – its large language model serves 150,000 users weekly – while planning ‘huge redeployment’ for affected staff. Executives like Marianne Lake stress paranoia in competition, quoting ‘Only the paranoid survive’. Rivals like Bank of America ($14 billion tech spend) underscore the sector-wide arms race.1

Leading Theorists on Technology Measurement and Impact

Dimon’s views resonate with seminal thinkers on technology’s intangible returns. Peter Drucker, the father of modern management, argued that knowledge work resists the output metrics designed for manual tasks, prefiguring tech’s measurement woes. He coined the term ‘knowledge worker’, emphasising innovation’s long-term value over short-term quantification.

Erik Brynjolfsson and Andrew McAfee, MIT economists, explore this in The Second Machine Age (2014), detailing how digital technologies yield ‘non-rival’ benefits – exponential productivity without proportional costs – hard to capture in GDP or ROI. Their ‘bounty vs. spread’ framework warns of uneven gains, mirroring Dimon’s job displacement concerns.4

Clayton Christensen’s The Innovator’s Dilemma (1997) explains why incumbents struggle with disruptive tech: metrics favour sustaining innovations, blinding firms to transformative ones. JPMorgan’s shift from infrastructure modernisation to AI-ready data exemplifies overcoming this.5

In AI specifically, Nick Bostrom’s Superintelligence (2014) and Stuart Russell’s Human Compatible (2019) address measurement beyond finance – aligning superintelligent systems with human values amid unpredictable impacts. Dimon’s pragmatic focus on phased integration echoes calls for cautious deployment.4

These theorists underscore Dimon’s point: technology’s true worth lies in reshaping ‘everything’, demanding faith in leadership over precise yardsticks. JPMorgan’s strategy embodies this, positioning the bank at the vanguard of finance’s technological frontier.

References

1. https://www.businessinsider.com/jpmorgan-tech-budget-ai-20-billion-jamie-dimon-2026-2

2. https://www.aol.com/articles/jpmorgan-spend-almost-20-billion-000403027.html

3. https://www.benzinga.com/markets/large-cap/26/02/50808191/jamie-dimon-predicts-jpmorgan-will-be-a-winner-in-ai-race-boosts-2026-tech-spend-to-nearly-20-billion

4. https://fortune.com/2026/02/25/jamie-dimon-society-prepare-ai-job-displacement/

5. https://finviz.com/news/321869/how-to-play-jpm-stock-as-tech-spend-ramps-in-2026-amid-ai-uncertainty

6. https://fintechmagazine.com/news/inside-jpmorgans-2026-stock-market-hopes-and-new-london-hq

"I think the harder thing to measure has always been tech projects. That's been true my whole life. It's also been true my whole life, the tech is what changes everything, like everything." - Quote: Jamie Dimon - JP Morgan Chase CEO

read more
Term: World model

Term: World model

“A world model is defined as a learned neural representation that simulates the dynamics of an environment, enabling an AI agent to predict future states and reason about the consequences of its actions.” – World model

A **world model** is an internal representation of the environment that an AI system creates to simulate the external world within itself. This learned neural representation enables an AI agent to predict future states, simulate the consequences of different actions before executing them in the real world, and reason about causal relationships, much like the human brain does when planning activities.1,3,6

At its core, a world model comprises key components:

  • Transition model: Predicts how the environment’s state changes based on the agent’s actions, such as a robot displacing an object by moving its hand.1
  • Observation model: Determines what the agent observes in each state, incorporating data from sensors, cameras, and other inputs.1
  • Reward model: In reinforcement learning contexts, forecasts rewards or penalties from actions in specific states.1

Unlike traditional machine learning, which maps inputs directly to outputs, world models foster a general understanding of environmental dynamics, enhancing performance in novel situations.1,4
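
The three components listed above can be rendered as a minimal sketch – a toy, not any production system, with random-walk dynamics and every name invented purely for illustration:

```python
import random

class WorldModel:
    """Toy world model showing the three-part structure: transition,
    observation, and reward models, plus planning by internal simulation."""

    def transition(self, state, action):
        # Transition model: predicts the next state from state and action
        # (here a noisy one-dimensional random walk).
        return state + action + random.gauss(0.0, 0.1)

    def observe(self, state):
        # Observation model: what the agent's sensors would report.
        return state + random.gauss(0.0, 0.05)

    def reward(self, state, action):
        # Reward model: penalise distance from a goal state of 0.
        return -abs(state + action)

    def plan(self, state, candidate_actions, horizon=3):
        # 'Imagine' each action repeated over the horizon and pick the
        # best one -- no interaction with the real environment needed.
        def rollout(s, a):
            total = 0.0
            for _ in range(horizon):
                total += self.reward(s, a)
                s = self.transition(s, a)
            return total
        return max(candidate_actions, key=lambda a: rollout(state, a))

model = WorldModel()
best = model.plan(state=1.0, candidate_actions=[-0.5, 0.0, 0.5])
print(best)  # → -0.5 (the action that moves the state toward the goal)
```

The key property is in `plan`: consequences are simulated internally before any action is taken externally, which is precisely what distinguishes a world model from a direct input-to-output mapping.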

Key Capabilities and Advantages

World models empower AI with:

  • Causality understanding: Grasping why events occur, beyond mere statistical correlations seen in large language models (LLMs) like GPT.1,2
  • Planning and reasoning: Simulating scenarios internally to select optimal actions, akin to chain-of-thought reasoning.1,3
  • Efficient learning: Requiring fewer examples, similar to a child grasping gravity after minimal observations.1
  • Transfer learning and generalisation: Applying knowledge across domains, such as adapting object manipulation skills.1
  • Intuitive physics: Comprehending basic physical principles, essential for real-world interaction.1,4

Trained on diverse data like videos, photos, audio, and text, world models provide richer grounding in reality than LLMs, which focus on text patterns.2,4,6

Role in Achieving Artificial General Intelligence (AGI)

Prominent figures like Yann LeCun (Meta), Demis Hassabis (Google DeepMind), and Yoshua Bengio (Mila) view world models as crucial for AGI, enabling safe, scientific, and intelligent systems that plan ahead and simulate outcomes.3 Recent advancements, such as DeepMind’s Genie 3 (August 2025), generate diverse 3D environments from text prompts, simulating realistic physics for AI training.1 Runway’s GWM-1 further advances general-purpose simulation for robotics and discovery.5

Best Related Strategy Theorist: Yann LeCun

**Yann LeCun**, Chief AI Scientist at Meta and a pioneer of convolutional neural networks (CNNs), is the foremost theorist championing world models as foundational for intelligent AI. LeCun describes them as internal predictive models that simulate real-world dynamics, incorporating modules for perception, prediction, cost/reward evaluation, and planning. This allows AI to ‘imagine’ action consequences, vital for robotics, autonomous vehicles, and AGI.2,3

Born in 1960 in France, LeCun earned his PhD in 1987 from the Université Pierre et Marie Curie in Paris, supervised by Maurice Milgram. He popularised CNNs in the 1980s-1990s for handwriting recognition, helping to found the field of deep learning. Joining New York University as a professor in 2003, he co-directed the NYU Center for Data Science. In 2013, he became Meta’s first AI head, driving open-source initiatives like PyTorch.

LeCun’s advocacy for world models stems from his critique of LLMs’ limitations in causal reasoning and physical simulation. He argues they enable ‘objective-driven AI’ with energy-based models for planning, positioning world models as the path beyond pattern-matching to human-like intelligence. A Turing Award winner (2018) with Bengio and Hinton, LeCun’s vision influences labs worldwide, emphasising world models for safe, efficient real-world AI.2,3

References

1. https://deepfa.ir/en/blog/world-model-ai-agi-future

2. https://www.youtube.com/watch?v=qulPOUiz-08

3. https://www.quantamagazine.org/world-models-an-old-idea-in-ai-mount-a-comeback-20250902/

4. https://www.turingpost.com/p/topic-35-what-are-world-models

5. https://runwayml.com/research/introducing-runway-gwm-1

6. https://techcrunch.com/2024/12/14/what-are-ai-world-models-and-why-do-they-matter/

"A world model is defined as a learned neural representation that simulates the dynamics of an environment, enabling an AI agent to predict future states and reason about the consequences of its actions." - Term: World model

read more
Term: AI Data Centre

Term: AI Data Centre

“An AI Data Center is a highly specialized, power-dense physical facility designed specifically to train, deploy, and run artificial intelligence (AI) models, machine learning (ML) algorithms, and generative AI applications.” – AI Data Centre

This specialised facility diverges significantly from traditional data centres, which handle mixed enterprise workloads, by prioritising accelerated compute, ultra-high-bandwidth networking, and advanced power and cooling systems to manage dense GPU clusters and continuous data pipelines for AI tasks like model training, fine-tuning, and inference.1,2,4

Central to its operation are high-performance computing resources such as Graphics Processing Units (GPUs) and Tensor Processing Units (TPUs). GPUs excel in parallel processing, enabling rapid handling of billions of data points essential for AI model training, while TPUs offer tailored efficiency for AI-specific tasks, reducing energy consumption.2,3,5

High-speed networking is critical, employing technologies like InfiniBand, 400 Gbps Ethernet, and optical interconnects to facilitate seamless data movement across thousands of servers, preventing bottlenecks in distributed AI workloads.2,4

Robust storage systems – including distributed file systems and object storage – ensure swift access to vast datasets, model weights, and real-time inference data, with scalability to accommodate ever-growing AI requirements.1,2,3

Addressing the immense power density, advanced cooling systems are vital, often accounting for 35-40% of energy use, incorporating liquid cooling and thermal zoning to maintain efficiency and low Power Usage Effectiveness (PUE) for sustainability.2,4
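
PUE is simply total facility power divided by IT equipment power, with 1.0 as the theoretical ideal. A quick illustrative calculation – the figures are hypothetical, assuming cooling and other overhead account for roughly 38% of total facility energy:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment
    power. 1.0 is the theoretical ideal (every watt reaches the IT load)."""
    return total_facility_kw / it_equipment_kw

# Hypothetical AI hall with a 10 MW GPU load; if cooling and other
# overhead make up ~38% of total energy, total = IT load / (1 - 0.38).
it_load = 10_000.0                    # kW of IT (GPU server) load
total = it_load / (1 - 0.38)          # implied total facility draw, kW
print(round(pue(total, it_load), 2))  # → 1.61
```

This shows why cooling efficiency dominates the sustainability story: shaving overhead from 38% to 20% of total energy would pull the same facility’s PUE down to 1.25.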

Additional features include data centre automation, network security, and energy-efficient designs, yielding benefits like enhanced performance, scalability, cost optimisation, and support for innovation in fields such as big data analytics, natural language processing, and computer vision.3,5

Key Theorist: Jensen Huang and the GPU Revolution

The foremost strategist linked to the evolution of AI data centres is Jensen Huang, co-founder, president, and CEO of NVIDIA Corporation. Huang’s vision has positioned NVIDIA’s GPUs as the cornerstone of modern AI infrastructure, directly shaping the architecture of these power-dense facilities.2

Born in 1963 in Taiwan, Huang immigrated to the United States as a child. He earned a bachelor’s degree in electrical engineering from Oregon State University and a master’s from Stanford University. In 1993, at age 30, he co-founded NVIDIA with Chris Malachowsky and Curtis Priem, initially targeting 3D graphics for gaming and PCs. Huang recognised the parallel processing power of GPUs, pivoting NVIDIA towards general-purpose computing on GPUs (CUDA platform, launched 2006), which unlocked their potential for scientific simulations, cryptography, and eventually AI.2

Huang’s prescient relationship to AI data centres stems from his early advocacy for GPU-accelerated computing in machine learning. By 2012, Alex Krizhevsky’s use of NVIDIA GPUs to win the ImageNet competition catalysed the deep learning boom, proving GPUs’ superiority over CPUs for neural networks. Under Huang’s leadership, NVIDIA developed AI-specific hardware like A100 and H100 GPUs, Blackwell architecture, and full-stack solutions including InfiniBand networking via Mellanox (acquired 2020). These innovations address AI data centre challenges: massive parallelism for training trillion-parameter models, high-bandwidth interconnects for multi-node scaling, and power-efficient designs for dense racks consuming up to 100kW each.2,4

Huang’s biography reflects relentless innovation; he famously wears a black leather jacket onstage, a signature of his contrarian style. NVIDIA’s market capitalisation surged from roughly $10 billion in 2015 to over $3 trillion by 2024, propelled by AI demand. His strategic foresight – declaring in 2017 that ‘the era of AI has begun’ – anticipated the hyperscale AI data centre boom, making NVIDIA indispensable to leaders like Microsoft, Google, and Meta. Huang’s influence extends to sustainability, pushing for efficient cooling and low-PUE designs amid AI’s energy demands.4

Today, virtually every major AI data centre relies on NVIDIA technology, underscoring Huang’s role as the architect of the AI infrastructure revolution.

References

1. https://www.aflhyperscale.com/articles/ai-data-center-infrastructure-essentials/

2. https://www.rcrwireless.com/20250407/fundamentals/ai-optimized-data-center

3. https://www.racksolutions.com/news/blog/what-is-an-ai-data-center/

4. https://www.f5.com/glossary/ai-data-center

5. https://www.lenovo.com/us/en/glossary/what-is-ai-data-center/

6. https://www.ibm.com/think/topics/ai-data-center

7. https://www.generativevalue.com/p/a-primer-on-ai-data-centers

8. https://www.sunbirddcim.com/glossary/data-center-components

"An AI Data Center is a highly specialized, power-dense physical facility designed specifically to train, deploy, and run artificial intelligence (AI) models, machine learning (ML) algorithms, and generative AI applications." - Term: AI Data Centre

read more
Quote: Clayton Christensen

Quote: Clayton Christensen

“Culture is a way of working together toward common goals that have been followed so frequently and so successfully that people don’t even think about trying to do things another way. If a culture has formed, people will autonomously do what they need to do to be successful.” – Clayton Christensen – Author

Clayton M. Christensen, the renowned Harvard Business School professor and author, offers a piercing definition of culture that underscores its invisible yet commanding influence on human behaviour. Drawn from his 2012 book How Will You Measure Your Life?, which grew out of a 2010 Harvard Business Review essay of the same name, this observation emerges from Christensen’s broader exploration of how personal and professional success hinges on aligning daily actions with enduring principles.1,2 The book, blending business acumen with life lessons, distils decades of research into practical wisdom for leaders, managers, and individuals navigating career and family demands.1,3

Christensen’s Life and Intellectual Journey

Born in 1952 in Salt Lake City, Utah, Christensen rose from humble roots to become one of the most influential thinkers in business strategy. A devout Mormon, he integrated faith with rigorous analysis, viewing truth in science and religion as harmonious.2,4 Educated at Brigham Young University, Oxford as a Rhodes Scholar, and Harvard Business School, he joined Harvard’s faculty in 1992. His breakthrough came with The Innovator’s Dilemma (1997), introducing disruptive innovation – the theory explaining how market-leading firms falter by ignoring low-end or new-market disruptions.5 This framework, applied across industries from steel to smartphones, earned him global acclaim and advisory roles with Intel, Kodak, and others.

Christensen’s later works, including How Will You Measure Your Life?, shift from corporate strategy to personal integrity. Co-authored with James Allworth and Karen Dillon, it warns against marginal compromises – ‘just this once’ temptations – that erode character over time.3 He argued management is ‘the most noble of professions’ when it fosters growth, motivation, and ethical behaviour.2,3 Diagnosed with leukemia in 2017, Christensen died in 2020, leaving a legacy of over 150,000 citations, millions of books sold, and the conviction that the true metrics of life lie in helping others become better people.2,4

The Context of the Quote in Christensen’s Philosophy

In How Will You Measure Your Life?, the quote illuminates how organisations – and lives – succeed through ingrained habits. Christensen posits that culture forms when proven paths to common goals become automatic, enabling autonomous action without constant oversight.1 This ties to his ‘resources, processes, priorities’ (RPP) framework: resources fuel action, processes habitualise it, and priorities direct it.2,4 A strong culture aligns these, creating ‘seamless webs of deserved trust’ that propel success, echoing his warnings against short-termism where leaders chase loud demands over lasting value.3

He contrasts virtuous cultures fostering positive-sum interactions and lucky breaks with toxic ones breeding zero-sum games and isolation.3 For leaders, cultivating culture means framing work to motivators – purpose, progress, relationships – so employees end days fulfilled, much like Christensen’s own ‘good day’ model.2

Leading Theorists on Organisational Culture

Christensen’s views build on foundational theorists who dissected culture’s role in management and leadership.

  • Edgar Schein (1928-2023): In Organizational Culture and Leadership (1985), Schein defined culture as ‘a pattern of shared basic assumptions’ learned through success, mirroring Christensen’s ‘frequently and successfully followed’ paths. Schein’s levels – artefacts, espoused values, basic assumptions – explain why entrenched cultures resist change, much like Christensen’s processes becoming ‘crushing liabilities’.5
  • Charles Handy (1932-2024): The Irish management guru’s Understanding Organizations (1976) classified cultures (power, role, task, person), influencing Christensen’s emphasis on autonomous success. Handy’s Gods of Management archetypes underscore culture’s ritualistic hold.
  • Stephen Covey (1932-2012): In The 7 Habits of Highly Effective People (1989), Covey urged ‘keeping the main thing the main thing’ via principle-centred leadership, aligning with Christensen’s priorities and family-career balance.3
  • Peter Drucker (1909-2005): The ‘father of modern management’ is widely credited with the maxim ‘culture eats strategy for breakfast’, which Christensen echoed by prioritising cultural processes over mere resources.5
  • Charles Munger (1924-2023): Berkshire Hathaway’s vice chairman complemented Christensen, praising ‘the right culture’ as a ‘seamless web of deserved trust’ enabling weak ties and serendipity.3

These thinkers collectively affirm culture as the bedrock of sustained performance, where unconscious alignment trumps enforced compliance. Christensen’s insight, rooted in their legacy, equips leaders to build environments where success feels inevitable.

References

1. https://www.goodreads.com/quotes/7256080-culture-is-a-way-of-working-together-toward-common-goals

2. https://www.toolshero.com/toolsheroes/clayton-christensen/

3. https://www.skmurphy.com/blog/2020/02/16/clayton-christensen-on-how-will-you-measure-your-life/

4. https://quotefancy.com/clayton-m-christensen-quotes/page/2

5. https://www.azquotes.com/author/2851-Clayton_Christensen

6. https://memories.lifeweb360.com/clayton-christensen/a0d52888-de6d-4246-bce9-26d9aaee0aac

“Culture is a way of working together toward common goals that have been followed so frequently and so successfully that people don’t even think about trying to do things another way. If a culture has formed, people will autonomously do what they need to do to be successful.” - Quote: Clayton Christensen

read more
Quote: Jeremy Barnum – Executive VP and CFO of JP Morgan Chase

Quote: Jeremy Barnum – Executive VP and CFO of JP Morgan Chase

“We’re growing. We’re onboarding new clients. In many cases, I’m looking at some of my colleagues on the corporate and investment bank, the growth in new clients comes with lending. That lending is relatively low returning then you eventually get other business. So yes, that’s an example of an investment today that as it matures, has higher returns.” – Jeremy Barnum – Executive VP & CFO of JP Morgan Chase

Jeremy Barnum, Executive Vice President and Chief Financial Officer of JPMorgan Chase, shared this perspective during a strategic framework and firm overview executive Q&A on 24 February 2026. His remarks underscore a core tenet of modern banking: initial client acquisition often demands upfront investments in low-margin activities like lending, which pave the way for higher-return opportunities as relationships mature.

Barnum’s career trajectory exemplifies the blend of analytical rigour and strategic foresight essential for leading one of the world’s largest financial institutions. Joining JPMorgan Chase in 2007 as a managing director in treasury and risk management, he ascended rapidly through roles in investor relations and corporate development. By 2021, he was appointed CFO, succeeding Jennifer Piepszak, who transitioned to co-CEO of the commercial and investment bank. Under Barnum’s stewardship, JPMorgan has navigated volatile markets, including the acquisition of Goldman Sachs’ Apple Card portfolio, which contributed to a $2.2 billion pre-tax credit reserve build in Q4 2025, even as net income reached $13 billion and revenue climbed 7% to $46.8 billion.1

In the broader context of this quote, Barnum was addressing investor concerns about growth dynamics in the corporate and investment banking (CIB) division. New client onboarding frequently begins with lending – a relatively low-return activity due to compressed margins and credit risks – but evolves into a fuller ecosystem of services, including advisory, trading, and capital markets activities that deliver superior profitability over time. This ‘investment today for returns tomorrow’ model aligns with JPMorgan’s 2026 expense projections of $105 billion, driven by ‘structural optimism’ and the imperative to invest in technology, AI, and competitive positioning against fintech challengers like Revolut and SoFi, as well as traditional rivals like Charles Schwab.1
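
The ‘investment today for returns tomorrow’ logic can be sketched as a simple discounted cash-flow comparison; every figure below is hypothetical and purely illustrative, not drawn from JPMorgan’s disclosures:

```python
def npv(cash_flows, rate=0.10):
    """Net present value of year-end cash flows discounted at `rate`."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# Hypothetical client P&L in $m: years 1-2 are thin lending margins in
# both cases; in the 'full ecosystem' case, advisory, trading, and
# capital-markets fees layer on from year 3 as the relationship matures.
lending_only   = [1.0, 1.0, 1.2, 1.2, 1.2]
full_ecosystem = [1.0, 1.0, 3.5, 4.0, 4.5]

print(round(npv(lending_only), 1), round(npv(full_ecosystem), 1))  # → 4.2 9.9
```

The two paths are indistinguishable in years one and two; the low-return lending is justified only by the maturing-relationship cash flows it unlocks, which is exactly Barnum’s point.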

The discussion occurred against a backdrop of heightened competitive and regulatory pressures. Just weeks earlier, in January 2026, Barnum warned of the perils of President Donald Trump’s proposed 10% cap on credit card interest rates, arguing it would curtail credit access for higher-risk borrowers – ‘the people who need it the most’ – and force lenders to scale back operations in a fiercely competitive landscape.2,3 Consumer and community banking revenue rose 6% year-over-year to $19.4 billion, bolstered by 7% growth in card services, yet such policies threaten this momentum. JPMorgan’s tech budget is set to surge by $2 billion to $19.8 billion in 2026, emphasising investments to maintain primacy.5

Leading theorists on relationship banking and client lifecycle management provide intellectual foundations for Barnum’s approach. Jay R. Ritter, a pioneer in IPO and capital-raising research at the University of Florida, has long documented how initial public offerings often underperform in the short term yet give firms access to deeper capital markets over time – a parallel to banking’s lending-to-ecosystem progression. Similarly, Arnoud W.A. Boot, a professor at the University of Amsterdam, argues in work such as ‘Relationship Banking: What Do We Know?’ (2000) that banks derive sustained value from proprietary, borrower-specific information built through ongoing relationships, transforming low-margin entry points into high-return, sticky business.

Robert M. Townsend, the MIT economist behind costly state verification models of lending, extends this by showing how banks mitigate asymmetric information through repeated interactions, justifying upfront lending as a commitment device for future profitability. More recently, Viral V. Acharya of NYU Stern has emphasised in IMF and BIS papers the ‘credit ecosystem’ in which initial low-yield loans signal credibility, unlocking cross-selling in a post-2008 regulatory environment marked by Basel III capital constraints. These frameworks validate JPMorgan’s strategy: lending as the ‘hook’ in a maturing client portfolio amid rising competition and policy risks.

Barnum’s comments at the 24 February 2026 investor update reflect real-time strategic clarity. As JPMorgan projects resilience in consumer and small business segments, this philosophy positions the firm to convert today’s investments into enduring leadership.1,4

References

1. https://fortune.com/2026/01/14/jpmorgan-ceo-cfo-staying-competitive-requires-investment/

2. https://www.businessinsider.com/jpmorgan-warning-on-credit-card-cap-interest-2026-1

3. https://neworleanscitybusiness.com/blog/2026/01/13/jpmorgan-credit-card-rate-cap-warning/

4. https://www.marketscreener.com/news/jpmorgan-cfo-jeremy-barnum-speaks-at-investor-update-ce7e5dd3db8ff425

5. https://www.aol.com/news/jpmorgan-spend-almost-20-billion-000403027.html

"We're growing. We're onboarding new clients. In many cases, I'm looking at some of my colleagues on the corporate and investment bank, the growth in new clients comes with lending. That lending is relatively low returning then you eventually get other business. So yes, that's an example of an investment today that as it matures, has higher returns." - Quote: Jeremy Barnum - Executive VP & CFO of JP Morgan Chase

read more
Term: Edge devices

Term: Edge devices

“Edge devices are physical computing devices located at the ‘edge’ of a network, close to where data is generated or consumed, that run AI algorithms and models locally rather than relying exclusively on a centralised cloud or data center.” – Edge devices

Edge devices integrate edge computing with artificial intelligence, enabling real-time data processing on interconnected hardware such as sensors, Internet of Things (IoT) devices, smartphones, cameras, and industrial equipment. This local execution reduces latency to milliseconds, enhances privacy by retaining data on-device, and alleviates network bandwidth strain from constant cloud transmission.1,4,5

Unlike traditional cloud-based AI, where data travels to remote servers for computation, edge devices perform tasks like predictive analytics, anomaly detection, speech recognition, and machine vision directly at the source. This supports applications in autonomous vehicles, smart factories, healthcare monitoring, retail systems, and wearable technology.2,3,6

Key Characteristics and Benefits

  • Low Latency: Processes data in real time without cloud round-trips, critical for time-sensitive scenarios like defect detection in manufacturing.3,4
  • Bandwidth Efficiency: Reduces data transfer volumes by analysing locally and sending only aggregated insights to the cloud.1,5
  • Enhanced Privacy and Security: Keeps sensitive data on-device, mitigating breach risks during transmission.5,6
  • Offline Capability: Operates without constant internet connectivity, ideal for remote or unreliable networks.6,8
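The bandwidth and privacy points above can be sketched in a few lines. The following is a minimal, hypothetical example (all class names, thresholds, and data are illustrative, not drawn from any cited vendor documentation): an edge node keeps raw sensor readings on-device, flags anomalies locally, and exposes only an aggregated summary for upstream transmission.

```python
import statistics
from collections import deque


class EdgeAnomalyDetector:
    """Illustrative on-device anomaly detector.

    Raw readings never leave the device; only the aggregated
    summary() payload would be sent to the cloud.
    """

    def __init__(self, window=50, z_threshold=3.0):
        self.window = deque(maxlen=window)  # recent readings, kept locally
        self.z_threshold = z_threshold
        self.anomalies = 0

    def ingest(self, reading):
        """Process one sensor reading locally; return True if anomalous."""
        is_anomaly = False
        if len(self.window) >= 10:  # wait for a minimal baseline
            mean = statistics.fmean(self.window)
            # Floor the spread so a flat baseline doesn't flag tiny noise.
            spread = max(statistics.pstdev(self.window), 0.1)
            if abs(reading - mean) / spread > self.z_threshold:
                is_anomaly = True
                self.anomalies += 1
        self.window.append(reading)
        return is_anomaly

    def summary(self):
        """Aggregated insight: the only payload sent upstream."""
        return {
            "samples": len(self.window),
            "mean": round(statistics.fmean(self.window), 3),
            "anomalies": self.anomalies,
        }


detector = EdgeAnomalyDetector()
for value in [10.0] * 40 + [10.2, 9.8, 55.0, 10.1]:
    detector.ingest(value)
print(detector.summary())  # one spike (55.0) flagged; raw readings stay local
```

A real deployment would swap the z-score check for a compact on-device model, but the data-flow pattern, local inference with only aggregates leaving the device, is the same one the bullet points describe.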

Best Related Strategy Theorist: Dr. Andrew Chi-Chih Yao

Dr. Andrew Chi-Chih Yao, a pioneering computer scientist, stands as the most relevant strategy theorist linked to edge devices through his foundational contributions to distributed computing and efficient algorithms, which underpin modern edge AI architectures. Born in Shanghai, China, in 1946, Yao earned a PhD in physics from Harvard University in 1972 before taking a second PhD in computer science from the University of Illinois in 1975. He held faculty positions at MIT, Stanford, and Princeton before joining Tsinghua University in 2004, where he directs the Institute for Interdisciplinary Information Sciences (IIIS).

Yao’s relationship to edge devices stems from his seminal work on the theory of efficient computation. His minimax principle (1977) provides the standard tool for proving performance limits of randomised algorithms, guiding how scarce computational resources are allocated in decentralised systems, a paradigm directly analogous to edge computing’s local processing. His communication complexity model (1979) formalises, and shows how to minimise, the data that distributed nodes must exchange to compute a joint result, mirroring edge devices’ strategy of local inference to cut cloud dependency, a core tenet echoed in edge AI literature.1,7 His garbled-circuit construction for secure two-party computation likewise anticipates today’s concern with privacy-preserving computation on untrusted hardware.
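As a toy illustration of that communication-saving idea (a sketch, not Yao’s actual protocol): two parties can check whether large datasets agree by exchanging a constant-size fingerprint instead of the raw payload, so communication cost stays fixed as the data grows. The scenario and names below are hypothetical.

```python
import hashlib


def fingerprint(data: bytes) -> str:
    """Constant-size (32-byte) digest of an arbitrarily large payload."""
    return hashlib.sha256(data).hexdigest()


# Hypothetical scenario: an edge node checks whether its local dataset
# matches the cloud copy. Each side exchanges only a digest, so the
# communication cost is constant regardless of data size.
local_data = b"sensor-readings-" * 100_000  # ~1.6 MB held on-device
cloud_digest = fingerprint(b"sensor-readings-" * 100_000)

in_sync = fingerprint(local_data) == cloud_digest
print(in_sync)  # True: digests match, so no bulk transfer is needed
```

Hash-based equality checks trade a vanishingly small false-match probability for a dramatic reduction in data exchanged, the same trade-off Yao’s communication complexity framework makes precise for randomised protocols.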

A Turing Award winner (2000) for his contributions to the theory of computation, Yao champions a strategic vision of scalable, efficient computing at the network’s periphery, shaping industries from IoT to AI. The elite undergraduate ‘Yao Class’ he founded at Tsinghua has trained a generation of computer scientists and AI entrepreneurs, further extending his influence on practical deployments of edge technologies.

References

1. https://www.ibm.com/think/topics/edge-ai

2. https://www.micron.com/about/micron-glossary/edge-ai

3. https://zededa.com/glossary/edge-ai-computing/

4. https://www.flexential.com/resources/blog/beginners-guide-ai-edge-computing

5. https://www.splunk.com/en_us/blog/learn/edge-ai.html

6. https://www.f5.com/glossary/what-is-edge-ai

7. https://www.cisco.com/site/us/en/learn/topics/artificial-intelligence/what-is-edge-ai.html

8. https://blogs.nvidia.com/blog/what-is-edge-ai/

"Edge devices are physical computing devices located at the 'edge' of a network, close to where data is generated or consumed, that run AI algorithms and models locally rather than relying exclusively on a centralised cloud or data centre." - Term: Edge devices

read more

Global Advisors | Quantified Strategy Consulting