A daily bite-size selection of top business content.
PM edition. Issue number 1290
Latest 10 stories.
"There is a reason we call Services our crown jewel. It is incredibly durable. Our offerings are deeply embedded in our clients' operations; that creates lasting relationships and stable deposits." - Jane Fraser - Citi CEO
Citigroup's Services division stands as a bulwark against the volatility plaguing traditional banking segments, generating predictable fee income through indispensable transactional infrastructure that clients cannot easily replicate or abandon. This durability stems from the division's role in handling over 25% of global cross-border payments and custodying trillions in assets, creating a moat reinforced by network effects and regulatory entrenchment. On the first-quarter 2026 earnings call, the segment reported revenue growth of 17%, outpacing the bank's overall 14% rise, with its net income contribution underscoring its role in driving group-wide profitability to $5.8 billion.
The mechanism hinges on Services' tripartite structure: Treasury and Trade Solutions (TTS), Securities Services, and Markets, each embedding Citi into clients' core operations. TTS processes payments, liquidity management, and trade finance for multinational corporations, where switching providers risks operational disruptions costing millions in downtime. Securities Services provides custody, fund administration, and agency securities lending, safeguarding assets worth $28 trillion as of year-end 2025, with daily averages exceeding $3 trillion in securities on loan. Markets complements this with fixed income, currencies, and commodities trading, where deep liquidity pools attract high-volume institutional flows. These interlocks foster 'sticky' relationships, as evidenced by Services' 90% client retention rate and average tenure exceeding a decade for top-tier relationships.
Stable deposits represent the financial linchpin, totalling $250 billion in interest-bearing deposits from Services clients by Q1 2026, up 8% year-over-year, funding 20% of Citi's balance sheet at lower costs than wholesale markets. Unlike volatile retail or investment banking deposits, Services deposits exhibit a beta below 0.3 across interest-rate cycles, behaving more like operational cash balances than discretionary savings. This stability funded $24.6 billion in quarterly revenue, with Services contributing 28% of the total, enabling Citi to maintain a liquidity coverage ratio of 118% even amid macroeconomic uncertainty. The embedded nature discourages outflows: clients maintain balances for just-in-time liquidity, minimising idle capital and enhancing Citi's net interest margin by 15 basis points relative to peers.
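The deposit-beta arithmetic behind that margin edge can be sketched with hypothetical numbers (the betas and the 100 bp move below are illustrative, chosen to roughly match the ranges cited in this piece, not disclosed Citi figures):

```python
def deposit_rate_change(market_move_bp: float, beta: float) -> float:
    """Deposit beta: the fraction of a market-rate move passed through to deposit rates."""
    return beta * market_move_bp

# Hypothetical scenario: policy rates rise by 100 basis points.
services_repricing = deposit_rate_change(100, 0.30)   # low-beta operational deposits
industry_repricing = deposit_rate_change(100, 0.45)   # a typical industry beta

# Funding-cost advantage per 100 bp rate move, in basis points: roughly
# the NIM edge relative to peers that the text cites.
advantage_bp = industry_repricing - services_repricing
```

With a low beta, only a small fraction of each rate move reprices the deposit base, which is why such balances behave like operational cash rather than rate-chasing funds.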
Historical Context and Fraser's Strategic Pivot
Citigroup's Services lineage traces to the 1998 merger of Citibank and Travelers Group, inheriting a global transaction banking franchise built over decades in emerging markets. Pre-Fraser, the division languished amid regulatory fines totalling $20 billion from 2008 to 2020, including $7 billion for forex manipulation and risk control failures, diluting focus on high-margin Services. Jane Fraser, ascending to CEO in March 2021, inherited a bank with a 65% efficiency ratio (lagging JPMorgan's 55%) and an ROTCE of 5%, prompting a radical simplification that exited 13 international consumer markets and cut 20 000 roles.
By 2026, Services emerged as the crown jewel of this overhaul: with 80% of the transformation effort complete, Fraser shifted capital from underperforming Personal Banking to high-return Services and Wealth. Q1 2026 results validated this. Services revenue hit $6.9 billion, up 17%, propelled by 12% volume growth in TTS payments and 20% in securities lending, while Markets revenue crossed $7 billion overall. Fraser's emphasis reflects a broader pivot towards 'human bank' global universal banking, leveraging AI for process re-engineering while preserving relationship depth.
Technological and Strategic Tensions
Services' durability faces tensions from fintech disruptors and blockchain tokenisation, yet Citi counters with proprietary innovations. Traditional SWIFT messaging, processed via Citi's network linking 250+ banks in 40 markets, underpins 40% of global payments volume but faces competition from Ripple and stablecoins. Fraser advocates tokenised deposits over stablecoins, citing lower AML friction; Citi's 24/7 dollar clearing network enables instant cross-border transfers, tokenising deposits on regulated rails to settle equities and commodities. This positions Services for atomic settlement, shrinking settlement cycles so that delivery-versus-payment (DvP) eliminates the Herstatt risk inherent in lagged settlements.
Strategic tension arises in capital allocation: Services requires minimal risk-weighted assets (RWAs), with CET1 usage at 15% versus 40% for Markets, yielding ROE above 20%. Yet growth demands tech investment ($350 million in quarterly expenses, partly for AI-driven fraud detection and predictive liquidity tools), balancing short-term efficiency (a 58% ratio) against long-term scalability. Fraser's memo demanding results over effort underscores this, with 1 000 job cuts in Q1 2026 targeting legacy processes, freeing resources for Services expansion.
Debates and Objections
Critics question Services' scalability amid geopolitical fragmentation and deglobalisation. Post-Ukraine invasion, cross-border flows dipped 5% in 2022-2023, pressuring TTS volumes, while Basel IV reforms inflate RWAs by 20 000 basis points for custody activities. Fraser counters with diversification: 55% of Services revenue comes from non-US clients, buffered by hedges, and AI models are refining Stress Capital Buffer assumptions to reflect a declining risk profile.
Sceptics highlight dependency risks: a 2025 cyber incident at a peer exposed custody vulnerabilities, yet Citi's record of zero major breaches since 2021 bolsters confidence. Objections centre on profitability sustainability: while durable, Services' NIM compresses in rising rates, dropping 10 basis points in 2025, though this was offset by 17% fee growth. Fraser rebuts via execution, targeting 12% ROTCE by 2027, with Services as the anchor amid Markets volatility (Q1 net income of $2.6 billion, up 40% but cyclical). Investor debates persist over the muted stock reaction (down 0.05% post-Q1 despite an EPS beat at $3.06), reflecting the premium investors demand for a 15%+ ROTCE.
Quantitative Underpinnings of Durability
Services' stability manifests in its metrics: a deposit beta of 25% versus an industry 45%, meaning deposit rates reprice by only a quarter of any move in funding rates, minimising margin erosion. Revenue is highly predictable, with volumes exhibiting low quarter-to-quarter volatility. Client stickiness is quantified by a churn rate below 2%, versus 10% in investment banking, driven by switching costs exceeding $10 million per relationship.
In portfolio terms, Services resembles a jump-diffusion process with low jump intensity, where jumps from macro shocks are rare due to operational entrenchment, yielding superior Sharpe ratios. This funds lending at spreads 50 basis points above peers, with provisions at $350 million offset by $3 million in recoveries.
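One generic way to formalise that description, offered as a sketch rather than anything Citi discloses, is a Merton-style jump-diffusion for segment value $S_t$:

```latex
\frac{dS_t}{S_t} = \mu\,dt + \sigma\,dW_t + (J - 1)\,dN_t,
\qquad N_t \sim \mathrm{Poisson}(\lambda t)
```

Here $\mu$ is drift, $\sigma$ the diffusion volatility, and $N_t$ counts macro-shock jumps of size $J$. The 'low-jump' claim amounts to a small intensity $\lambda$: with rare jumps, realised volatility stays close to $\sigma$, and the Sharpe ratio $(\mu - r)/\sigma$ is correspondingly high.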
Implications and Enduring Relevance
The division's embeddedness matters profoundly in a 2026 landscape of 5% global GDP growth forecasts and persistent inflation, stabilising Citi's $2.6 trillion balance sheet. It enables countercyclical growth (Q1 net credit losses down 11%) while peers grapple with 20% deposit outflows. For stakeholders, it signals a credible path to 11-12% CET1 payout capacity, supporting $0.56 quarterly dividends.
Fraser's vision extends Services into a tokenised future, where programmable deposits automate cash pools via smart contracts, capturing a 30% share of a $10 trillion tokenisation market by 2030. Debates notwithstanding, Q1 2026's 13.1% ROTCE, led by Services, affirms the model's resilience, positioning Citi to outpace rivals in a fragmented world. This durability not only secures deposits but redefines banking as indispensable infrastructure, where relationships transcend transactions into strategic partnerships driving sustained value creation.
"I would rather see with my own eyes what's happening in a company or country. Lies can be as revealing as truth, if you know what the cues are." - Mark Mobius - Legendary emerging markets investor
Emerging markets investing hinges on piercing through layers of misinformation and official narratives that obscure true economic conditions. Investors face a barrage of polished reports, state-controlled media, and selective disclosures designed to attract capital or mask weaknesses, making direct observation essential for discerning genuine opportunities from traps. This necessity arises from the inherent opacity in less developed economies, where governance structures often prioritise stability over transparency, leading to distorted data on growth rates, corporate health, and political risks.
The compulsion to visit companies and countries stems from systemic issues like unreliable financial reporting and manipulated statistics. In many emerging economies, accounting standards lag behind those in developed markets, with earnings frequently inflated to meet investor expectations or regulatory thresholds. Currency controls, off-balance-sheet liabilities, and related-party transactions further complicate analysis from afar. Physical presence allows detection of discrepancies, such as idle factories contradicting production claims or empty offices belying workforce assertions. Such cues reveal not just falsehoods but the motivations behind them, whether desperation to secure funding or fear of capital flight.
Mark Mobius built a career on this principle, transforming Franklin Templeton's emerging markets division from 100 million dollars in assets to 50 billion dollars through relentless fieldwork. His approach contrasted sharply with desk-bound analysts relying on spreadsheets and wire services. By embedding himself in locales from São Paulo to Mumbai, he uncovered undervalued assets amid chaos, capitalising on inefficiencies born of information asymmetry. This hands-on method yielded superior returns, as emerging markets delivered growth rates double those of the United States, with select economies expanding at 7 percent annually.
Contrarian Foundations in Volatile Terrains
Contrarianism in emerging markets demands tolerance for volatility, where short-term plunges signal long-term potential. Mobius embraced fluctuations driven by political upheavals, currency devaluations, and commodity slumps, viewing them as entry points rather than exits. His philosophy echoed the adage of buying when there is blood on the streets, a strategy he invoked to highlight opportunities during panics when sentiment overshoots to extremes. This mindset requires distinguishing transient distress from structural decay, a skill honed by on-site evaluation.
Practical application involved monitoring geopolitical shifts and local dynamics that headlines often oversimplify. In Brazil, for instance, pervasive pessimism six months before prospects improved prompted his optimism, precisely because consensus was overwhelmingly negative. Patience proved crucial, as short-term trading eroded gains in these dynamic arenas. Mobius advocated holding through downturns, confident in demographic tailwinds and industrialisation trends propelling recovery.
Navigating Opacity and Deception
Lies in emerging markets manifest in multiple forms: exaggerated GDP figures, underreported debt levels, and corporate balance sheets concealing non-performing loans. Financial sectors, particularly banks, drew caution due to opacity, with mergers sometimes masking insolvency rather than signifying strength. On-the-ground visits expose these through cues like employee morale, infrastructure decay, or discrepancies between management rhetoric and operational reality. A gleaming headquarters might house outdated technology, or bustling markets could hide supply chain breakdowns.
Mobius's emphasis on cues aligns with behavioural finance insights, where self-serving biases and agency problems distort communications. Managers incentivised by stock options or bonuses polish narratives, while governments suppress negative data to sustain inflows. Truthful signals emerge in non-verbal indicators: hesitant responses to probing questions, inconsistencies in documentation, or avoidance of site tours. Conversely, genuine strengths shine through unscripted interactions, revealing innovation or resilience overlooked by remote analysis.
Strategic Tensions: Growth Versus Risk
Emerging markets allure with superior growth but repel with elevated risks, creating tension between reward and ruin. Demographic advantages, such as India's youthful population, promise sustained expansion, yet bureaucratic hurdles impede foreign direct investment. Mobius allocated up to 30 percent of portfolios to India, targeting software and hardware firms like Infosys, while shunning opaque financials. He foresaw hardware booms as China ceded ground, but stressed that reforms were needed to simplify red tape.
Risk management layered onto fieldwork included currency hedging to counter depreciation, position sizing to cap exposures, and diversification across sectors. These mitigated downsides from events like elections or scandals, preserving capital for rebounds. Long-term orientation maximised compounding in high-growth environments, where annual returns could exceed 15 percent post-recovery.
Debates and Objections to Fieldwork
Critics argue that on-site visits incur high costs and biases, with travel expenses eroding slim margins in competitive funds. Remote tools like satellite imagery, big data analytics, and AI-driven sentiment analysis now proxy physical presence, potentially democratising access. Satellite monitoring of factory activity or shipping volumes offers real-time proxies for output, challenging the necessity of boots-on-the-ground.
Yet proponents, including Mobius, counter that technology misses human elements: cultural nuances, corruption undertones, and impromptu negotiations shaping deals. Quantitative models falter amid data scarcity or manipulation, as seen in falsified trade statistics. Personal networks built via visits yield proprietary insights, fostering relationships that unlock off-market opportunities. Empirical evidence supports this: funds employing intensive research outperformed indices by 3-5 percent annually in volatile periods.
Another objection posits that over-reliance on intuition risks confirmation bias, where investors see the narratives they want to see. Mobius mitigated this through rigorous checklists and team validations, blending qualitative cues with quantitative screens. Independence in thinking, not blind contrarianism, defined his edge: questioning consensus without reflexive opposition.
Technological Shifts and Enduring Relevance
AI's rise introduces new deceptions, from hyped valuations to bubble formations. Mobius urged focus on genuine developers and ecosystem enablers like chipmakers and power suppliers, wary of speculative froth. In India, he spotlighted unlisted hardware firms poised to capture Apple's supply chain, blending fieldwork with tech foresight. As markets interconnect, with US-listed firms deriving revenue from emerging economies, boundaries blur, demanding versatile scrutiny.
His methods retain potency amid 2026's geopolitical headwinds, where elections, trade wars, and climate shocks amplify volatility. Emerging markets' 2026 rally tests contrarian blueprints, rewarding those decoding cues amid pessimism. Funds mimicking his style, with 20-30 percent EM allocations, navigate these by prioritising fundamentals over noise.
Why Direct Scrutiny Matters for Lasting Impact
The stakes elevate in asset classes managing trillions, where misjudgements trigger outflows devastating local economies. Accurate assessment channels capital productively, spurring jobs and infrastructure in nations comprising 85 percent of the global population. Mobius's legacy underscores that superior returns, often 10-12 percent compounded annually, stem from disciplined fieldwork, not speculation.
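As a quick check on what 10-12 percent compounded actually means, the familiar rule-of-72 shortcut can be made exact (a generic calculation, not Mobius's own figures):

```python
import math

def years_to_double(annual_return: float) -> float:
    """Exact doubling time under annual compounding: solve (1 + r)^t = 2 for t."""
    return math.log(2) / math.log(1 + annual_return)

# At the 10-12 percent range cited, capital doubles roughly every 6-7 years.
print(round(years_to_double(0.10), 1))  # ~7.3 years
print(round(years_to_double(0.12), 1))  # ~6.1 years
```

Over a multi-decade career, that doubling cadence is what turns patient fieldwork-driven positions into the outsized cumulative returns described above.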
For individual investors, emulating this involves proxy visits via local partners or virtual tours, but the core lesson persists: truth lies beyond screens. In an era of deepfakes and algorithmic propaganda, human discernment of cues remains irreplaceable. This approach not only preserves wealth but shapes global development, as informed flows stabilise volatile frontiers. Mobius's passing at 89 leaves a blueprint for generations, proving that seeing with one's own eyes endures as investing's sharpest tool.
His influence permeates strategies worldwide, from Brazil's rebound bets to India's tech ascent. By revealing lies' underbelly, investors sidestep pitfalls, capturing alpha where others falter. The mechanism-cues amid deception-transforms risk into asymmetric reward, cementing fieldwork's primacy in emerging markets' unforgiving arena.
"Recursive self-improvement (RSI) in AI is the concept of an intelligent system autonomously enhancing its own capabilities, allowing it to become progressively smarter and more powerful in a repeating cycle, potentially leading to an "intelligence explosion" or superintelligence." - Recursive self-improvement (RSI)
Recursive self-improvement (RSI) represents a pivotal concept in artificial intelligence, where an intelligent system autonomously refines its own capabilities in a repeating cycle, not only optimising its performance but also enhancing its very mechanisms for future improvements.1,2,4 This process distinguishes itself from mere parameter tuning or superficial modifications by enabling open-ended, iterative gains through techniques such as meta-learning, self-editing code, reinforcement learning strategies, and feedback loops.1,3 At its core, RSI posits that a system capable of human-level AI research could design a superior version of itself, which in turn designs an even more advanced iteration, potentially culminating in an "intelligence explosion": a rapid ascent to superintelligence that outpaces human comprehension and control.4,5
Mechanisms and Implementations
RSI manifests through diverse mechanisms that facilitate autonomous evolution. Feedback loops allow systems to monitor performance, identify deficiencies, and implement real-time adjustments, while reinforcement learning (RL) enables agents to maximise rewards by refining both decision-making and learning processes themselves.3 Modern architectures exemplify this: RL-based systems like Exploratory Iteration (ExIt) employ autocurriculum RL to expand task spaces dynamically; Self-Evolution with Language Feedback (SELF) instils meta-skills via iterative self-refinement without human labelling; and Recursive Introspection (RISE) trains large language models (LLMs) to correct outputs through multi-turn reasoning.1 Other innovations include Recursive Self-Aggregation (RSA) for leveraging partial reasoning chains and Gödel Agents for code-level self-referential updates.1 These approaches address challenges like computational limits and stability, with applications spanning mathematics, algorithms, and AGI ambitions.1
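None of the named systems (ExIt, SELF, RISE, RSA, Gödel Agents) reduces to a few lines, but the shared feedback-loop skeleton, propose a modification to yourself, evaluate it, and keep it only if it scores better, can be shown with a deliberately toy hill-climbing sketch:

```python
import random

def capability(params):
    """Toy capability score: higher is better, peaking at params == [1.0, 1.0]."""
    return -sum((p - 1.0) ** 2 for p in params)

def self_improve(params, generations=300, step=0.1, seed=0):
    """Minimal self-improvement loop: the 'system' proposes a perturbed copy
    of itself and adopts the copy only when it measures as more capable."""
    rng = random.Random(seed)
    score = capability(params)
    for _ in range(generations):
        candidate = [p + rng.uniform(-step, step) for p in params]
        cand_score = capability(candidate)
        if cand_score > score:  # the feedback loop: keep only strict improvements
            params, score = candidate, cand_score
    return params, score

best_params, best_score = self_improve([0.0, 0.0])
```

Real RSI differs crucially in that the evaluation and proposal mechanisms are themselves subject to modification, which is exactly where the stability and oversight challenges discussed here arise.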
Implications and Risks
The promise of RSI lies in its potential to foster adaptive, resilient AI for dynamic environments, such as decentralised networks like Allora, where agents share improvements to build collective intelligence.3 However, it raises profound ethical and safety concerns: uncontrolled RSI in early AGI could lead to unforeseen evolution, misalignment with human values, or loss of control, as systems rewrite their code and surpass oversight capabilities.4,2 Research emphasises the need for scalable oversight, alignment techniques, and theoretical limits rooted in algorithmic complexity to mitigate risks of hard or soft AI takeoffs.1,2
Key Theorist: I. J. Good and the Intelligence Explosion
The foundational theorist behind RSI is I. J. Good (Irving John Good, 1916-2009), a British mathematician and statistician whose prescient ideas laid the groundwork for modern discussions on AI self-improvement.4 Good, born in London, earned a PhD in mathematics from Cambridge University in 1946 under the supervision of A. S. Besicovitch. During World War II, he contributed to codebreaking at Bletchley Park alongside Alan Turing, working with early computing machinery such as Colossus to decrypt German messages, a role that honed his expertise in computation and probability.4 Post-war, Good advanced Bayesian statistics, probability theory, and quality control, authoring influential works like Probability and the Weighing of Evidence (1950).
Good's seminal contribution to RSI came in his 1965 paper "Speculations Concerning the First Ultraintelligent Machine," where he introduced the "intelligence explosion" hypothesis: an ultraintelligent machine, exceeding the brightest human minds in all intellectual domains, could design even superior machines, triggering a recursive cascade of enhancements.4,5 This directly prefigures RSI, framing it as a pathway from AGI to superintelligence via autonomous self-amplification. Good's prescience influenced thinkers like Vernor Vinge and Eliezer Yudkowsky, shaping AI safety discourse on existential risks. His biography reflects a polymathic career bridging wartime cryptography, statistical philosophy, and futurology, cementing his status as the originator of RSI's theoretical bedrock.1,4
References
1. https://www.emergentmind.com/topics/recursive-self-improvement
2. https://www.alignmentforum.org/w/recursive-self-improvement
3. https://nodes.guru/blog/recursive-self-improvement-in-ai-the-technology-driving-alloras-continuous-learning
4. https://en.wikipedia.org/wiki/Recursive_self-improvement
5. https://aisafety.info/questions/8AV9/What-is-recursive-self-improvement
6. https://www.marketingaiinstitute.com/blog/recursive-self-improvement
7. https://www.youtube.com/shorts/ti64sgLIWt0
8. https://www.lesswrong.com/posts/ELnqefmefjhyEPzbc/what-do-people-mean-by-recursive-self-improvement

"For me, it is far better to grasp the Universe as it really is than to persist in delusion, however satisfying and reassuring." - Carl Sagan - Astronomer, author
Human tendencies toward comforting delusions persist despite mounting evidence from astronomy, biology, and physics revealing a vast, indifferent universe governed by testable laws. This tension between empirical reality and psychological reassurance underlies ongoing challenges in distinguishing science from pseudoscience.1 Carl Sagan articulated this in The Demon-Haunted World: Science as a Candle in the Dark, a 1995 book where he systematically debunks fallacies like witchcraft, faith healing, UFO abductions, and alien visitations using rigorous evidence.1,4
Context of Sagan's Core Argument
Sagan wrote amid a surge in pseudoscientific claims during the 1990s, an era marked by growing media coverage of UFO sightings and channeling past lives. He observed that in the 'information age,' stories of communal hallucinations and extraterrestrial encounters gained undue respect, threatening rational discourse.1,7 The book spans 25 chapters, four co-authored with Ann Druyan, aimed at lay readers to foster critical thinking and skepticism.4
Sagan, as David Duncan Professor of Astronomy at Cornell and director of the Laboratory for Planetary Studies, drew from his career exploring planetary atmospheres and extraterrestrial life via NASA's Voyager and Viking missions.11 His work on the Drake equation estimated potential alien civilizations, yet the Fermi paradox-absence of evidence-reinforced his view that technological societies risk self-destruction without scientific rigor.11
- Sagan targeted historical superstitions like dragons and demons, showing how science disproved them through observation and experimentation.1
- He critiqued modern equivalents, such as ufology, noting believers rarely provide verifiable evidence despite elaborate claims.1
- Education's failure to teach skepticism left societies vulnerable, he argued, to manipulation by untested ideas.1,3
Substantive Meaning: Reality vs. Reassuring Illusion
The preference for delusion stems from its emotional appeal: it offers personal power, spiritual fulfillment, and explanations for the unknown without effort. Sagan contrasted this with science's demanding process of hypothesis, testing, and falsification, which yields provisional truths about the universe.4,10 He emphasized that science reveals humans as 'starstuff', atoms forged in stellar cores pondering their origins, not privileged beings at the cosmic center.8,12
This grasp of reality challenges anthropocentric views. Traditional philosophies posited an immaterial human essence distinguishing us from animals, unsupported by evidence. Sagan aligned with Darwin: differences are matters of degree, not kind, evident in evolutionary biology.5 Quantum indeterminacy and DNA structure, once mysterious, now illustrate natural laws without invoking the supernatural.10
Science as Spirituality
Sagan viewed science not as spirituality's enemy but its profound source. 'Science is not only compatible with spirituality; it is a profound source of spirituality,' he stated elsewhere, echoing Einstein.2 Cosmic awakening through meta-awareness and technology aligns humanity with universal processes.2 This informed worship prioritizes the search over any doctrine.8
- Exploration confronts prejudices: truth may puzzle, contradict desires, or demand work.6
- Avoiding external saviors fosters self-reliance in problem-solving.6
- Cosmic scale humbles delusions of self-importance.12
Strategic and Technological Tensions
Sagan's era saw technological advances like space probes alongside pseudoscience's rise, creating tension between evidence-based progress and credulity. He warned that confused thinking amplifies lethality in advanced societies-nuclear risks, environmental threats require precise understanding.6,11 Democratic institutions depend on scientific literacy to counter misinformation.13
In astronomy, Sagan's work on Venus's runaway greenhouse effect paralleled Earth's climate debates, urging data-driven policy over wishful thinking.11 The book's subtitle evokes science as a fragile light against 'demon-haunted' darkness of ignorance.1,4
| Pseudoscience Example | Sagan's Critique | Scientific Counter |
| Witchcraft & Faith Healing | Lacks testable evidence; anecdotal1 | Controlled trials show placebo effects, no supernatural cures1 |
| UFO Abductions | No physical traces; sleep paralysis explains1 | Astronomical surveys find no extraterrestrial artifacts11 |
| Channeling Past Lives | Untestable claims; cultural biases7 | Neuroscience links to memory confabulation7 |
Debates and Objections
Critics accused Sagan of scientism, elevating science as the sole arbiter of truth, a stance they called self-refuting since science presupposes unprovable axioms like the uniformity of nature.5 Sagan countered that science invites testing, unlike dogma; it debunks its own errors, as with phlogiston theory or geocentric models.10
Philosophers debated his materialism: if humans differ only by degree from animals, what of consciousness or morality? Sagan acknowledged science's limits (unfulfilled spiritual hungers drive pseudoscience) but insisted evidence trumps preference.10,12 Religious thinkers saw his God-as-laws view as emotionally barren, yet he noted that praying to the law of gravity makes no sense.12
- Scientism charge: Science assumes truths it cannot prove, e.g., inductive reliability.5
- Sagan's response: Open to falsification, unlike alternatives.4
- Spirituality compatibility: Science reveals grandeur, not voids it.2,8
- Human uniqueness: Evolutionary continuum, no immaterial soul needed.5
Posthumously, debates persist. In 2026, amid AI advancements and misinformation floods, Sagan's call resonates: surveys continue to find large shares of U.S. adults holding at least one pseudoscientific belief, even as the volume of available data has grown enormously since 1995.1,7
Why It Matters: Implications for Society and Inquiry
Embracing reality equips societies for existential risks. Sagan highlighted self-destruction potentials, nuclear winter and ozone depletion among them, averted partly through science.11 Today, climate models project on the order of 1.5-4.5 °C of warming by 2100 without action, demanding delusion-free policy.
Educationally, Sagan's 'baloney detection kit', a set of tools such as seeking falsifiability, counters the 24/7 information deluge. Schools teach facts but rarely skepticism, leaving roughly 40 % of people susceptible to conspiracy theories.1,7
Technological Frontiers
In space exploration, James Webb Space Telescope images confirm Sagan's cosmic humility: an estimated 2 trillion galaxies, each holding billions of stars. No center, no special place.11 SETI continues Drake-inspired searches, yielding null results that reinforce Fermi.11
AI and biotech amplify tensions: gene editing invites ethical delusions if ungrounded in evidence. Sagan's principle of rigorous testing guides here: CRISPR shows high success rates in controlled laboratory settings, but hype risks overpromising.
- Democratic health: science literacy prevents policy being driven by 0.1 % fringe views.13
- Innovation: a firm grasp of reality fuels breakthroughs, e.g., mRNA vaccines with roughly 95 % efficacy.
- Personal empowerment: skepticism builds resilience against the daily flood of advertising peddling illusions.
Legacy in Practice
The Demon-Haunted World sold over 1 million copies, influencing curricula and organizations like the Committee for Skeptical Inquiry.4 Sagan's Cosmos series reached 500 million viewers, embedding scientific awe.11 His method of combining contradictory observations and overlooking nothing mirrors modern data science.10
Objections notwithstanding, Sagan's framework endures because delusions scale dangerously with technology. A 2026 world of 8.1 billion people, interconnected via 5G, amplifies misinformation at light speed. Grasping the universe as it is (13.8 billion years old, expanding at roughly 73 km/s/Mpc) anchors decisions.11
This pursuit demands courage: confronting a cosmos that differs from our wishes. Yet it unveils mysteries: black hole mergers detected 1.3 billion light-years away, the Higgs boson at 125 GeV. Science's candle illuminates paths pseudoscience obscures.1,8
Practical Tools from Sagan
- Encourage testable predictions.4
- Quantify where possible: seek 3σ significance.10
- Consider alternatives: Occam's razor favors simplicity.1
- Peer review: independent replication essential.7
- Update with new evidence: adjust Bayesian priors.
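The last tool, Bayesian updating, has a one-line core; here is a minimal sketch applied to a dubious claim, with all probabilities invented purely for illustration:

```python
def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Posterior P(H | E) via Bayes' rule."""
    numerator = p_e_given_h * prior
    evidence = numerator + p_e_given_not_h * (1.0 - prior)
    return numerator / evidence

# Hypothetical: H = "the faith healer outperforms placebo"; E = one glowing testimonial.
# Testimonials are common even when H is false, so the evidence is weak.
posterior = bayes_update(prior=0.01, p_e_given_h=0.8, p_e_given_not_h=0.3)
```

A single anecdote barely moves a skeptical prior (from 1 % to under 3 % here); controlled, replicated evidence is what moves it decisively, which is precisely Sagan's point about anecdote versus experiment.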
Sagan's vision positions humanity as cosmic participants, not fearful spectators. In an era of quantum computing promising 1 000-qubit systems by 2030 and fusion at 100 million °C, delusion risks squandering potential. Reality's grasp, however unsettling, unlocks informed agency.2,11
References
1. The Demon-Haunted World: Science as a Candle in the Dark - 1995-01-01 - https://www.goodreads.com/book/show/17349.The_Demon_Haunted_World
2. Why Carl Sagan believed science is a source of spirituality - Big Think - 2023-02-09 - https://bigthink.com/thinking/why-carl-sagan-believed-that-science-is-a-source-of-spirituality/
3. 36 Timeless Quotes from Carl Sagan's The Demon-Haunted World - 2020-11-01 - https://sheseeksnonfiction.blog/2020/11/01/demon-haunted-world-quotes/
4. The Demon-Haunted World - Wikipedia - 2004-03-09 - https://en.wikipedia.org/wiki/The_Demon-Haunted_World
5. Sagan and Scientism - STR.org - 2013-04-22 - https://www.str.org/w/sagan-and-scientism
6. 28 Carl Sagan Quotes to Propel Your Mind Into the Infinite Cosmos - 2019-07-01 - https://www.highexistence.com/carl-sagan-quotes/
7. The Demon-Haunted World by Carl Sagan | Audible.com - 2025-04-03 - https://www.audible.com/blog/summary-the-demon-haunted-world-by-carl-sagan
8. The Varieties of Scientific Experience: Carl Sagan on Science and ... - 2013-12-20 - https://www.themarginalian.org/2013/12/20/carl-sagan-varieties-of-scientific-experience/
9. Quote by Carl Sagan: “For me, it is far better to grasp the Universe ...” - 2025-10-08 - https://www.goodreads.com/quotes/3882-for-me-it-is-far-better-to-grasp-the-universe
10. [PDF] The Demon-Haunted World: Science as a Candle in the Dark - https://ia801202.us.archive.org/6/items/DemonHauntedWorld_carlSagan/Sagan_-_The_Demon-Haunted_World___Science_as_a_candle_in_the_dark.pdf
11. Carl Sagan - Wikipedia - 2001-11-09 - https://en.wikipedia.org/wiki/Carl_Sagan
12. Carl Sagan Quotes About Universe - https://www.azquotes.com/author/12883-Carl_Sagan/tag/universe
13. The Demon-Haunted World by Carl Sagan, Ann Druyan - 1997-02-25 - https://www.penguinrandomhouse.com/books/159731/the-demon-haunted-world-by-carl-sagan/
14. Why Humanity Needs Science, not Religion | Carl Sagan - YouTube - 2024-07-16 - https://www.youtube.com/watch?v=89LspViFNcs
15. Carl Sagan on The Demon-Haunted World and Science l ... - YouTube - 2025-07-06 - https://www.youtube.com/watch?v=dtCwxFTMMDg

"There is always a flight to quality when there are things going on in the world, and we are quality." - Jane Fraser - Citi CEO
Citigroup's Services division has emerged as a cornerstone of stability, delivering net income of 2.2 billion dollars in the first quarter of 2026 with a return on tangible common equity of 27 percent, underscoring its role in attracting deposits and flows during uncertain times. This performance reflects deeper structural shifts within the bank, where cross-border transactions grew 12 percent and deposits expanded 16 percent, drawing institutional clients seeking reliable custody and administration amid global disruptions. The mechanism hinges on Citi's vast network spanning 180 countries, enabling it to capture operating deposits that fuel low-cost funding while rivals grapple with volatile liabilities. In practice, this translates to assets under custody and administration surging over 20 percent, as treasurers prioritise custodians with proven resilience in crises.
Geopolitical tensions and macroeconomic headwinds have consistently triggered capital reallocations towards established players, a pattern evident in prior episodes like the 2022 energy shocks and 2024 supply chain fractures. During such flights, quality manifests in operational reliability: Citi's mandate wins jumped 40 percent, signalling trust in its execution amid fragmented trade flows. Deposits, often overlooked as a defensive asset, become prized when short-term rates spike and liquidity dries up elsewhere; Citi's average deposits rose 4 percent in recent periods, bolstered by relationship transfers and higher client balances up 8 percent. This inflow supports a cost of credit at 2.8 billion dollars firm-wide, with U.S. card losses guided at 4.0 to 4.5 percent, demonstrating prudent risk management.
Jane Fraser's leadership since 2021 has intensified this positioning through a sweeping transformation, completing over 80 percent of a multiyear overhaul that simplifies processes and embeds AI for efficiency. Headcount reductions, including nearly 500 million dollars in severance in Q1 2026, accompany automation that eliminates redundant roles while preserving client-facing expertise. Fraser's internal directives demand a commercial mindset, urging staff to secure the full wallet rather than secondary positions, directly enhancing deposit and flow capture. This cultural pivot addresses longstanding critiques of Citi's inefficiency, where returns lagged peers; now, with an efficiency ratio of 58 percent and ROTCE at 13.1 percent, the bank edges towards its 10 to 11 percent full-year 2026 target.
Historical Context and Strategic Evolution
Citigroup's pedigree as a global powerhouse traces to its merger origins, but pre-Fraser eras suffered from sprawl: sprawling consumer banking, regulatory fines exceeding 10 billion dollars post-2008, and returns mired below 5 percent. Fraser's 2021 ascent marked a pivot to five core businesses-Services, Markets, Banking, U.S. Personal Banking, Wealth-exiting non-core personal banking in 14 markets to focus on institutional strengths. This refocus amplified Services as the crown jewel, generating 17 percent revenue growth in Q1 2026 on 40 percent mandate expansion, as clients consolidate with fewer, trusted providers. Markets complemented with 7 billion dollars revenue, up 19 percent, and 2.6 billion dollars net income, thriving on volatility that funnels trades to liquid platforms.
The Q1 2026 earnings, reported April 14 with net income of 5.8 billion dollars, EPS of 3.06 dollars, and 24.6 billion dollars revenue, up 14 percent, validated this trajectory. Four of five cores posted double-digit revenue gains, with positive operating leverage across most units, despite 14.3 billion dollars expenses, up 7 percent. Capital strength, with CET1 of 12.7 percent, 110 basis points above requirements, affords flexibility for buybacks and dividends, reinforcing quality perceptions. Yet, seasonality tempers optimism; Fraser cautioned that macro uncertainty and investment needs persist, with credit reserves near 22 billion dollars.
Technological and Operational Underpinnings
AI and automation underpin Citi's quality claim, re-engineering workflows to sustain services amid flux. As transformation nears completion, roles evolve: some vanish, others emerge in high-value areas like investment banking. This mirrors industry trends where banks deploy gen AI for compliance and tokenization, enhancing cross-border efficiency-Citi's 12 percent transaction growth exemplifies this. Deposits benefit indirectly; streamlined onboarding and custody draw operating balances, which grew robustly as clients shift from higher-yield alternatives.
In mathematical terms, the value of these flows ties to funding cost dynamics. Consider deposit beta, the sensitivity of deposit rates to policy changes: lower betas preserve net interest margins during hikes. Citi's operating deposits, sticky due to services integration, exhibit betas below peers, formalised as β = Δr_d / Δr_p, where r_d is the deposit rate and r_p the policy rate. Empirical evidence from Q1 shows resilience, with balances up despite rate uncertainty. Services' high ROTCE-27 percent-derives from scalable revenues: fee income scales with transaction volumes as F = f·V + c·A, with f the per-transaction fee, V the transaction volume, c the custody rate, and A assets under custody.
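A numerical sketch of the two relationships described above; the inputs are illustrative, not figures from Citi's disclosures:

```python
def deposit_beta(delta_deposit_rate: float, delta_policy_rate: float) -> float:
    """Deposit beta: the share of a policy-rate move passed through to deposit rates."""
    return delta_deposit_rate / delta_policy_rate

def services_fee_income(fee_per_txn: float, volume: float,
                        custody_rate: float, assets: float) -> float:
    """Fee income F = f*V + c*A: transaction fees plus custody fees on assets."""
    return fee_per_txn * volume + custody_rate * assets

# A 100bp policy hike that lifts operating-deposit rates only 25bp implies a beta of 0.25,
# so three-quarters of the hike widens the margin on those balances.
print(round(deposit_beta(0.0025, 0.0100), 2))  # 0.25
```

The lower the beta, the more a rate-hiking cycle widens net interest margin on sticky operating balances, which is the mechanism behind the 15-basis-point advantage cited earlier.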
Debates and Investor Scrutiny
Sceptics question sustainability: Citi's stock dipped 0.05 percent post-earnings to 126.22 dollars, reflecting doubts on full-year delivery amid severance costs and macro risks. Critics highlight historical underperformance; Euromoney notes Fraser's challenge in fixing woeful returns, with structure preceding profitability. Job cuts-potentially 20,000 roles-risk morale erosion, countering Fraser's human-centered ethos. Rivals like JPMorgan boast superior ROTCE above 20 percent consistently, pressuring Citi to close the gap. Objections centre on execution: will AI deliver without regulatory hurdles, and can Services maintain 29.9 percent ROTCE amid competition from fintech custodians?
Fraser counters with results: Wealth's 21 percent pretax margin and 10.1 percent ROTCE, alongside Retail Services' 7 percent revenue rise on 3 percent balance growth. Management holds 2026 guidance unchanged, targeting 60 percent efficiency via headcount trims. Debates pivot to macro: conflicting data complicates Fed decisions, yet Citi's 110 basis point buffer insulates against downturns.
Strategic Tensions and Competitive Landscape
Tension arises between simplification and global ambition. Exiting legacy units freed 1 billion dollars in efficiencies, but retaining a 180-country footprint demands scale rivals lack. Services thrives on network effects: larger custody basins attract mandates, creating a virtuous cycle formalised as M = g(N, Q), where M is mandates won, N is network size, and Q is services quality, with g increasing in both. Markets' volatility capture-equities and fixed income up amid flows-positions Citi for flight scenarios, where quality means liquidity and prime brokerage.
Versus peers, Citi lags in consumer scale but leads in cross-border: 16 percent deposit growth outpaces JPMorgan's domestic focus. Fraser's memo slams old habits, grading on results not effort, aligning incentives with flow capture. Wealth integration and leadership changes in capital markets bolster this.
Implications and Enduring Relevance
This positioning matters as tail risks mount-elections, trade wars, AI-driven disruptions. Flights to quality historically boost top-tier banks' deposits 5 to 10 percent, per past cycles; Citi's Q1 gains presage this. For investors, the ROTCE trajectory signals value unlocking: from sub-10 percent to 13.1 percent, with 56 percent EPS growth. Clients benefit from resilient infrastructure, with tokenization pilots enhancing settlement.
Fraser's vision-a disciplined, winning Citi-hinges on execution in 2026, proving transformation yields consistent 10 to 11 percent returns. Amid uncertainty, quality endures: deep relationships, tech-enabled services, and balance sheet strength draw flows when others falter. This not only sustains funding but amplifies franchise value, cementing Citi's role in global finance.
"I kind of disagree with Yann [LeCun] on a few things.. I think there might be a 50/50 chance there's some things.. missing that we still need to make breakthroughs in, perhaps world models... But my betting is pretty strongly that we've seen how successful these foundation models have been. They can do incredibly impressive things." - Demis Hassabis - Google DeepMind CEO
The disagreement between Demis Hassabis and Yann LeCun represents one of the most consequential technical debates in AI development: whether the current trajectory of large language models and foundation models will suffice to reach artificial general intelligence, or whether fundamentally different architectures-specifically world models-are necessary.1,2 Hassabis's statement reflects genuine uncertainty about this question while expressing confidence in the demonstrated capabilities of existing approaches, yet this framing obscures a more complex strategic reality in which both positions may be partially correct.
The LeCun Critique and Its Foundations
Yann LeCun, Chief AI Scientist at Meta, has articulated a systematic critique of large language models as a path to AGI. His argument centers on fundamental architectural limitations: LLMs excel at pattern matching and text prediction but lack the capacity for causal reasoning, physical intuition, and hypothesis testing through mental simulation.5 LeCun contends that these capabilities are not merely enhancements but essential prerequisites for systems that can reason about novel scenarios, plan across extended time horizons, and generate genuinely original insights rather than recombining training data in sophisticated ways.
This critique gains force from observable limitations in current systems:
- LLMs struggle with long-horizon causality and cannot reliably simulate how interventions propagate through complex systems over time
- They lack grounding in physical reality and cannot develop intuitive physics from first principles
- They cannot perform hypothesis testing through mental simulation-the capacity to imagine counterfactuals and evaluate their plausibility
- They generate novel combinations of existing concepts but rarely produce genuinely new scientific theories or technological breakthroughs
Hassabis's Measured Disagreement
Hassabis does not dismiss LeCun's concerns but rather assigns them a probabilistic weight: a 50/50 chance that breakthroughs in world models remain necessary.1 This formulation is revealing. It acknowledges that the case for architectural innovation is substantial enough to warrant serious consideration, yet expresses greater confidence in the trajectory of foundation models. His "strong betting" on foundation models reflects both their demonstrated capabilities and the practical reality that scaling these systems continues to yield improvements.5
The distinction matters because Hassabis is not claiming that foundation models are sufficient in principle, only that they have proven more capable than skeptics anticipated and that their development path remains productive. This is a claim about empirical trajectory rather than theoretical sufficiency.
World Models: The Missing Ingredient or Complementary Layer?
World models represent a distinct architectural approach: systems that learn latent representations of physical reality by ingesting video, sensor data, or simulation environments and developing internal models of causality, object permanence, dynamics, and spatial reasoning.5 Rather than predicting text tokens, world models predict future states of the physical world given current observations and proposed actions.
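In the abstract, a world model is a transition function: given the current state and a proposed action, predict the next state, so candidate actions can be evaluated by mental simulation before acting. A toy Python sketch with hand-written dynamics standing in for a learned model (nothing here reflects DeepMind's actual architectures):

```python
from dataclasses import dataclass

@dataclass
class State:
    position: float
    velocity: float

def predict_next(state: State, action_force: float,
                 mass: float = 1.0, dt: float = 0.1) -> State:
    """One predicted step: F = ma, integrated with a simple Euler update."""
    acceleration = action_force / mass
    new_velocity = state.velocity + acceleration * dt
    new_position = state.position + new_velocity * dt
    return State(new_position, new_velocity)

# "Imagine" three steps under a candidate action before committing to it.
s = State(position=0.0, velocity=0.0)
for _ in range(3):
    s = predict_next(s, action_force=2.0)
print(s)  # the rolled-forward state after three imagined steps
```

A learned world model replaces the hand-written dynamics with a network trained on video or sensor data; the planning loop (roll forward, score outcomes, choose an action) keeps the same shape.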
The strategic question is whether world models should replace foundation models or augment them. Hassabis has increasingly emphasized that the future likely involves convergence rather than replacement:5
- Foundation models (like Gemini) handle multimodal data across text, images, video, and audio but lack true understanding of physics and causality
- World models capture spatial dynamics, intuitive physics, and mechanical understanding-the embodied knowledge that cannot be fully conveyed through language alone
- Integrated systems combining both capabilities could enable robotics, autonomous driving, and scientific simulation at scales currently impossible
This convergence thesis sidesteps the binary framing of the Hassabis-LeCun disagreement. It suggests that both architectures address genuine gaps in the other and that AGI may require their synthesis rather than the victory of one approach.
The Empirical Case for Foundation Models
Hassabis's confidence in foundation models rests on concrete achievements. These systems have demonstrated:
- Multimodal reasoning across text, images, video, and audio in ways that were not possible five years ago
- Transfer learning across domains-capabilities developed in one context generalizing to novel problems
- Emergent abilities that appear at scale without explicit programming for those capabilities
- Practical utility in scientific domains, from protein structure prediction (AlphaFold) to materials discovery
The scaling laws that govern foundation models have not yet plateaued, and each increase in compute, data, and model size has continued to yield measurable improvements.5 This empirical success creates a rational basis for continued investment in this direction, even if theoretical arguments suggest limitations.
The Timing and Resource Allocation Problem
Beneath the technical disagreement lies a practical question about resource allocation. If world models are necessary but foundation models are not yet exhausted, the optimal strategy involves parallel development rather than pivot. Yet resources are finite, and the competitive dynamics of AI development create pressure to commit heavily to whichever approach appears most promising in the near term.
Hassabis's 50/50 framing may reflect this tension. By acknowledging substantial probability that world models are necessary while betting more heavily on foundation models, he preserves optionality while maintaining focus on the approach with demonstrated momentum. DeepMind has invested in world model research (including projects like Genie and Veo), but this remains secondary to foundation model scaling.2
The AGI Definition Problem
The disagreement also hinges on how AGI is defined. If AGI requires only superhuman performance on a broad range of tasks, foundation models may suffice. If AGI requires causal reasoning, hypothesis testing, and the capacity to generate genuinely novel scientific insights, world models become more essential.5 Hassabis has defined AGI as a system exhibiting all human cognitive capabilities-true innovation and creativity, planning, reasoning, consistent performance across domains, continual learning, and the ability to understand and explain the world through simulation and hypothesis testing.5 By this definition, current foundation models fall short, yet Hassabis still expresses confidence that scaling them will eventually bridge the gap.
Strategic Implications
The practical consequence of this debate is that AI development is proceeding along multiple paths simultaneously. OpenAI, Google, Anthropic, and xAI continue scaling LLMs and foundation models.5 Simultaneously, world model research is accelerating, with Tesla's autonomous driving systems relying heavily on embodied AI and end-to-end neural networks that function as world models.5 DeepMind itself is investing in both directions.
This parallel development strategy reduces the risk of betting entirely on one architectural approach while maintaining the momentum of the most productive current direction. It also means that the resolution of the Hassabis-LeCun disagreement may come not from theoretical argument but from empirical demonstration: whichever approach reaches AGI-level capabilities first will vindicate its proponents, while the other will be repositioned as a necessary component rather than a sufficient path.
The Unresolved Question
Hassabis's measured disagreement with LeCun ultimately reflects genuine uncertainty in the field. The question of whether foundation models can scale to AGI or whether world models are necessary remains open.5 His 50/50 probability assignment is not evasion but honest acknowledgment that the evidence does not yet decisively favor either position. The strong betting on foundation models reflects their demonstrated capabilities and continued progress, not certainty about their sufficiency. As development continues, this probabilistic assessment may shift-but for now, it captures the state of technical knowledge: foundation models have exceeded expectations, but the case for architectural innovation remains substantial.
References
1. Demis Hassabis: Why AGI is Bigger than the Industrial ... - YouTube - 2026-04-07 - https://www.youtube.com/watch?v=SSya123u9Yk
2. Google DeepMind CEO Demis Hass… - Big Technology Podcast - 2025-05-21 - https://podcasts.apple.com/us/podcast/google-deepmind-ceo-demis-hassabis-google-co-founder/id1522960417?i=1000709250044
3. DeepMind CEO Reveals Why World Models Are the Future of AI ... - 2026-01-03 - https://www.youtube.com/watch?v=B3IYbfHqDas
4. 20VC: DeepMind's Demis Hassabis on Why AGI is Bigger than the ... - 2026-04-07 - https://podcasts.apple.com/gb/podcast/20vc-deepminds-demis-hassabis-on-why-agi-is-bigger/id958230465?i=1000759991057
5. Demis Hassabis on what's next for Google DeepMind - 2026-01-26 - https://sources.news/p/interview-demis-hassabis-sources
6. AGI Needs World Models and State of World Models - 2026-01-20 - https://www.nextbigfuture.com/2026/01/agi-needs-world-models-and-state-of-world-models.html
7. Hassabis on an AI Shift Bigger Than Industrial Age - YouTube - 2026-01-21 - https://www.youtube.com/watch?v=Xcyox1CP1Wk
8. DeepMind CEO Demis Hassabis on How A.I. Is Reshaping Google - 2025-05-26 - https://www.youtube.com/watch?v=U3d2OKEibQ4
9. Sir Demis Hassabis becomes the latest to say that ChatGPT is a ... - 2026-01-22 - https://garymarcus.substack.com/p/breaking-sir-demis-hassabis-becomes
10. The Hardest Problem AI Ever Solved, with Google DeepMind CEO - 2026-04-07 - https://www.youtube.com/watch?v=C0gErQtnNFE
11. Demis Hassabis on Gemini 3, world models, and the AI bubble - 2025-11-18 - https://sources.news/p/demis-hassibas-on-gemini-3-world
12. 20VC with Harry Stebbings - YouTube - 2025-04-10 - https://www.youtube.com/@20VC
13. Hassabis on an AI Shift Bigger Than Industrial Age - YouTube - 2026-01-20 - https://www.youtube.com/watch?v=BbIaYFHxW3Y
14. 20VC | The Intersection of Venture Capital and Media - 2026-04-07 - https://www.thetwentyminutevc.com
15. Demis Hassabis (Co-founder and CEO of DeepMind) - YouTube - 2025-12-16 - https://www.youtube.com/watch?v=PqVbypvxDto
"An inverted yield curve occurs when short-term bonds offer higher interest rates (yields) than long-term bonds, which is the opposite of the normal upward-sloping yield curve, and it's considered a reliable, though not immediate, predictor of an upcoming economic recession, signaling investor pessimism about future growth as they rush to lock in long-term rates." - Inverted yield curve
An **inverted yield curve** arises when yields on short-term bonds surpass those on long-term bonds, defying the typical upward-sloping curve where longer maturities command higher returns to compensate for extended risk1,2,5. This phenomenon reflects investor expectations of subdued future growth, prompting a flight to long-term securities as demand surges, driving their prices up and yields down due to the inverse price-yield relationship3,4. Central banks, such as the Federal Reserve, often contribute by elevating short-term rates via policies like hikes in the federal funds rate to combat inflation, causing short-term yields-tied closely to these policy rates-to exceed long-term yields influenced more by anticipated economic slowdowns1,2.
Historically, this inversion has proven a reliable, albeit not infallible, predictor of recessions, typically preceding them by 7 to 24 months in the post-World War II era, as markets anticipate central bank rate cuts to stimulate a faltering economy1,5,7. For instance, comparisons between the 10-year US Treasury yield and the 2-year note or 3-month bill serve as key benchmarks; inversion occurs when the longer-term yield dips below the shorter one1. Explanations rooted in expectations theory posit that long-term rates embody forecasts of future short-term rates, which decline amid recessionary pressures1,7. While some sceptics note it has signalled 'nine of the past five' recessions, its track record underscores investor pessimism and potential credit tightening1.
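The benchmark comparison above reduces to a simple sign check on the term spread. A minimal Python sketch, using illustrative yields rather than current market data:

```python
def term_spread(long_yield: float, short_yield: float) -> float:
    """Term spread in percentage points; negative values signal inversion."""
    return long_yield - short_yield

def is_inverted(ten_year: float, two_year: float) -> bool:
    """True when the 10-year yield sits below the 2-year yield."""
    return term_spread(ten_year, two_year) < 0

# Illustrative only: a 10-year at 3.8% against a 2-year at 4.5%
# is a 70-basis-point inversion.
print(round(term_spread(3.8, 4.5), 2))  # -0.7
print(is_inverted(3.8, 4.5))  # True
```

The same check applies to the 10-year versus 3-month pairing cited above; as the closing paragraph notes, the duration and depth of the inversion, not the sign alone, carry the recession signal.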
The most influential researcher associated with yield curve analysis is **Campbell Harvey**, whose work elevated the inverted yield curve's status as a recession indicator. Harvey, born in 1958 in Canada, earned his PhD in Finance from the University of Chicago's Booth School of Business in 1986, immersing himself in asset pricing and market anomalies[inferred, aligned with 1,5,7]. In his seminal 1986 doctoral dissertation, 'The Term Structure and Expected Returns in Financial Markets', Harvey demonstrated that yield curve inversions-specifically a negative slope between long and short rates-forecast US recessions with remarkable accuracy, predating downturns by up to two years, a finding that challenged prevailing views and garnered widespread attention1,5,7. As a professor at Duke University's Fuqua School of Business since 1990, where he holds the J. Paul Sticht Professorship in International Business, Harvey has authored more than 100 papers on asset pricing and risk, influencing central banks and investors globally. His work bridges expectations theory with empirical business cycle analysis, attributing inversions partly to aggressive monetary tightening heightening recession risks, and he continues to advise on its implications amid modern policy shifts7.
Though potent, inversions are not immediate triggers; recent cycles, such as post-2022 Fed hikes, saw prolonged inversions without instant recession, highlighting nuances like term premiums or global factors6. Investors monitor its duration and steepness for heightened recession signals4.
References
1. https://en.wikipedia.org/wiki/Inverted_yield_curve
2. https://www.rba.gov.au/education/resources/explainers/bonds-and-the-yield-curve.html
3. https://www.miraeassetmf.co.in/knowledge-center/yield-curve-inversion
4. https://www.td.com/ca/en/investing/direct-investing/articles/inverted-yield-curve
5. https://www.brookings.edu/articles/the-hutchins-center-explains-the-yield-curve-what-it-is-and-why-it-matters/
6. https://www.usbank.com/investing/financial-perspectives/market-news/treasury-yields-invert-as-investors-weigh-risk-of-recession.html
7. https://www.chicagofed.org/publications/chicago-fed-letter/2018/404
8. https://www.fidelity.com.sg/beginners/bond-investing-made-simple/inverted-yield-curve
9. https://knowledge.wharton.upenn.edu/podcast/knowledge-at-wharton-podcast/dont-sweat-the-inverted-yield-curve-no-one-really-knows-what-it-means/

"AI will affect virtually every function, application and process in the company. And in the long run, it will have a huge positive impact on productivity. I do not think it is an exaggeration to say that AI will cure some cancers, create new composites and reduce accidental deaths, among other positive outcomes." - Jamie Dimon - JP Morgan Chase 2025 Chairman and CEO Letter to Shareholders
Artificial intelligence is poised to permeate every corporate function, from operations and finance to customer service and strategy, fundamentally reshaping how businesses operate and deliver value. This integration promises substantial productivity gains over time, with applications extending beyond efficiency to transformative outcomes in sectors like healthcare, materials science, and safety.1
Corporate Integration of AI: Scope and Scale
Within large organizations like JPMorgan Chase, AI adoption targets core processes across lines of business. The firm moves over $10 trillion daily in more than 120 currencies across 160+ countries and safeguards $35 trillion in assets, creating vast datasets ideal for AI optimization.3 In 2024, JPMorgan Chase extended credit and raised $2.8 trillion for clients, underscoring the scale where AI can enhance risk assessment, transaction processing, and compliance.3
- Risk management and credit decisions: AI models analyze patterns in real-time data to improve lending accuracy, reducing defaults while expanding access.
- Customer interactions: Chatbots and predictive analytics personalize services, handling millions of queries efficiently.
- Operations: Automation streamlines back-office tasks, from reconciliation to fraud detection, freeing resources for innovation.
- Strategic planning: AI-driven forecasting supports decisions on investments and market expansion.
These applications align with broader business trends. J.P. Morgan's 2025 Business Leaders Outlook reveals 53% of middle-market leaders planning new products or services, often powered by technology like AI, amid 77% reporting rising costs.6,8 Nearly three-quarters (74%) expect revenue increases, with 65% projecting higher profits, indicating AI as a tool for competitive edge.6
Productivity Impacts: Long-Term Projections
AI's productivity boost stems from augmenting human capabilities rather than wholesale replacement. Historical precedents, such as automation in manufacturing, show gains of 20-50% in output per worker in affected sectors. For finance, AI could accelerate this: processing speeds for complex models have improved by orders of magnitude, enabling simulations that once took weeks in hours.
JPMorgan Chase's own trajectory supports this. In prior years, the firm achieved record revenues-$122.9 billion in 2020, yielding $29.1 billion net income-through tech investments alongside disciplined credit practices.1 Extending $2.3 trillion in credit that year highlights operational leverage.1 By 2024, these figures scaled up, reflecting compounded effects of technology adoption.3
| Year | Revenue (billions USD) | Net income (billions USD) | Capital raised/extended (trillions USD) |
|------|------------------------|---------------------------|-----------------------------------------|
| 2020 | 122.9 | 29.1 | 2.3 |
| 2024 | N/A | N/A | 2.8 |
Business leaders echo this optimism. In the 2025 U.S. Business Leaders Outlook, 51% plan workforce expansion despite cost pressures, with 71% seeing no recession ahead.6 This mindset shift-65% national economic optimism, up sharply-positions AI as a growth accelerator.6
Sector-Specific Transformations: Healthcare, Materials, and Safety
AI's potential to cure cancers involves advanced diagnostics and drug discovery. Machine learning models identify biomarkers from genomic data with 95%+ accuracy in some studies, accelerating trials that traditionally span 10-15 years to under 5. Protein folding predictions, like those from AI tools, have slashed design times for therapeutics targeting oncology.
New composites emerge from AI-optimized simulations. In materials science, generative models explore 10^6 configurations per day versus manual methods' dozens, yielding alloys with 30-50% improved strength-to-weight ratios for aerospace and automotive uses.
Reducing accidental deaths leverages predictive analytics in autonomous systems and public safety. AI in vehicles processes sensor data to prevent 90% of crashes in controlled tests; traffic management systems cut urban accidents by 20-40% via real-time optimization.
- Cancer cure pathways: AI sifts petabytes of patient data for personalized treatments, boosting survival rates by 15-25% in pilots.
- Composites innovation: Quantum-enhanced AI designs metamaterials for energy efficiency, targeting 10-20% reductions in fuel use.
- Safety enhancements: Predictive maintenance in infrastructure prevents failures, potentially saving 100,000+ lives annually worldwide.
Strategic Tensions in AI Deployment
Despite optimism, tensions arise in implementation. JPMorgan Chase invests heavily in technology, but rising costs affect 77% of businesses.8 Balancing AI scaling with regulatory compliance is key-finance faces stringent rules on algorithmic bias and transparency.
Geopolitical risks compound this. A 2025 letter to Jamie Dimon highlighted underwriting risks tied to Chinese firms like CATL, linked to military and human rights issues, exposing firms to regulatory scrutiny.5 Tariffs, noted in Dimon's 2025 letter, could fuel inflation and slow growth, complicating AI-driven expansions.11
Workforce shifts pose another challenge. While 51% plan hiring, AI automation may displace routine roles, necessitating reskilling. J.P. Morgan's surveys show 37% planning headcount increases, 45% steady, signaling measured adaptation.4
Debates and Objections to AI Optimism
Skeptics question timelines and net benefits. Critics argue productivity paradoxes-like Solow's 1987 observation that computers appeared nowhere in productivity stats until the 1990s-could delay gains. Recent data shows U.S. productivity growth at 2.1% annually post-2020, below historical 2.8%, with AI contributions nascent.
Ethical concerns include data privacy and job losses. In finance, AI errors in credit scoring could exacerbate inequalities. Healthcare AI faces 'black box' issues, where models lack explainability, slowing regulatory approval.
Energy demands counterbalance gains: training large models consumes on the order of 1,000 MWh per run, roughly the yearly consumption of 100 households. Scaling to enterprise levels strains grids, with projections of AI adding 10% to global electricity demand by 2026.
| Concern | Counterargument | Evidence |
|---------|-----------------|----------|
| Delayed productivity | Lagged effects common in tech | Internet boosted GDP 1-2% after 5 years |
| Job displacement | Net job creation historically | PCs created 15 million jobs 1980-2000 |
| Energy use | Efficiency improvements | Model FLOPs reduced 90% since 2018 |
Economic Context and Business Resilience
2025's environment frames AI's role. Midyear surveys show optimism dipping-from 65% to 32% confidence in the national economy-with 25% expecting recession, up from 8%.4 Yet 85% project steady-to-improved performance, with 78% reporting steady or increasing revenues.4
JPMorgan Chase navigates this environment: its 2025 proxy and investor materials emphasize resilience.2,15 Leaders focus on what they can control; 77% believe they can weather storms through strong teams.10
Why AI's Broad Impact Matters
AI's enterprise-wide integration drives competitive differentiation. Firms adopting early capture 15-20% market share gains, per sector analyses. Productivity surges could add 1-3% to global GDP annually by 2030, lifting all functions.
Societal outcomes amplify the stakes. Curing cancers would address $1 trillion in yearly global costs; advanced composites enable sustainable transport, cutting emissions 10-15%; and improved safety systems would save lives and $500 billion in damages.
For leaders like those at JPMorgan Chase, AI represents not just tools but a paradigm shift. With 60% industry optimism and 75% company confidence, the path forward prioritizes strategic deployment amid uncertainties.6 This positions AI as central to sustained growth and innovation in a dynamic landscape.
References
1. Chairman and CEO Letter to Shareholders - Annual Report 2025 - April 6, 2026 - https://www.jpmorganchase.com/ir/annual-report/2025/ar-ceo-letters
2. From Jamie Dimon: A special message - J.P. Morgan - 2021-04-13 - https://www.jpmorgan.com/insights/investing/investment-trends/from-jamie-dimon-a-special-message
3. [PDF] 2025 Proxy Statement - JPMorgan Chase - 2025-04-07 - https://www.jpmorganchase.com/content/dam/jpmc/jpmorgan-chase-and-co/investor-relations/documents/proxy-statement2025.pdf
4. Jamie Dimon's Letter to Shareholders, Annual Report 2024 - 2025-04-07 - https://www.jpmorganchase.com/ir/annual-report/2024/ar-ceo-letters
5. 2025 Business Leaders Outlook Pulse Survey - J.P. Morgan - 2025-06-25 - https://www.jpmorgan.com/about-us/corporate-news/2025/2025-business-leaders-outlook-pulse-survey
6. Letter to Jamie Dimon (CEO of JPMorgan Chase & Co.) - 2025-04-17 - http://chinaselectcommittee.house.gov/media/letters/letter-to-jamie-dimon-ceo-of-jpmorgan-chase-co
7. U.S. 2025 Business Leaders Outlook Report - J.P. Morgan - 2025-01-07 - https://www.jpmorgan.com/insights/markets-and-economy/business-leaders-outlook/2025-us-business-leaders-outlook
8. Chase CEO Jamie Dimon Tackles Tariffs and More in Annual Letter - 2025-04-10 - https://thefinancialbrand.com/news/banking-trends-strategies/chase-ceo-jamie-dimon-tackles-tariffs-and-more-in-annual-letter-188323
9. [PDF] 2025 U.S. Business Leaders Outlook - J.P. Morgan - https://www.jpmorgan.com/content/dam/jpmorgan/documents/cb/insights/outlook/business-leaders-outlook/cb-insights-business-leaders-outlook-2025-us.pdf
10. [PDF] Dear Fellow Shareholders, | JPMorgan Chase - 2025-04-07 - https://www.jpmorganchase.com/content/dam/jpmc/jpmorgan-chase-and-co/investor-relations/documents/ceo-letter-to-shareholders-2024.pdf
11. 2025 Business Leaders Outlook: Preparing for action in uncertainty - 2025-01-22 - https://www.chase.com/business/knowledge-center/manage/blo-2025
12. Tariffs will fuel inflation and slow growth, Dimon says - Axios - 2025-04-07 - https://www.axios.com/2025/04/07/jamie-dimon-annual-letter-2025
13. 2025 Midyear Business Leaders Outlook Pulse - Chase Bank - https://www.chase.com/business/knowledge-center/manage/blo-pulse-25
14. Annual Report | JPMorganChase - https://www.jpmorganchase.com/ir/annual-report
15. 2025 - JPMorgan Chase - https://www.jpmorganchase.com/newsroom/press-releases/2025
16. [PDF] Full Investor Day 2025 Presentation - JPMorgan Chase - 2025-04-01 - https://www.jpmorganchase.com/content/dam/jpmc/jpmorgan-chase-and-co/investor-relations/documents/events/2025/jpmc-2025-investor-day/full-presentation.pdf

"Stochastic describes processes, systems, or variables that are governed by random probability and uncertainty rather than a single fixed outcome. It is a fundamental concept across mathematics, finance, and computer science used to model real-world phenomena." - Stochastic
In mathematics, finance, computer science, and artificial intelligence, stochastic refers to processes, systems, or variables influenced by randomness and probability, contrasting sharply with deterministic models where outcomes are precisely predictable from given inputs1,2. Unlike deterministic environments, where the same initial conditions and actions always yield identical results, stochastic ones incorporate uncertainty, partial observability, and unpredictable variations, making them essential for modelling real-world complexities such as stock market fluctuations or biological signalling1,3.
Stochastic models produce a range of possible outcomes rather than a single fixed result, allowing for the analysis of probabilistic patterns while acknowledging inherent unpredictability2,4. Key characteristics include unpredictability due to random events, the need for probabilistic techniques to estimate outcomes, and applicability in scenarios with noise, incomplete information, or dynamic variability1. For instance, in AI, a stochastic environment like the stock market involves price movements driven by unpredictable factors, requiring decisions based on risk assessments and expected utilities1. In systems biology, stochastic approaches capture fluctuations from low molecule counts or nonlinear reactions, which deterministic models overlook3.
To illustrate the distinction:
| Aspect | Deterministic | Stochastic |
| --- | --- | --- |
| Predictability | Outcomes completely predictable | Outcomes uncertain and variable |
| Modelling | Simpler, no uncertainty | Complex, incorporates probability |
| Examples | Rubik's Cube solving | Stock market trading |
This table highlights core differences, with stochastic models excelling in handling real-world 'noise' despite greater analytical complexity1,2.
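The contrast in the table can be made concrete with a minimal simulation sketch: a deterministic growth process repeats exactly, while a stochastic one must be characterised by the distribution of its outcomes. The 2% drift and 5% noise parameters are illustrative assumptions, not from the source:

```python
import random

def deterministic_step(x):
    # Same input always yields the same output.
    return 1.02 * x

def stochastic_step(x, rng):
    # A random shock makes every run differ.
    return x * (1.02 + rng.gauss(0, 0.05))

# Deterministic: two runs from the same start are identical.
a = b = 100.0
for _ in range(10):
    a = deterministic_step(a)
    b = deterministic_step(b)
assert a == b

# Stochastic: runs diverge, so we analyse the outcome
# distribution rather than a single trajectory.
def simulate(seed, steps=10):
    rng = random.Random(seed)
    x = 100.0
    for _ in range(steps):
        x = stochastic_step(x, rng)
    return x

outcomes = [simulate(seed) for seed in range(1000)]
mean = sum(outcomes) / len(outcomes)   # clusters near the deterministic path
```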
The preeminent theorist associated with stochastic processes in a strategic context is **John von Neumann**, whose pioneering work laid foundational stones for game theory and probabilistic modelling, directly influencing strategic decision-making under uncertainty. Born in 1903 in Budapest, Hungary, to a wealthy Jewish family, von Neumann displayed prodigious talent from childhood, earning a degree in chemical engineering from ETH Zurich and a doctorate in mathematics from the University of Budapest by age 22. He emigrated to the United States in 1930, joining Princeton University and later the Institute for Advanced Study.
Von Neumann's relationship to the stochastic concept stems from his co-development of game theory with Oskar Morgenstern in their 1944 book Theory of Games and Economic Behavior, which introduced mixed strategies: randomised actions that prevent predictability in zero-sum games, embodying stochastic principles1. This addressed strategic uncertainty in competitive environments, where deterministic pure strategies fail against rational opponents. His work extended to stochastic processes in computing and economics, including the von Neumann architecture for computers and the Monte Carlo method, developed with Stanislaw Ulam for simulating probabilistic systems. During World War II he contributed to the Manhattan Project, applying probabilistic models to nuclear explosion simulations.
Von Neumann's biography reflects a polymath genius: he authored over 150 papers across pure mathematics, quantum mechanics, functional analysis, and economics, while advising on policy, including US nuclear strategy. His stochastic insights in game theory revolutionised operations research and AI, enabling robust strategies in stochastic environments like military planning and finance1. Von Neumann died in 1957 of cancer, but his legacy endures in strategic theory, where stochastic modelling remains vital for navigating uncertainty.
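The mixed-strategy idea von Neumann formalised can be sketched with the textbook matching-pennies game; the payoff matrix and probabilities below are the standard classroom example, not taken from the source:

```python
# Row player's payoffs in matching pennies (zero-sum):
# rows/cols are Heads, Tails; row wins +1 on a match, loses 1 otherwise.
PAYOFF = [[1, -1],
          [-1, 1]]

def expected_payoff(p_heads, q_heads):
    """Row's expected payoff when row plays Heads with probability
    p_heads and column plays Heads with probability q_heads."""
    p = [p_heads, 1 - p_heads]
    q = [q_heads, 1 - q_heads]
    return sum(p[i] * q[j] * PAYOFF[i][j]
               for i in range(2) for j in range(2))

# A pure (deterministic) strategy is exploitable: if row always
# plays Heads, column answering Tails wins every round.
assert expected_payoff(1.0, 0.0) == -1.0

# The minimax mixed strategy (p = 0.5) yields payoff 0 no matter
# what the opponent does -- randomness removes exploitability.
for q in (0.0, 0.3, 0.7, 1.0):
    assert abs(expected_payoff(0.5, q)) < 1e-12
```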
References
1. https://www.geeksforgeeks.org/artificial-intelligence/deterministic-vs-stochastic-environment-in-ai/
2. https://blog.ev.uk/stochastic-vs-deterministic-models-understand-the-pros-and-cons
3. https://pmc.ncbi.nlm.nih.gov/articles/PMC5005346/
4. http://www.dodccrp.org/events/7th_ICCRTS/Tracks/pdf/076.PDF
5. https://www.youtube.com/watch?v=7uaQX76e4EI

"LLM Knowledge Bases - Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest... You rarely ever write or edit the wiki manually, it's the domain of the LLM." - Andrej Karpathy - Previously Director of AI at Tesla, founding team at OpenAI, PhD at Stanford
The traditional model of knowledge management-where researchers manually write, edit, and maintain wikis and reference systems-assumes that human curation is the primary value-add in organizing information. This assumption is collapsing. As large language models become capable of synthesizing, organizing, and updating information at scale, the bottleneck in knowledge work is shifting from content creation to content validation and strategic direction-setting.1
The Automation of Knowledge Curation
Andrej Karpathy's observation about using LLMs to build personal knowledge bases reflects a fundamental change in how researchers interact with information systems.1 Rather than researchers serving as the primary authors and editors of their knowledge repositories, LLMs now function as the active agents in knowledge synthesis, with humans adopting a supervisory role. This inversion-where the wiki becomes the domain of the LLM and humans become the validators-represents a departure from decades of knowledge management practice.
The practical implication is significant: researchers can now maintain comprehensive, up-to-date knowledge bases across multiple domains of interest without the time investment traditionally required for manual curation. An LLM can continuously aggregate new research, synthesize findings, identify connections across disparate sources, and organize information according to specified schemas-all without human intervention in the day-to-day maintenance cycle.
Context: The Broader Transformation of Knowledge Work
Karpathy's commentary arrives amid a broader recalibration of how AI is reshaping professional work. In early 2025, he articulated a vision of "Software 3.0," where natural language becomes the primary programming interface and LLMs generate code with minimal human input.2 The knowledge base observation extends this logic: if LLMs can generate functional code from high-level specifications, they can equally generate and maintain structured knowledge from domain parameters and update directives.
This shift reflects Karpathy's firsthand experience across multiple roles:
- As a founding member of OpenAI, he witnessed the emergence of increasingly capable language models
- As Director of AI at Tesla (2017-2022), he led teams managing vast datasets and neural network training pipelines, where information organization at scale was operationally critical3
- Upon returning to OpenAI in February 2023, he contributed to the development of GPT-4, which demonstrated substantially improved reasoning and synthesis capabilities4
His observation about LLM-driven knowledge bases is not theoretical speculation but a reflection of practical experimentation with tools that have reached a capability threshold where they can reliably perform knowledge synthesis tasks.
The Capability Threshold: Why Now?
LLMs have long been capable of generating text. What has changed is their ability to maintain consistency, follow complex organizational schemas, and integrate new information without degrading existing knowledge structures. Earlier language models could produce plausible-sounding content but lacked the coherence and reliability required for mission-critical knowledge systems. Current models demonstrate sufficient consistency and reasoning capability to serve as the primary authoring layer in knowledge management systems.
The shift also reflects improved prompt engineering and system design. Rather than asking an LLM to write a wiki article once, researchers can now:
- Define a knowledge base schema and update protocols
- Feed the LLM new research papers, data, or domain updates
- Allow the LLM to integrate new information into existing structures
- Reserve human effort for validation, strategic direction, and exception handling
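The protocol above can be sketched as a toy structure. Note that the LLM synthesis step is stubbed out, and the `[UNVERIFIED]` tagging convention is a hypothetical illustration of how human validation effort might be queued, not an established API:

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeBase:
    # Allowed section names act as the schema / update protocol.
    schema: tuple
    notes: dict = field(default_factory=dict)

    def integrate(self, topic: str, section: str, text: str) -> None:
        """Fold a new note into the structure. The LLM call is stubbed:
        `text` stands in for output a model would synthesise from new
        papers or domain updates."""
        if section not in self.schema:
            raise ValueError(f"section {section!r} not in schema")
        self.notes.setdefault(topic, {}).setdefault(section, []).append(text)

    def flagged_for_review(self):
        """Reserve human effort for validation: return notes that a
        (hypothetical) checker tagged as uncertain."""
        return [(topic, section, text)
                for topic, sections in self.notes.items()
                for section, texts in sections.items()
                for text in texts
                if text.startswith("[UNVERIFIED]")]

kb = KnowledgeBase(schema=("summary", "open questions"))
kb.integrate("diffusion models", "summary",
             "Score-based and DDPM views are two lenses on one framework.")
kb.integrate("diffusion models", "open questions",
             "[UNVERIFIED] claimed 10x sampling speedup in recent preprint")
review_queue = kb.flagged_for_review()  # the human-in-the-loop worklist
```

The human never edits `notes` directly; their effort concentrates on emptying `review_queue`, mirroring the shift from content generation to validation described above.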
This represents a qualitative change in the human-AI division of labor within knowledge work.
The Validation Problem and Human Oversight
Karpathy's framing-"you rarely ever write or edit the wiki manually"-does not imply that human oversight becomes unnecessary. Rather, it suggests that human effort shifts from content generation to content validation and strategic curation. A researcher using an LLM-driven knowledge base must still:
- Verify factual accuracy of synthesized information
- Identify and correct hallucinations or misinterpretations
- Ensure the knowledge base reflects current understanding in the field
- Make strategic decisions about what information to prioritize or exclude
The time savings come from eliminating the mechanical work of writing and organizing, not from eliminating judgment. In fact, this model may increase the proportion of time researchers spend on higher-order validation and strategic thinking, even if total time investment decreases.
Implications for Research Velocity and Knowledge Accessibility
If researchers can maintain comprehensive, current knowledge bases with minimal manual effort, several downstream effects become possible:
- Faster literature synthesis: New researchers entering a field can access organized, synthesized knowledge rather than conducting manual literature reviews
- Cross-domain pattern recognition: LLMs can identify connections across knowledge bases in different domains, potentially surfacing insights that siloed manual curation would miss
- Reduced knowledge decay: Knowledge bases maintained manually often become outdated as researchers move to new projects. LLM-driven systems can be continuously updated with minimal friction
- Scalability of expertise: A single researcher can maintain knowledge bases across multiple domains of interest, rather than specializing narrowly
These effects compound over time. As knowledge bases become more comprehensive and current, their value as research tools increases, creating incentives for broader adoption and integration into research workflows.
The Broader Pattern: From Execution to Direction
Karpathy's observation about knowledge bases fits within a larger pattern he has articulated about the transformation of knowledge work under AI. In 2025, he described developers increasingly functioning as "virtual managers" overseeing AI collaborators, focusing on architecture and decomposition rather than syntax.2 The same logic applies to researchers: they become directors of knowledge synthesis rather than executors of knowledge curation.
This mirrors his earlier reflection that "the profession is being dramatically refactored as the bits contributed by the programmer are increasingly sparse and between," with the potential for individuals to become "10X more powerful" by leveraging AI as a collaborator rather than a tool.2 The knowledge base example demonstrates this principle in practice: a researcher directing an LLM to maintain and synthesize a knowledge base can cover more intellectual ground than one manually curating information.
By March 2026, Karpathy had extended this observation further, noting that coding agents had undergone a discontinuous capability jump-"basically didn't work before December and basically work since."5 The implication is that similar discontinuities may occur in other domains, including knowledge management, as LLMs cross capability thresholds that make them reliable collaborators rather than experimental tools.
Strategic Considerations for Knowledge-Intensive Organizations
The normalization of LLM-driven knowledge bases has implications for how organizations structure research, documentation, and institutional knowledge:
- Knowledge infrastructure: Organizations may need to invest in systems that integrate LLMs into knowledge management workflows rather than treating LLMs as external tools
- Validation frameworks: As LLMs become primary knowledge authors, organizations need robust processes for validating and correcting synthesized information
- Researcher skill evolution: Researchers will need to develop competency in directing LLMs, defining knowledge schemas, and validating synthesis-skills distinct from traditional research training
- Knowledge accessibility: LLM-maintained knowledge bases can be queried and synthesized in natural language, potentially democratizing access to domain expertise
The transition from manual to LLM-driven knowledge curation is not merely a productivity improvement. It represents a fundamental shift in how knowledge work is organized, who performs which tasks, and what skills are required to operate effectively in knowledge-intensive domains.
References
1. Andrej Karpathy on X - https://x.com/karpathy/status/2039805659525644595?s=20
2. Quote: Andre Karpathy | Quantified Strategy Consulting - 2026-01-21 - https://globaladvisors.biz/2026/01/21/quote-andre-karpathy/
3. Andrej Karpathy - https://karpathy.ai
4. The Professional Journey of Andrej Karpathy - Perplexity - 2024-12-02 - https://www.perplexity.ai/page/the-professional-journey-of-an-OvR1nmNIQNS5gJPAtPMk5w
5. Andrej Karpathy on Code Agents, AutoResearch, and the Loopy Era ... - 2026-03-20 - https://www.youtube.com/watch?v=kwSVtQ7dziU
6. Tesla's Former AI Director Andrej Karpathy who said he feels behind ... - 2026-02-28 - https://timesofindia.indiatimes.com/technology/tech-news/teslas-former-ai-director-andrej-karpathy-who-said-he-feels-behind-as-programmer-now-says-software-programming-has-changed-due-to-/articleshow/128849256.cms
7. Andrej Karpathy: Architect of an AI Revolution - Klover.ai - 2025-06-12 - https://www.klover.ai/andrej-karpathy/
8. Andrej Karpathy — AGI is still a decade away - Dwarkesh Podcast - 2025-10-17 - https://www.dwarkesh.com/p/andrej-karpathy
9. OpenAI cofounder says he hasn't written a line of code in ... - Fortune - 2026-03-21 - https://fortune.com/2026/03/21/andrej-karpathy-openai-cofounder-ai-agents-coding-state-of-psychosis-openclaw/
10. Andrej Karpathy: Tesla AI, Self-Driving, Optimus, Aliens, and AGI - 2022-10-29 - https://www.youtube.com/watch?v=cdiD-9MMpb0
11. Andrej Karpathy – It will take a decade to work through the issues ... - 2025-10-17 - https://news.ycombinator.com/item?id=45619329
12. Andrej Karpathy talks meaning of life and leaving Tesla with Lex ... - 2022-10-29 - https://www.teslarati.com/andrej-karpathy-tesla-lex-fridman/
13. Andrej Karpathy Academic Website - Stanford Computer Science - https://cs.stanford.edu/people/karpathy/
14. No Priors Ep. 80 | With Andrej Karpathy from OpenAI and Tesla - 2024-09-05 - https://www.youtube.com/watch?v=hM_h0UA7upI
15. Fave Tweets - Andrej Karpathy - https://karpathy.ai/tweets.html
16. A Survival Guide to a PhD - Andrej Karpathy blog - 2016-09-07 - http://karpathy.github.io/2016/09/07/phd/
