Our selection of the top business news sources on the web.
AM edition. Issue number 1266
Latest 10 stories.
"A new scientific truth does not generally triumph by persuading its opponents and getting them to admit their errors, but rather by its opponents gradually dying out and giving way to a new generation that is raised on it." - Max Planck - Nobel laureate
Max Planck's famous statement captures a fundamental truth about the nature of scientific advancement: paradigms shift not through debate alone, but through the inexorable passage of time and generational change. This observation, drawn from his personal experiences, has become known as Planck's Principle and resonates deeply in the philosophy of science1,2.
The Man Behind the Words: Max Planck's Life and Legacy
Born in 1858 in Kiel, Germany, Max Karl Ernst Ludwig Planck was a pioneering theoretical physicist who fundamentally transformed our understanding of the physical world. Educated at the universities of Munich and Berlin, he initially pursued classical thermodynamics before making his revolutionary breakthrough. In 1900, Planck introduced the concept of energy quanta to resolve discrepancies in black-body radiation, laying the foundation for quantum theory - a radical departure from classical physics that earned him the 1918 Nobel Prize in Physics1,2.
Planck's career was marked by profound challenges. His quantum hypothesis faced fierce opposition from established scientists who clung to classical theories. Despite providing rigorous theoretical proofs, Planck struggled to gain widespread acceptance, a frustration he later reflected upon candidly. He served as president of the Kaiser Wilhelm Society (predecessor to the Max Planck Society) from 1926 to 1937 and navigated the moral complexities of Nazi Germany, including the loss of his son to execution on false treason charges. Planck died in 1947, leaving an indelible mark on modern physics1,3.
The Context and Origin of the Quote
The quote originates from Planck's Scientific Autobiography, published posthumously in German in 1948 and translated into English in 1949. Writing in his later years, Planck recounted the 'painful experiences' of promoting his quantum ideas: 'It is one of the most painful experiences of my entire scientific life that I have but seldom… succeeded in gaining universal recognition for a new result, the truth of which I could demonstrate by a conclusive, albeit only theoretical proof.' He then articulated the principle as a 'remarkable fact'1,3.
A slightly longer version appears on pages 33 and 97: 'An important scientific innovation rarely makes its way by gradually winning over and converting its opponents: it rarely happens that Saul becomes Paul. What does happen is that its opponents gradually die out, and that the growing generation is familiarised with the ideas from the beginning.' This reflects his view of science as an evolutionary process governed by human biology-death and renewal-rather than mere persuasion2.
Though cited in Advances in Biochemical Psychopharmacology (1980), the quote's primary source is Planck's autobiography. It has been paraphrased colloquially as 'Science progresses one funeral at a time,' a concise version popularised by economist Paul Samuelson in the 1960s, who credited Planck while introducing the vivid phrasing3.
Planck's Principle in the Philosophy of Science
Scholars have interpreted the statement in multiple ways. In sociology of scientific knowledge, it underscores that change occurs via generational turnover, not individual conversions2. Some see it as highlighting age-related stubbornness in science, contrasting with Karl Popper's emphasis on falsifiability. Others view it as a truism about time's role in validating enduring truths, as new ideas persist while flawed ones fade1.
A 2023 study empirically supported Planck, finding that citations of new theories increase significantly after the deaths of prominent opponents, confirming science advances 'one funeral at a time'5.
Leading Theorists on Scientific Change
- Thomas S. Kuhn (1922-1996): In his seminal 1962 book The Structure of Scientific Revolutions, Kuhn cited Planck directly, popularising the idea of paradigm shifts-periods of 'normal science' punctuated by revolutions where old frameworks resist until supplanted. Kuhn argued that scientists cling to paradigms until anomalies force change, aligning with Planck's generational mechanism3.
- Karl Popper (1902-1994): Popper's philosophy of falsifiability emphasised testable predictions and bold conjectures, contrasting Planck's view by focusing on rational critique over demographic inevitability. Yet both highlight resistance to novelty1.
- Paul A. Samuelson (1915-2009): The Nobel-winning economist adapted Planck's idea to economics, noting in his textbook that new doctrines prevail 'funeral by funeral,' influencing broader discussions on intellectual progress3.
Planck's words remind us that innovation in science, and indeed all fields of knowledge, demands patience. True progress endures beyond lifetimes, outlasting opposition through education and time.
References
1. https://buyscience.wordpress.com/history-of-science/plancks-principle/
2. https://en.wikipedia.org/wiki/Planck's_principle
3. https://quoteinvestigator.com/2017/09/25/progress/
4. https://insertphilosophyhere.com/science-its-tricky/
5. https://www.chemistryworld.com/news/science-really-does-advance-one-funeral-at-a-time-study-suggests/3010961.article
6. https://www.ophthalmologytimes.com/view/moving-forward-does-science-progress-one-funeral-at-a-time-

"Platform strategy refers to the acquisition of a core ("platform") company that serves as the foundation for subsequent bolt-on acquisitions, with the objective of creating value through scale, scope, and capability enhancement." - Platform strategy
A platform strategy is a structured acquisition approach where a private equity firm purchases a foundational company and subsequently acquires complementary businesses to create value through scale, operational synergies, and capability enhancement.
Core Definition and Mechanics
The platform strategy operates on a "buy and build" model. A private equity group identifies and acquires an initial platform company - typically a well-established business with EBITDA above £5-10 million, professional systems, experienced management, and significant market presence. This platform then serves as the anchor for subsequent acquisitions of smaller, related businesses known as "add-ons" or "bolt-ons." Over a typical investment period of 3-7 years, the platform consolidates multiple acquisitions, increasing in value before being sold at exit for a substantially higher valuation.
Strategic Rationale
Private equity firms favour platform strategies because they unlock rapid value creation, particularly in fragmented markets where no single dominant player exists. By consolidating smaller businesses under a strong platform, investors capture market share, drive operational improvements, and realise scale efficiencies that would be difficult for individual companies to achieve independently.
The approach also provides strategic entry points into new industries or geographies. For example, a PE firm might acquire a well-established regional healthcare provider, then use it as a base to expand across neighbouring markets through targeted acquisitions that fulfil specific strategic needs.
Value Creation Mechanisms
Platform strategies generate value through multiple channels:
- Shared Infrastructure: Consolidating functions such as HR, IT, legal, and finance across portfolio companies eliminates redundancies and reduces costs.
- Bulk Purchasing Power: Centralised vendor negotiations and bulk purchasing of software, materials, and services reduce per-unit costs significantly.
- Standardised Technology: A unified technology stack improves data visibility and operational efficiency across all portfolio companies.
- Cross-Company Learning: Insights and best practices from one company directly benefit others, accelerating growth across the portfolio.
- Operational Playbooks: Standardised business processes and procedures reduce trial-and-error inefficiencies and enable faster integration of add-ons.
Unlike traditional private equity scaling methods that rely on quick operational fixes or aggressive cost-cutting, platform strategies emphasise sustainable, long-term value creation. Companies operating under a PE platform strategy often grow faster and exit at higher valuations because they are structured for enduring success rather than short-term gains.
Platform Company Characteristics
Successful platform companies typically possess:
- A strong, experienced management team with proven track records in the target industry
- Well-defined operational systems and repeatable processes
- Sufficient scale and capitalisation to support add-on acquisitions
- Positive cash flows and demonstrated growth potential
- Leadership capable of integrating new businesses effectively
Add-On Acquisition Strategy
Add-ons are selected strategically to fulfil specific operational or market needs. For instance, if the platform is a medical services company, a PE firm might acquire a manufacturer of medical equipment parts to eliminate external purchasing costs and create opportunities for further expansion. This strategic fit approach reduces risk and accelerates value creation compared to opportunistic acquisitions.
Value Realisation
Sustainable gains in platform private equity come from balancing organic initiatives-such as process improvement and leadership development-with inorganic expansion through targeted acquisitions. Key performance indicators including revenue growth, EBITDA improvement, and integration milestones measure progress. Case studies demonstrate that platforms executing thoughtful bolt-on strategies often double their enterprise value within several years.
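The "doubling enterprise value" claim can be made concrete with back-of-envelope arithmetic. In the sketch below, every figure (entry multiple, bolt-on EBITDA, exit multiple) is an illustrative assumption, not drawn from the cited sources: exit value rises both because bolt-ons add EBITDA and because a larger, more professional platform can command a higher multiple.

```python
def enterprise_value(ebitda_m, multiple):
    """Enterprise value approximated as EBITDA (in £m) times a valuation multiple."""
    return ebitda_m * multiple

# Illustrative buy-and-build: platform with £10m EBITDA bought at 6x,
# three bolt-ons adding £2m EBITDA each, exit at 8x after integration.
entry = enterprise_value(10, 6)          # £60m at entry
exit_ = enterprise_value(10 + 3 * 2, 8)  # £128m at exit
print(exit_ / entry)                     # better than 2x on these assumed numbers
```

The uplift here comes from two stacked effects: organic and acquired EBITDA growth, and "multiple arbitrage" as the consolidated platform is revalued at a higher multiple than the small businesses it absorbed.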
Related Strategy Theorist: Henry Kravis
Biography and Contribution
Henry Roberts Kravis (born 1944) is an American financier and co-founder of Kohlberg Kravis Roberts & Co. (KKR), one of the world's most influential private equity firms. Born in Tulsa, Oklahoma, Kravis studied economics at Claremont McKenna College before earning an MBA from Columbia Business School. His career in finance began at Bear Stearns, where he worked under Jerome Kohlberg Jr., a pioneering figure in leveraged buyouts.
In 1976, Kravis and his cousin George Roberts founded KKR with Kohlberg, establishing what would become a transformative force in private equity. Throughout the 1980s and 1990s, KKR pioneered aggressive acquisition strategies, most famously the £24 billion leveraged buyout of RJR Nabisco in 1989-the largest LBO of its era. This transaction, detailed in the book "Barbarians at the Gate," exemplified the bold, transformative approach that defined Kravis's investment philosophy.
Relationship to Platform Strategy
Whilst Kravis is primarily known for pioneering leveraged buyouts and aggressive financial engineering, his strategic vision fundamentally shaped the evolution toward platform-based acquisition strategies. KKR's approach to portfolio management-building operational capabilities, integrating acquired companies, and creating synergies across holdings-established foundational principles that underpin modern platform strategies.
Kravis recognised early that sustainable value creation required more than financial leverage; it demanded operational excellence, strategic consolidation, and the ability to integrate disparate businesses into cohesive, high-performing entities. This philosophy directly influenced how contemporary private equity firms structure platform investments. By emphasising management quality, operational integration, and long-term value creation over pure financial arbitrage, Kravis's legacy shaped the transition from purely financial engineering to the strategic, operationally-focused platform strategies that dominate private equity today.
Under Kravis's leadership, KKR evolved from a leveraged buyout specialist into a diversified investment firm managing over £500 billion in assets globally. His emphasis on building world-class management teams and creating operational synergies across portfolio companies established the template that modern platform strategies follow. Today, platform strategies represent the maturation of principles Kravis championed: that private equity value creation stems from strategic consolidation, operational improvement, and the systematic integration of complementary businesses-not merely from financial leverage alone.
References
1. https://azariangrowthagency.com/private-equity-platform-strategy/
2. https://www.midstreet.com/blog/what-is-a-platform-in-private-equity
3. https://alignediq.com/private-equity-platform-investments/
4. https://www.batonmarket.com/resources/own/acquisition-platform
5. https://corporatefinanceinstitute.com/resources/valuation/platform-company/
6. https://symmetricaladvisory.com/private-equity-101-new-platforms-vs-add-ons/
7. https://dealroom.net/blog/what-is-a-private-equity-roll-up-strategy
8. https://www.bain.com/insights/solution-spotlight/platform-strategy/

"A man to whom it has been given to bless the world with a great creative idea has no need for the praise of posterity. His very achievement has already conferred a higher boon upon him." - Albert Einstein - Nobel Laureate
In 1948, Albert Einstein penned these words as a heartfelt tribute in "Max Planck in Memoriam," honouring the German physicist whose revolutionary ideas laid the foundation for quantum mechanics. The quote encapsulates Einstein's admiration for Max Planck, whom he regarded not merely as a colleague but as a towering figure whose creative insight transformed our understanding of the universe. Delivered in the shadow of World War II and amid the post-war reconstruction of science, this reflection underscores a timeless truth: true genius finds its reward in the idea itself, transcending the need for later acclaim.
The Context of the Quote
Max Planck passed away on 4 October 1947 at the age of 89, having endured personal tragedies including the loss of his first wife, two daughters, and two sons-one executed by the Nazis for his alleged involvement in the plot to assassinate Hitler. Einstein's memorial, published in 1948, was part of a collection celebrating Planck's life and work. At this time, Einstein, himself a Nobel Laureate in 1921 for his explanation of the photoelectric effect, was in exile in the United States, reflecting on the giants who shaped modern physics. The quote emerges from Einstein's deep respect for Planck's humility and the profound impact of his 1900 discovery of energy quanta, which challenged classical physics and birthed quantum theory1,2,5.
Max Planck: The Man and His Monumental Achievement
Born in 1858 in Kiel, Germany, Max Planck initially pursued a career in thermodynamics, influenced by the second law and the works of Rudolf Clausius. By 1900, as professor at the University of Berlin, he grappled with the "black-body radiation" problem: classical theory predicted infinite energy at high frequencies (the "ultraviolet catastrophe"), clashing with experiments. Planck resolved this by proposing that energy is emitted in discrete packets, or "quanta," introducing his constant h in the formula E = hf, where f is frequency. This act, described by Einstein as the basis of twentieth-century physics, was not immediately embraced by Planck himself, who viewed it as a mathematical fix rather than a physical reality2,5,8.
Planck's quantum hypothesis paved the way for Einstein's 1905 paper on the photoelectric effect, where light too behaves as particles (photons), earning Einstein his Nobel. Planck championed relativity, calling Einstein the "Copernicus of the twentieth century," and defended scientific truth amid political turmoil, remaining in Germany through both world wars2. His philosophy emphasised that scientific truth triumphs not by persuasion but through generational change, as opponents fade away6. Einstein praised Planck's perseverance in seeking nature's "pre-established harmony"9.
Albert Einstein: The Philosopher-Physicist
Einstein (1879-1955), born in Ulm, Germany, revolutionised physics with special relativity (1905), general relativity (1915), and contributions to quantum theory, though he later critiqued its probabilistic nature, famously debating Planck and others on reality's foundations1,3,4. His philosophy blended intuition, simplicity, and mathematical elegance: "purely mathematical construction enables us to find those concepts... that provide the key to the understanding of natural phenomena"1. Einstein viewed Planck as a rare "mansion" in the "temple of science," driven by pure curiosity rather than utility or fame7. Their correspondence and mutual respect highlight a shared belief in science as a pursuit of profound order3.
Leading Theorists and the Dawn of Quantum Theory
The quote's themes of creativity and achievement resonate with quantum pioneers:
- Niels Bohr: Developed the atomic model incorporating quanta, founding complementarity to reconcile wave-particle duality.
- Werner Heisenberg: Formulated matrix mechanics and the uncertainty principle, shifting physics to probabilistic interpretations.
- Erwin Schrödinger: Introduced wave mechanics, equivalent to Heisenberg's, leading to the unified quantum formalism.
- Ludwig Boltzmann: Precursor via statistical mechanics; his entropy work influenced Planck's quantum leap2.
- Ernst Mach and Wilhelm Ostwald: Positivists whom Einstein credited Planck with helping to overcome, as atoms' reality was ultimately proved through Brownian motion1.
These figures, building on Planck's foundation, reshaped reality's depiction, echoing Einstein's conviction that great ideas bestow immortality on their creators1,2.
Enduring Relevance
Today, Planck's constant underpins technologies from lasers to semiconductors, while Einstein's vision reminds us that science's highest rewards lie in discovery itself. This tribute bridges personal loss, scientific revolution, and philosophical depth, inspiring generations to pursue ideas that bless the world.
References
1. https://plato.stanford.edu/entries/einstein-philscience/
2. https://todayinsci.com/P/Planck_Max/PlanckMax-Quotations.htm
3. https://en.wikiquote.org/wiki/Albert_Einstein
4. https://www.informationphilosopher.com/solutions/scientists/einstein/dialectica.html
5. https://www.spaceandmotion.com/Albert-Einstein-Quotes.htm
6. https://www.azquotes.com/author/11714-Max_Planck
7. https://www.goodreads.com/quotes/1128366-in-the-temple-of-science-are-many-mansions-and-various
8. https://www.quotescosmos.com/quotes/Max-Planck-quote-8.html
9. https://www.site.uottawa.ca/~yymao/misc/Einstein_PlanckBirthday.html

"I think we've just reinvented the computer." - Jensen Huang - Nvidia CEO
In a profound reflection on the evolution of computing, NVIDIA CEO Jensen Huang articulated a paradigm shift during his interview on the Lex Fridman Podcast #494, stating, "I think we've just reinvented the computer." This remark, made in the context of advanced AI systems, underscores how modern computing has transitioned from mere data retrieval to generative intelligence capable of research, tool usage, and synthetic data creation.1,2
Context of the Quote
Huang's statement emerged from a discussion on the architecture of future AI agents. He reasoned that these systems require access to ground truth data via file systems, the ability to conduct research, and integration with input/output subsystems and tools. This holistic view reveals computing's "deeply profound" implications, marking a reinvention where AI evolves beyond passive storage into active, context-aware generation.2 Delivered on 23 March 2026, amid NVIDIA's ascent to a $4 trillion valuation, the quote captures the explosive growth of the 'token economy' - where AI produces 'token goods' like generated text, images, and code, turning computers from cost centres (akin to unprofitable warehouses) into revenue-generating factories.1
Backstory on Jensen Huang
Born in Taiwan in 1963, Jensen Huang co-founded NVIDIA in 1993 with Chris Malachowsky and Curtis Priem, initially focusing on graphics processing units (GPUs) for gaming and visualisation. Facing near-bankruptcy in the late 1990s, Huang pivoted NVIDIA towards programmable shaders and IEEE-compliant FP32 floating-point precision, enabling GPUs for general-purpose computing.2 The launch of CUDA in 2006 democratised this power, placing supercomputing capabilities in researchers' hands via PCs, outpacing rivals like OpenCL due to NVIDIA's massive install base.2 Under Huang's leadership, NVIDIA dominated AI hardware, powering breakthroughs from deep learning to large language models. By 2026, as CEO of the world's most valuable company, Huang envisions computing's GDP share surging 100-fold, with AI achieving artificial general intelligence (AGI) today - defined as systems autonomously building profitable applications.1,2
Leading Theorists in AI and Computing Reinvention
- John McCarthy (1927-2011): Coined 'artificial intelligence' in 1956 at the Dartmouth Conference, pioneering Lisp and time-sharing systems. His vision of machines reasoning like humans laid foundational theory for AI's shift from rule-based to generative paradigms.2
- Geoffrey Hinton: 'Godfather of deep learning', Hinton's backpropagation and neural network research in the 1980s, revitalised in 2012 via AlexNet (powered by NVIDIA GPUs), enabled the scaled training underpinning today's token-generating models.1
- Yann LeCun and Yoshua Bengio: With Hinton, the 'three musketeers' of AI advanced convolutional networks and generative adversarial networks (GANs), theorising self-supervised learning that allows AI to synthesise data - echoing Huang's 'token factory'.1
- Ilya Sutskever: Co-founder of OpenAI, his work on sequence transduction and reinforcement learning from human feedback (RLHF) birthed models like GPT, which Huang sees as reinventing computing through tool-augmented agency.2
These theorists' ideas converged with NVIDIA's hardware, propelling Huang's prophecy: every profession - from carpenters to plumbers - will program via natural language, expanding coders from 30 million to 1 billion.1
Implications for the AI Revolution
Huang predicts AI disruption for task-based roles but empowerment for purpose-driven innovators. Power challenges will be met by 'elegant degradation' data centres that exploit grid redundancies. As barriers to AI entry drop to zero - simply ask, 'How do I use you?' - the reinvention promises unprecedented productivity, with trillion-dollar companies commonplace.1
References
1. https://news.futunn.com/en/post/70502748/in-depth-interview-with-jensen-huang-the-token-economy-explosion
2. https://lexfridman.com/jensen-huang-transcript/
3. https://www.youtube.com/watch?v=VWkSgbUkkh8
4. https://lexfridman.com/category/transcripts/
5. https://guardianbookshop.com/the-thinking-machine-9781847928276/
6. https://exclusivebooks.co.za/collections/new?page=79

"TurboQuant is not another DeepSeek moment." - FundaAI
The quote “TurboQuant is not another DeepSeek moment” (FundaAI, 26 March 2026) captures a specific market misreading that erupted after Google re-published its TurboQuant blog on 24 March 2026.
Core meaning of the quote
- What the market thought: TurboQuant was interpreted as a breakthrough that could compress an entire large language model (weights + cache) by ~6×, which would structurally reduce demand for HBM/DRAM/SSD and trigger a valuation reset across the compute stack—hence the “another DeepSeek moment” label (the early-2025 efficiency shock that sank many AI-chip and memory stocks).
- What TurboQuant actually does: It is only an aggressive, training-free quantization scheme for the inference-time key-value (KV) cache (and, secondarily, for high-dimensional vector search). It reduces KV-cache memory by ~6× and speeds up attention computation by up to 8× on NVIDIA H100s, without touching model weights [page:research.google].
Why the distinction matters (first-principles view)
Total memory demand in a datacenter is dominated by model weights and stored training data, which TurboQuant leaves untouched; only the inference-time KV cache shrinks. Thus the “linear extrapolation” from a 6× KV-cache reduction to a 6× drop in total memory demand is wrong.
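A toy calculation makes the category error explicit. The sizes below are illustrative assumptions (not figures from the article): even a generous KV-cache share of accelerator memory yields nowhere near a 6× drop overall.

```python
def total_inference_memory_gb(weights_gb, kv_cache_gb, kv_compression=1.0):
    """Resident accelerator memory: model weights (untouched by TurboQuant)
    plus the KV cache, optionally shrunk by a compression factor."""
    return weights_gb + kv_cache_gb / kv_compression

# Assumed deployment: 140 GB of weights, 60 GB of KV cache at full precision.
baseline = total_inference_memory_gb(weights_gb=140, kv_cache_gb=60)
with_turboquant = total_inference_memory_gb(weights_gb=140, kv_cache_gb=60,
                                            kv_compression=6.0)
print(baseline, with_turboquant)            # 200 GB vs 150 GB
print(baseline / with_turboquant)           # ~1.33x, far from the naive 6x
```

On these assumed numbers, a 6× KV-cache compression buys roughly a 1.3× reduction in total memory, which is better read as headroom for more concurrent users or longer contexts than as demand destruction.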
Technical snapshot of TurboQuant
- Published: arXiv 28 Apr 2025 (ICLR 2026 poster); Google blog re-surfaced 24 Mar 2026.
- Two-stage algorithm:
  - PolarQuant: random rotation → polar-coordinate representation → high-quality scalar quantization (captures most of the vector’s magnitude and direction with minimal overhead).
  - QJL (Quantized Johnson-Lindenstrauss): 1-bit residual correction that yields unbiased inner-product estimates, critical for preserving attention scores.
- Results: 3-bit compression with zero accuracy loss on LongBench, Needle-In-A-Haystack (100% recall up to 104k tokens), and MMLU/HumanEval; 8× attention-logit speedup on H100.
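The rotate-then-quantize idea behind schemes like this can be sketched generically. This is not the paper's PolarQuant/QJL construction: the QR-based rotation, uniform scalar quantizer, and absence of a residual correction are simplified stand-ins to show why low-bit KV storage can still preserve the vectors that attention depends on.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_rotation(d):
    # Orthogonal matrix via QR of a Gaussian matrix; real systems use cheaper
    # structured rotations, but the smoothing effect is the same in spirit.
    q, _ = np.linalg.qr(rng.normal(size=(d, d)))
    return q

def quantize(v, bits=3):
    # Uniform scalar quantization of each entry to `bits` bits.
    levels = 2 ** bits
    lo, hi = v.min(), v.max()
    scale = (hi - lo) / (levels - 1) if hi > lo else 1.0
    codes = np.round((v - lo) / scale).astype(np.int64)
    return codes, lo, scale

def dequantize(codes, lo, scale):
    return codes * scale + lo

d = 64
R = random_rotation(d)
k = rng.normal(size=d)                        # a "key" vector from the KV cache
q = rng.normal(size=d)                        # a query vector

codes, lo, scale = quantize(R @ k, bits=3)    # store only 3 bits per entry
k_hat = R.T @ dequantize(codes, lo, scale)    # reconstruct in the original basis

# Attention logits are inner products; compare exact vs reconstructed.
print(q @ k, q @ k_hat)
```

The rotation spreads each vector's energy evenly across coordinates, so a crude per-entry quantizer loses little of the magnitude-and-direction information that inner products need.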
Market reaction that sparked the quote
- Stocks: SanDisk down ~8.1%, Micron ~5.8% on the day, as traders priced in a potential structural drop in memory demand.
- Narrative: “If inference memory can be compressed 6×, the entire HBM/DRAM growth story breaks”—a replay of the DeepSeek efficiency shock.
Why FundaAI calls it “not another DeepSeek moment”
- Scope limitation: DeepSeek’s 2025 advance was a model-architecture/efficiency breakthrough that reduced training and inference compute per token. TurboQuant only optimizes the inference working set (KV cache).
- No weight compression: The largest memory consumer in a datacenter (model weights + training datasets) is untouched; total HBM/SSD demand does not reset.
- Already known work: The algorithm was public for 11 months before Google’s blog; the “breakthrough” framing is largely a re-surfacing, not a new paradigm.
- Industry trend: KV-cache quantization has been pursued for years (KIVI, etc.); TurboQuant pushes the frontier but does not change the fundamental economics of memory-capacity planning.
Bottom line
The market’s panic was a category error: conflating the temporary inference cache with total model memory. TurboQuant is a pure throughput/context-length optimizer that lets existing HBM serve more concurrent users or longer contexts, but it does not compress the LLM itself. Therefore, it should not be modeled as a structural demand-destruction event for HBM/DRAM/SSD—unlike the genuine “DeepSeek moment” that altered compute-per-token economics across training and inference.
"Multiple on Invested Capital (MOIC) measures the total value returned from an investment relative to the total equity capital invested, expressed as a simple multiple (e.g., 2.5×). In private equity, MOIC captures the absolute value creation of an investment without regard to the time taken to achieve it." - Multiple on Invested Capital (MOIC)
The Multiple on Invested Capital (MOIC) is a financial metric that measures the total value returned from an investment relative to the total equity capital invested, expressed as a simple multiple (for example, 2.5×).1 In private equity, MOIC captures the absolute value creation of an investment without regard to the time taken to achieve it, making it one of the most commonly used performance indicators across the industry.3
Core Definition and Calculation
MOIC answers a fundamental question: how many times has the initial capital been multiplied?3 The metric is calculated using a straightforward formula:
\text{MOIC} = \frac{\text{Total Value of Investment}}{\text{Total Invested Capital}}
Alternatively expressed as:
\text{MOIC} = \frac{\text{Realised Value} + \text{Unrealised Value}}{\text{Total Invested Capital}}
The total value of investment includes all cash received from the investment-such as dividends, profits, and eventual sale proceeds-as well as unrealised gains, which represent the potential future value of the investment if sold at current market rates.8
Practical Examples
A MOIC value of 2.0× indicates that a private equity fund has doubled its original investment.3 If a fund invested £1 million and received £3 million from the investment, the fund would have a MOIC of 3.0×.5 In a more substantial scenario, if a private equity fund invests £100 million in a company and realises £500 million in total value (both realised and unrealised), the MOIC would be 5.0×, indicating a fivefold return on the initial capital.8
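The worked examples above can be reproduced in a few lines. The function name and signature here are illustrative conveniences, not part of any standard library:

```python
def moic(realised_value, unrealised_value, invested_capital):
    """Multiple on Invested Capital: total value returned (realised cash
    plus unrealised holdings) divided by the equity capital invested."""
    if invested_capital <= 0:
        raise ValueError("invested capital must be positive")
    return (realised_value + unrealised_value) / invested_capital

# £1m invested, £3m received back, nothing still held -> 3.0x
print(moic(3_000_000, 0, 1_000_000))
# £100m invested, £500m total value (realised + unrealised) -> 5.0x
print(moic(400_000_000, 100_000_000, 100_000_000))
```

Note that the split between realised and unrealised value does not change the multiple; only the total value and the capital invested matter.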
Key Characteristics and Advantages
MOIC provides several distinct advantages as a performance metric:
- Simplicity and directness: MOIC tells investors in a straightforward manner whether and by how much their original investment has grown.3
- Time-agnostic measurement: Unlike the Internal Rate of Return (IRR), MOIC does not account for the time value of money or the duration of the investment, making it useful as a quick assessment tool.1
- Comprehensive value capture: MOIC includes both realised returns (actual cash distributions) and unrealised gains (current market value of remaining holdings), providing a complete picture of value creation.2
- Comparative analysis: The metric enables investors to compare the performance of different investments and funds on a standardised basis.3
MOIC in Context: Related Metrics
Whilst MOIC is excellent for quickly assessing investment success, it is typically calculated alongside other performance metrics to provide a more holistic understanding:3
- Internal Rate of Return (IRR): Measures returns whilst accounting for the time value of money and the duration of the investment.
- Distributions to Paid-In Capital (DPI): Represents the amount paid out by a fund to investors in relation to their investments, focusing only on realised returns.1
- Total Value to Paid-In (TVPI): Similar to MOIC but measures total capital actually paid in over time, including follow-on investments, rather than just initial capital.9
- Public Market Equivalent (PME): Compares private equity returns to equivalent public market investments.3
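The contrast between time-agnostic MOIC and time-sensitive IRR is easiest to see for the simplest cash-flow pattern, a single capital call followed by a single exit. For that special case only, IRR has the closed form below (real fund IRRs over many cash flows need a numerical solver):

```python
def irr_single_flow(moic, years):
    """IRR when one investment produces one exit: (1 + IRR)^years = MOIC."""
    return moic ** (1 / years) - 1

# The same 2.0x MOIC is a very different annualised return
# depending on how long the capital was at work:
print(f"{irr_single_flow(2.0, 3):.1%}")   # roughly 26% a year over 3 years
print(f"{irr_single_flow(2.0, 7):.1%}")   # roughly 10% a year over 7 years
```

This is why practitioners quote MOIC and IRR together: the multiple says how much value was created, the rate says how quickly.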
Interpreting MOIC Performance
A higher MOIC is perceived positively because it implies that investments are profitable and have generated substantial value.4 Conversely, a lower MOIC is viewed negatively, as it indicates that the investment may be unprofitable and investors risk not receiving their target return or even recouping their initial capital.4 In private equity practice, a MOIC of 2.0× or above is generally considered a strong outcome, though expectations vary by fund strategy and market conditions.
Terminology and Variations
The term MOIC is interchangeable with several other expressions commonly used in investment circles:4 "Multiple on Money" (MoM) and "cash-on-cash return" are synonymous terms that describe the same metric. This terminology consistency reflects the widespread adoption of MOIC across venture capital, private equity, and hedge fund sectors.6
David Rubenstein and the Professionalisation of Private Equity Metrics
The systematic use of MOIC and other standardised performance metrics in private equity owes much to David Rubenstein, co-founder of The Carlyle Group, who has been instrumental in professionalising the private equity industry since the 1980s. Rubenstein recognised that private equity required transparent, comparable metrics to attract institutional capital and build credibility with limited partners.
Born in 1949, Rubenstein earned his undergraduate degree from Duke University and his law degree from the University of Chicago. After working as a lawyer and in the White House during the Carter administration, he co-founded Carlyle in 1987 with William E. Conway Jr. and Daniel A. D'Aniello. At a time when private equity was largely opaque and driven by informal relationships, Rubenstein championed the adoption of standardised reporting metrics, including MOIC, IRR, and DPI, which became industry benchmarks.
Rubenstein's advocacy for transparency and rigorous performance measurement transformed private equity from a relatively closed industry into one that could attract substantial institutional investment from pension funds, endowments, and sovereign wealth funds. His emphasis on clear, quantifiable metrics like MOIC enabled investors to compare fund performance objectively and hold managers accountable for value creation. Under his leadership, Carlyle grew to become one of the world's largest private equity firms, managing over £300 billion in assets, and his influence on industry standards remains profound. Rubenstein's belief that "you can't manage what you don't measure" became a guiding principle for the entire private equity sector, making MOIC and related metrics central to how the industry evaluates and communicates investment success.
References
1. https://eqtgroup.com/en/thinq/Education/what-does-moic-mean-in-private-equity
2. https://www.fe.training/free-resources/private-equity/what-is-moic-in-private-equity/
3. https://www.allvuesystems.com/resources/what-is-moic-in-private-equity/
4. https://www.wallstreetprep.com/knowledge/moic-multiple-on-invested-capital/
5. https://www.careerprinciples.com/resources/multiple-on-invested-capital-moic
6. https://carta.com/learn/private-funds/management/fund-performance/moic/
7. https://calebblandlaw.com/blog/what-is-moic-in-the-context-of-private-equity/
8. https://www.financealliance.io/multiple-on-invested-capital-moic/
9. https://waveup.com/blog/understanding-moic-in-private-equity/

"It is not the possession of truth, but the success which attends the seeking after it, that enriches the seeker and brings happiness to him." - Max Planck - Nobel laureate
In the chapter 'Is the external world real?' from his 1932 book Where Is Science Going? The Universe in the Light of Modern Physics, Max Planck articulates a timeless philosophy on scientific endeavour. This reflection emerges amid discussions on the nature of reality, the limits of human knowledge, and the relentless drive of scientific inquiry1,2. Planck, a Nobel laureate in Physics, emphasises that true fulfilment lies not in grasping absolute truth - an elusive goal - but in the very act of pursuit, where each discovery enriches the mind and spirit3.
The Life and Legacy of Max Planck
Born in 1858 in Kiel, Germany, Max Karl Ernst Ludwig Planck grew up in a scholarly family during a time of intellectual ferment. He studied physics, mathematics, and philosophy at the universities of Munich and Berlin, where his teachers included Gustav Kirchhoff and Hermann von Helmholtz, and he earned his doctorate in 1879. Initially drawn to thermodynamics, Planck pivoted dramatically in 1900 when he resolved the discrepancies in black-body radiation later dubbed the 'ultraviolet catastrophe'. By introducing the concept of energy quanta - discrete packets rather than a continuous flow - he laid the cornerstone of quantum theory, revolutionising physics1,2.
Planck received the Nobel Prize in Physics in 1918 for this groundbreaking work. Yet his life was marked by profound personal tragedy: his first wife died in 1909, two of his daughters died in childbirth, and during the Nazi era his son Erwin was executed for alleged involvement in the plot to assassinate Hitler. Despite such losses, Planck remained a steadfast advocate for academic integrity, resisting Nazi interference in science while navigating the regime's pressures1. He led the Kaiser Wilhelm Society (predecessor to the Max Planck Society), embodying resilience and ethical commitment.
The Context of the Quote
Published in 1932, Where Is Science Going? captures Planck's mature reflections on quantum mechanics' upheavals, causality, free will, and science's philosophical boundaries. The quote appears in a meditation on whether the external world exists independently of observation - a question echoing quantum uncertainties. Planck argues that science progresses through imaginative leaps and persistent effort, not flawless logic alone. He likens the researcher's path to a labyrinth, lit by occasional insights amid errors, underscoring that the 'success which attends the seeking' fuels progress and personal growth2,3. This era followed quantum theory's consolidation by figures like Einstein, Bohr, and Heisenberg, prompting Planck to defend classical intuitions while embracing modernity.
Leading Theorists in the Pursuit of Truth in Physics
Planck's ideas resonate with pioneers who shaped the philosophy of scientific truth-seeking:
- Isaac Newton (1643-1727): His Principia Mathematica exemplified methodical pursuit, blending experiment and mathematics to uncover universal laws. Newton viewed science as approximating divine order, much like Planck's quest for underlying forces3.
- Albert Einstein (1879-1955): Often described as Planck's 'spiritual heir', Einstein extended the quantum hypothesis to light itself in his 1905 work on the photoelectric effect and went on to develop relativity, at times clashing with yet collaborating alongside Planck. He shared the view that imagination precedes knowledge, famously insisting 'God does not play dice' while pursuing unified theories1,2.
- Niels Bohr (1885-1962): Founder of the Copenhagen interpretation, Bohr emphasised complementarity - wave-particle duality - highlighting science's probabilistic nature. His debates with Einstein mirrored Planck's tension between determinism and uncertainty1.
- Werner Heisenberg (1901-1976): Developer of the uncertainty principle, Heisenberg echoed Planck's quantum origins, stressing that observation shapes reality, aligning with the quote's focus on process over possession2.
- Erwin Schrödinger (1887-1961): His wave equation advanced quantum mechanics; his What is Life? influenced biology, reflecting Planck's holistic view of science bridging physics and philosophy1.
These theorists, connected through Planck's quantum revolution, illustrate that scientific truth emerges from collective, iterative striving - a theme central to the quote. Their legacies affirm Planck's wisdom: the journey itself illuminates and fulfils.
References
1. https://www.goodreads.com/author/quotes/107032.Max_Planck
2. https://en.wikiquote.org/wiki/Max_Planck
3. https://www.deeplook.ir/wp-content/uploads/2016/09/Max_Planck_Where_Is_Science_Going.pdf
4. https://www.goodreads.com/quotes/131973-it-is-not-the-possession-of-truth-but-the-success
5. https://www.whatshouldireadnext.com/quotes/max-planck-it-is-not-the-possession
6. https://www.azquotes.com/author/11714-Max_Planck/tag/science
7. https://todayinsci.com/P/Planck_Max/PlanckMax-Quotations.htm

"If your job is the task, then you're very highly [likely] going to be disrupted." - Jensen Huang - Nvidia CEO
Jensen Huang's observation that roles defined primarily by task execution face significant disruption risk represents a critical inflection point in how we understand artificial intelligence's impact on the workforce. This statement, made during his recent appearance on the Lex Fridman Podcast, encapsulates a perspective that has become increasingly central to Huang's public messaging about AI's trajectory-one that distinguishes sharply between the displacement of routine work and the evolution of human capability.
The Context of Huang's Remarks
Huang's statement arrives at a moment of considerable market anxiety regarding AI's disruptive potential. In recent weeks, software stocks have experienced significant pressure, with investors expressing concerns that artificial intelligence tools-particularly large language models like Claude-could render traditional enterprise software platforms obsolete. The iShares Expanded Tech-Software Sector ETF has declined nearly 22% year-to-date, reflecting broader apprehension about technological displacement.1 This market sentiment provided the backdrop for Huang's clarification of what he views as a fundamental misunderstanding about AI's relationship to human work.
What distinguishes Huang's framing is his deliberate parsing of different categories of employment. Rather than offering blanket reassurance that AI poses no threat to jobs, he instead articulates a more granular thesis: the vulnerability of any given role correlates directly with the degree to which that role can be reduced to discrete, repeatable tasks. This represents a more intellectually honest assessment than simple dismissal of disruption concerns, whilst simultaneously offering a pathway for workers and organisations to think strategically about adaptation.
Huang's Broader Vision: AI as Tool User, Not Tool Replacer
This statement must be understood within the context of Huang's larger argument about AI's fundamental nature. He has consistently maintained that markets have fundamentally miscalculated the threat AI poses to software companies, arguing instead that AI will function as an intelligent agent that uses existing software tools rather than replacing them.1 In his view, legacy enterprise platforms such as SAP and ServiceNow will continue to play vital roles because they "exist for a fundamentally good reason."1 AI, in this conception, becomes a layer of intelligence that sits atop existing infrastructure, amplifying human capability rather than rendering it redundant.
However, Huang's acknowledgement that task-based roles face disruption introduces important nuance to this optimistic framing. He is not arguing that AI poses no displacement risk whatsoever. Rather, he is suggesting that the risk is not uniformly distributed across the labour market. Roles that consist primarily of executing defined procedures-whether in software development, data entry, customer service, or routine analysis-face genuine disruption. Conversely, roles that require judgment, creativity, strategic thinking, and human connection remain substantially more resilient.
The Philosophical Underpinnings: Task Versus Purpose
Huang's distinction between task-based and purpose-driven work echoes themes that have emerged across technology leadership in recent months. At Nvidia itself, Huang has been notably aggressive in pushing employees to adopt AI tools across their workflows, famously responding to reports of managers discouraging AI use with the rhetorical question: "Are you insane?"2 His directive that "every task that is possible to be automated with artificial intelligence to be automated" reflects a conviction that the path forward involves embracing AI augmentation rather than resisting it.2
Yet this aggressive automation stance coexists with Huang's assertion that Nvidia continues to hire aggressively-the company brought on "several thousand" employees in the most recent quarter and remains "probably still about 10,000 short" of its hiring targets.2 This apparent contradiction resolves when one understands Huang's underlying thesis: automation of tasks does not necessarily eliminate employment; rather, it transforms the nature of work. Workers freed from routine task execution can focus on higher-order problems, strategic initiatives, and creative endeavours that machines cannot yet replicate.
The Broader Intellectual Landscape: Theorists of Technological Disruption
Huang's framework aligns with and draws from several established schools of thought regarding technological change and employment. The distinction between task-based and skill-based labour disruption has been central to economic analysis of automation for decades. David Autor, an economist at MIT, has extensively documented how technological change tends to polarise labour markets, eliminating routine middle-skill jobs whilst creating demand for both high-skill and low-skill positions. Autor's research suggests that the jobs most vulnerable to automation are precisely those that Huang identifies-roles defined by repetitive, rule-based task execution.
Similarly, Erik Brynjolfsson and Andrew McAfee, in their influential work on the "second machine age," have argued that digital technologies create a bifurcated labour market. Their analysis suggests that whilst routine cognitive and manual tasks face displacement, roles requiring complex problem-solving, emotional intelligence, and creative synthesis remain resilient. This framework provides intellectual scaffolding for Huang's more granular assessment of disruption risk.
The concept of "task-biased technological change" has also been explored by economists including Daron Acemoglu, who has examined how different technologies affect different categories of work. Acemoglu's research distinguishes between technologies that augment human capability and those that substitute for it-a distinction that maps closely onto Huang's characterisation of AI as a tool-using agent rather than a wholesale replacement for human labour.
AI as Infrastructure: The Longer View
Huang has recently articulated an even broader vision of AI's role in the economy, describing it as "no longer a single breakthrough or application" but rather "essential infrastructure."4 This framing positions AI alongside electricity, telecommunications, and the internet as foundational technologies that reshape economic activity across all sectors. From this perspective, the question is not whether AI will disrupt particular jobs-it almost certainly will-but rather how societies and organisations manage the transition and capture the productivity gains that AI enables.
This infrastructure metaphor carries important implications. Just as the electrification of manufacturing in the early twentieth century eliminated certain categories of jobs whilst creating entirely new industries and employment categories, AI's integration into economic life will likely produce similar dynamics. The workers most at risk are those whose roles consist primarily of executing tasks that AI can perform more efficiently. Those whose work involves judgment, strategy, relationship-building, and creative problem-solving face a different calculus-one in which AI becomes a tool that amplifies their effectiveness rather than a replacement for their labour.
The Nvidia Perspective: Pragmatism and Self-Interest
It is worth noting that Huang's analysis, whilst intellectually coherent, also reflects Nvidia's commercial interests. As the world's most valuable publicly traded company with a market capitalisation of $4.8 trillion, Nvidia has profound incentives to promote narratives that encourage AI adoption and investment.1 Huang's argument that AI will augment rather than replace human labour serves to assuage concerns that might otherwise dampen investment in AI infrastructure and applications.
Nevertheless, the substance of his argument-that task-based roles face greater disruption risk than purpose-driven ones-appears robust across multiple analytical frameworks. The distinction he draws is not merely self-serving rhetoric but reflects genuine economic dynamics that scholars and analysts across the ideological spectrum have documented.
Implications for Workers and Organisations
Huang's framework offers practical guidance for both individuals and organisations navigating the AI transition. For workers, the implication is clear: roles that can be fully specified as a series of tasks face genuine disruption risk. Conversely, developing capabilities in areas that require judgment, creativity, and human connection-areas where AI remains substantially less capable-represents a rational career strategy. For organisations, the message is equally straightforward: the path to productivity gains and competitive advantage lies not in wholesale replacement of human workers but in strategic deployment of AI to handle routine tasks, thereby freeing human talent for higher-value work.
This perspective also suggests that the anxiety currently gripping software stocks may be partially misplaced. If AI functions as a tool that uses existing software platforms rather than replacing them, then companies like ServiceNow and SAP may find their market positions strengthened rather than weakened by AI adoption. The software industry's role would evolve from direct human interaction to serving as the infrastructure layer upon which AI agents operate-a shift in function but not necessarily in fundamental value.
The Unresolved Tensions
Despite the coherence of Huang's framework, important questions remain unresolved. The transition period during which task-based jobs are displaced but new opportunities have not yet fully emerged could prove economically and socially disruptive. The pace of AI advancement may outstrip the ability of workers and educational systems to adapt. And the distribution of AI's productivity gains remains uncertain-whether those gains will be broadly shared or concentrated among capital owners and highly skilled workers remains an open question that Huang's analysis does not fully address.
Furthermore, Huang's optimism about continued hiring at Nvidia and other technology companies may not generalise across the broader economy. Whilst Nvidia can afford to hire aggressively whilst automating tasks, smaller organisations with tighter margins may face different pressures. The aggregate labour market effects of widespread AI adoption remain genuinely uncertain, despite Huang's confident assertions.
Conclusion: A Nuanced View of Disruption
Huang's statement that task-based roles face significant disruption risk whilst purpose-driven work remains resilient represents a more intellectually honest assessment of AI's impact than either blanket optimism or apocalyptic pessimism. It acknowledges genuine disruption whilst suggesting that the disruption is neither universal nor necessarily catastrophic. The framework aligns with established economic analysis of technological change and provides practical guidance for individuals and organisations seeking to navigate the AI transition strategically. Whether this optimistic vision of augmentation rather than replacement ultimately proves accurate will depend on policy choices, investment decisions, and the pace of technological development in the years ahead.
References
1. https://economictimes.com/news/new-updates/nvidia-ceo-makes-big-remark-on-ai-threat-to-software-companies-jensen-huang-claims-i-think-the-markets-got-it-/articleshow/128806859.cms
2. https://fortune.com/2025/11/25/nvidia-jensen-huang-insane-to-not-use-ai-for-every-task-possible/
3. https://www.businessinsider.com/ai-software-tech-stocks-sell-off-nvidia-jensen-huang-illogical-2026-2
4. https://www.coloradoai.news/quote-of-note-jensen-huang-ai-is-no-longer-a-single-breakthrough-or-application/
!["If your job is the task, then you’re very highly [likely] going to be disrupted." - Quote: Jensen Huang - Nvidia CEO](https://globaladvisors.biz/wp-content/uploads/2026/03/20260324_09h30_GlobalAdvisors_Marketing_Quote_JensenHuang_GAQ.png)
"Multiple expansion refers to the increase in a company's valuation multiple at exit relative to the multiple paid at entry, holding operating performance constant." - Multiple Expansion
Multiple expansion occurs when an asset is purchased at one valuation multiple and subsequently sold at a higher valuation multiple, with the increase in multiple representing a source of investment returns independent of operational improvements.1,2 This concept forms a cornerstone of private equity investment strategy, particularly in leveraged buyouts (LBOs) and consolidation transactions.
Core Mechanics
At its essence, multiple expansion is a form of arbitrage.2 A private equity firm acquires a company trading at a lower earnings multiple-for example, 7.0x EBITDA-and exits the investment at a higher multiple, such as 10.0x EBITDA.1 The difference between entry and exit multiples directly enhances returns to equity investors, independent of any improvement in the underlying business's financial performance.
Consider a practical example: a financial sponsor acquires a company generating £10 million in EBITDA at 7.0x, resulting in a purchase enterprise value of £70 million. If the sponsor later sells the same company at 10.0x EBITDA (assuming EBITDA remains constant), the enterprise value rises to £100 million. The 3.0x multiple expansion-from 7.0x to 10.0x-creates £30 million in additional value, even though the underlying business has not improved operationally.1
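The arithmetic in this example can be sketched in a few lines of Python. The document contains no code, so the function and variable names below are purely illustrative; the figures are those from the example above:

```python
def enterprise_value(ebitda_m: float, multiple: float) -> float:
    """Enterprise value (in £ millions) as EBITDA times the valuation multiple."""
    return ebitda_m * multiple

ebitda = 10.0                                # £10m EBITDA, unchanged through the hold
entry_ev = enterprise_value(ebitda, 7.0)     # £70m purchase enterprise value
exit_ev = enterprise_value(ebitda, 10.0)     # £100m exit enterprise value

# Value created purely by the 3.0x multiple expansion, with no operational change
expansion_value = exit_ev - entry_ev
print(expansion_value)  # 30.0, i.e. £30m
```

The point of holding `ebitda` constant is to isolate the multiple as the sole driver of the £30 million gain; in a real transaction EBITDA growth and debt paydown would layer on top of this figure.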
Multiple Expansion in Consolidation Strategies
Multiple expansion proves particularly powerful in industry consolidation or "roll-up" strategies, where private equity firms acquire multiple smaller companies and combine them into a larger entity.3 Smaller companies typically command lower valuation multiples than larger competitors. For instance, a company with £500,000 to £1 million in EBITDA might trade at 4-7x EBITDA, whilst a company with £10 million in EBITDA might trade at 10x EBITDA.3
A concrete illustration demonstrates this principle: suppose a private equity firm acquires ten smaller companies, each generating £1 million in EBITDA and individually valued at 6x EBITDA (£6 million each). The total acquisition cost is £60 million. When consolidated into a single entity with £10 million in combined EBITDA, the aggregated company may command a 10x multiple, resulting in a £100 million valuation.3 The firm has created £40 million in value purely through multiple expansion, without requiring operational improvements.
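As a minimal sketch of the roll-up arithmetic above (again with illustrative names, using the figures from the example):

```python
# Roll-up: ten targets, each generating £1m EBITDA, bought individually at 6.0x
n_targets = 10
target_ebitda = 1.0          # £ millions per target
entry_multiple = 6.0         # multiple paid for each small company
exit_multiple = 10.0         # multiple the combined entity may command

total_cost = n_targets * target_ebitda * entry_multiple   # £60m total acquisition cost
combined_ebitda = n_targets * target_ebitda               # £10m combined EBITDA
combined_value = combined_ebitda * exit_multiple          # £100m consolidated valuation

# Value created purely by the re-rating from 6.0x to 10.0x
value_created = combined_value - total_cost
print(value_created)  # 40.0, i.e. £40m
```

Note the assumption doing the work here: the market must actually award the larger entity the higher multiple, which in practice depends on successful integration.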
Intrinsic versus Market Multiple Expansion
Multiple expansion can be decomposed into two components: market-driven and intrinsic.5 Market multiple expansion reflects broader economic and industry conditions that cause valuation multiples to rise across the sector. Intrinsic multiple expansion, by contrast, results from management actions and operational improvements that cause a portfolio company to outperform its market.5
Intrinsic multiple expansion is achieved through strategies such as expanding product or service offerings, entering new geographic markets, reducing customer concentration, implementing improved pricing strategies, forming strategic partnerships, executing complementary acquisitions, and divesting non-core assets.5 For example, if a company's EBITDA multiple increases from 5.0x to 6.5x (+30%) whilst the market multiple increases from 8.0x to 10.0x (+25%), the company has generated positive intrinsic multiple expansion of approximately 5% relative to market performance.5
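The decomposition in the example can be sketched as follows, using the simple spread-over-market convention the text describes (names are illustrative):

```python
def growth(start: float, end: float) -> float:
    """Percentage change in a valuation multiple over the holding period."""
    return (end - start) / start

company = growth(5.0, 6.5)     # +30% expansion in the company's multiple
market = growth(8.0, 10.0)     # +25% expansion in the market multiple

# Intrinsic expansion approximated as the company's spread over the market
intrinsic = company - market
print(round(intrinsic, 2))  # 0.05, i.e. roughly +5% intrinsic expansion
```

A multiplicative decomposition (1.30 / 1.25 - 1, about +4%) is an equally defensible convention; the simple difference shown here matches the approximately 5% figure quoted above.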
Mathematical Framework
The equity return contribution from multiple expansion can be expressed as:
\text{Multiple expansion return} = \frac{\text{Exit multiple} - \text{Entry multiple}}{\text{Entry multiple}} \times 100\%
In the earlier example with entry at 7.0x and exit at 10.0x:
\text{Multiple expansion return} = \frac{10.0\text{x} - 7.0\text{x}}{7.0\text{x}} \times 100\% = 42.9\%
This return is realised purely from the change in valuation multiple, independent of EBITDA growth or leverage paydown.
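For completeness, the formula reduces to a one-line function (an illustrative sketch, not a standard library routine):

```python
def multiple_expansion_return(entry_multiple: float, exit_multiple: float) -> float:
    """Percentage return contribution from multiple expansion alone,
    holding EBITDA and net debt constant."""
    return (exit_multiple - entry_multiple) / entry_multiple * 100

print(round(multiple_expansion_return(7.0, 10.0), 1))  # 42.9
```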
Practical Considerations
Whilst multiple expansion offers significant return potential, several factors influence its realisation. Market conditions at exit substantially affect achievable multiples; economic downturns may compress multiples across industries, limiting expansion opportunities. Additionally, the initial purchase multiple reflects market perception of risk; companies purchased at low multiples often carry higher operational or market risk, which may persist through the holding period.2
Successful multiple expansion frequently requires integration and realisation of synergies. When combining acquired companies, private equity sponsors identify revenue synergies and cost-saving opportunities that enhance EBITDA, thereby supporting higher exit multiples.3 Without such operational improvements, achieving multiple expansion becomes dependent entirely on favourable market conditions at exit.
Historical Context and Key Theorist: Henry Kravis
Henry Kravis, co-founder of Kohlberg Kravis Roberts & Co. (KKR), stands as the seminal figure in popularising and systematising multiple expansion as a core private equity value creation driver. Born in 1944, Kravis revolutionised the leveraged buyout industry during the 1980s and 1990s, establishing KKR as one of the world's most influential private equity firms.
Kravis's relationship to multiple expansion stems from his pioneering work in LBO structuring and portfolio company management. During the 1980s, when KKR executed landmark transactions including the £24 billion acquisition of RJR Nabisco in 1989-then the largest LBO ever completed-Kravis demonstrated that substantial equity returns could be generated not merely through debt paydown or EBITDA growth, but through strategic acquisition of undervalued assets and their subsequent sale at market-appropriate multiples.
Kravis's investment philosophy centred on identifying companies trading below intrinsic value, improving operational performance through active management, and exiting when market conditions permitted multiple expansion. This approach required deep industry expertise, disciplined capital allocation, and patience in holding periods-principles that became foundational to modern private equity practice.
Born in Tulsa, Oklahoma, Kravis studied economics at Cornell University before earning an MBA from Columbia Business School. He joined Bear Stearns in 1969, where he worked alongside Jerome Kohlberg Jr., pioneering early LBO techniques. In 1976, Kravis and Kohlberg, along with George Roberts, established KKR, which grew to manage hundreds of billions in assets across multiple continents.
Kravis's legacy extends beyond transaction execution; he articulated and formalised the theoretical framework through which private equity creates value. His emphasis on multiple expansion as a distinct return driver-separate from operational improvement and leverage paydown-provided clarity to investors and shaped how the industry measures and communicates value creation. Through KKR's portfolio company management practices, Kravis demonstrated that multiple expansion could be systematically pursued through industry consolidation, operational excellence, and strategic capital deployment.
His work during the 1980s and 1990s established the template for modern private equity, wherein multiple expansion remains a primary objective alongside operational value creation. Kravis's influence persists in contemporary private equity strategy, particularly in consolidation plays and industry roll-ups, where the acquisition of smaller, lower-multiple businesses and their combination into larger, higher-multiple entities directly reflects principles he pioneered.
References
1. https://www.wallstreetprep.com/knowledge/multiple-expansion/
2. https://corporatefinanceinstitute.com/resources/valuation/multiple-expansion/
3. https://hillviewps.com/the-concept-of-multiples-expansion-how-most-private-equity-works/
4. https://multipleexpansion.com/2020/02/13/multiple-expansion-definition/
5. https://auxiliamath.com/how-pe-managers-drive-intrinsic-multiple-expansion/
6. https://www.youtube.com/watch?v=ngn7J61iRqA
7. https://www.divestopedia.com/definition/864/multiple-expansion/
8. https://kailashconcepts.com/multiple-expansion-and-stock-performance/
9. https://www.wallstreetoasis.com/resources/skills/valuation/multiple-expansion

"A new scientific truth does not generally triumph by persuading its opponents and getting them to admit their errors, but rather by its opponents gradually dying out and giving way to a new generation that is raised on it." - Max Planck - Nobel laureate
The observation that scientific progress often requires generational change rather than individual conversion represents one of the most candid reflections on the nature of scientific advancement. This principle emerged from the lived experience of one of the twentieth century's most transformative physicists, whose own struggles to gain acceptance for revolutionary ideas shaped his understanding of how science actually evolves.
Max Planck: The Reluctant Philosopher of Science
Max Planck (1858-1947) was a German theoretical physicist whose contributions fundamentally altered our understanding of matter and energy.1 As the originator of quantum theory, Planck discovered that energy is emitted in discrete packets called quanta, a finding that would eventually underpin modern physics and enable the theoretical frameworks of Einstein and subsequent generations of scientists.3 Yet despite the revolutionary nature of his work, Planck's path to recognition was neither swift nor universally celebrated.
Planck's reflection on scientific change emerged not from abstract philosophical speculation but from personal frustration. In his own words, recorded in his 1949 Scientific Autobiography, he expressed the pain of his experience: "It is one of the most painful experiences of my entire scientific life that I have but seldom…[succeeded] in gaining universal recognition for a new result, the truth of which I could demonstrate by a conclusive, albeit only theoretical proof."1 This candid admission reveals that Planck's principle was born from the gap between theoretical demonstration and practical acceptance-a gap he experienced acutely throughout his career.
The Genesis and Context of the Principle
Planck articulated his observation in his Scientific Autobiography, published posthumously in 1949 (originally in German in 1948, the year after his death). The fuller formulation reads: "A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it."2 He elaborated further: "An important scientific innovation rarely makes its way by gradually winning over and converting its opponents: it rarely happens that Saul becomes Paul. What does happen is that its opponents gradually die out, and that the growing generation is familiarized with the ideas from the beginning: another instance of the fact that the future lies with the youth."2
What makes this observation remarkable is that Planck himself identified it as such-he called it "a remarkable fact."1 The principle addresses a fundamental tension in scientific practice: despite science's claim to objectivity and rationality, it remains a deeply human endeavour, subject to the psychological, social, and biological constraints that govern all human activity. Planck recognised that the triumph of new scientific truths depends not primarily on the logical force of evidence, but on the passage of time and the natural succession of generations.
Interpreting Planck's Insight: Multiple Dimensions
Scholars have identified several complementary interpretations of Planck's principle, each illuminating different aspects of scientific change.1 One interpretation emphasises the age-related dimension: older scientists, having invested their careers and reputations in existing theoretical frameworks, may be psychologically and professionally resistant to paradigm shifts. Younger scientists, by contrast, encounter new ideas without the burden of prior commitment and can adopt them more readily.
A second interpretation connects Planck's observation to Karl Popper's philosophy of science, particularly the concept of falsifiability. Where Popper emphasised rational refutation of theories, Planck's principle suggests that scientific change operates through a different mechanism-not conversion through logical argument, but replacement through generational succession.1 This distinction matters: it implies that scientific progress may be less rational and more evolutionary than philosophers of science have traditionally assumed.
A third, perhaps most fundamental interpretation treats Planck's statement as a truism-an important but often overlooked truth about the biological reality of scientific practice.1 Science progresses not because individual minds are particularly malleable or rational, but because the human lifespan is finite. New theories need not convince everyone; they need only survive long enough for their proponents to train the next generation whilst their opponents eventually pass away. This interpretation emphasises that science, despite its aspirations to transcend human limitation, remains embedded in human biology and mortality.
The Principle in Practice: Quantum Theory and Beyond
Planck's own experience with quantum theory exemplifies his principle. When he first proposed that energy is quantised - emitted in discrete packets rather than continuously - the idea met considerable resistance from the established physics community; even Planck himself long treated quanta as a mathematical device rather than a physical reality. Yet within a generation, quantum mechanics became the foundation of modern physics, not because the sceptics suddenly saw the light, but because a new generation of physicists - including Werner Heisenberg, Erwin Schrödinger, and Paul Dirac - grew up with quantum ideas as their intellectual inheritance.
The principle has proven remarkably durable. In 1962, Thomas S. Kuhn cited Planck's insight in his landmark work The Structure of Scientific Revolutions, using it to support his argument that scientific progress occurs through paradigm shifts rather than gradual accumulation of knowledge.3 Economist Paul A. Samuelson popularised a more concise formulation-"Science progresses one funeral at a time"-which captured the principle's essence in memorable language.3 This phrasing, whilst somewhat macabre, underscores the principle's central claim: generational succession, not rational persuasion, drives scientific change.
The Broader Theoretical Landscape
Planck's principle intersects with several major theoretical frameworks in the philosophy and sociology of science. Thomas Kuhn's concept of paradigm shifts directly engages with Planck's observation: paradigms change not because scientists within the old paradigm convert to the new one, but because the old paradigm's defenders eventually retire and die, whilst younger scientists adopt the new paradigm from the outset.3 This process explains why scientific revolutions often appear sudden and discontinuous rather than gradual.
The principle also resonates with sociological studies of scientific knowledge. Rather than viewing science as a realm of pure rationality insulated from social and psychological factors, this perspective acknowledges that scientists are human beings embedded in social networks, professional hierarchies, and generational cohorts. Their acceptance or rejection of new ideas depends not only on evidence but on factors such as professional investment, social standing, and the timing of their entry into the field.
Furthermore, Planck's insight challenges the traditional image of scientific progress as a steady march toward truth. Instead, it suggests a more complex picture: scientific change involves both rational evaluation of evidence and irrational human factors such as professional pride, institutional inertia, and the simple fact of mortality. This does not diminish science's achievements; rather, it acknowledges that science succeeds despite, and sometimes because of, its human dimensions.
Limitations and Nuances
Whilst Planck's principle captures something important about scientific change, it requires qualification. Not all scientific progress depends on generational succession. Sometimes individual scientists do change their minds when confronted with compelling evidence. Moreover, the principle may apply differently across disciplines: experimental sciences with clear empirical benchmarks may see faster conversion of individuals than theoretical fields where evidence is more ambiguous. Additionally, in contemporary science with rapid communication and large collaborative teams, the generational mechanism may operate differently than it did in Planck's era.
The principle also risks oversimplifying the psychology of scientific belief. Scientists are not uniformly stubborn or open-minded; individual variation is substantial. Some older scientists prove remarkably receptive to new ideas, whilst some younger ones cling to outdated frameworks. Planck's statement describes a statistical tendency rather than an iron law.
Legacy and Contemporary Relevance
Planck's principle remains strikingly relevant in contemporary science. Recent empirical research has suggested that the principle holds true: studies examining citation patterns and the adoption of new theories across scientific fields have found evidence that scientific change does indeed correlate with generational succession.5 This finding lends empirical weight to Planck's cynical but penetrating observation about the human side of science.
The principle also offers perspective on current scientific controversies. When new theories encounter resistance from established researchers, Planck's insight suggests patience: the theory need not convince its opponents, only survive long enough to become the intellectual foundation of the next generation. This perspective neither dismisses the importance of evidence nor ignores the reality that scientific communities are composed of human beings with all their attendant limitations and biases.
Ultimately, Planck's principle stands as a humble acknowledgement that science, despite its extraordinary achievements, remains a human activity. Its progress depends not only on the power of ideas and the weight of evidence, but on the passage of time, the succession of generations, and the simple biological fact that we all eventually die. In recognising this, Planck offered not a cynical dismissal of science but a more realistic and ultimately more profound understanding of how human knowledge actually advances.
References
1. https://buyscience.wordpress.com/history-of-science/plancks-principle/
2. https://en.wikipedia.org/wiki/Planck's_principle
3. https://quoteinvestigator.com/2017/09/25/progress/
4. https://insertphilosophyhere.com/science-its-tricky/
5. https://www.chemistryworld.com/news/science-really-does-advance-one-funeral-at-a-time-study-suggests/3010961.article
6. https://www.ophthalmologytimes.com/view/moving-forward-does-science-progress-one-funeral-at-a-time-
