A daily bite-size selection of top business content.
PM edition. Issue number 1278
Latest 10 stories. Click the button for more.
"You can't take money with you. And if you can't do good things with it, you're a bloody fool." – Natie Kirsh
Few business stories illustrate the power of disciplined execution better than that of Natie Kirsh. From roots in South Africa’s grain and food sectors, he built a wholesale model around efficiency, scale and customer relevance, ultimately developing Jetro Holdings into one of the most significant food distribution businesses in the United States.1
That journey reached a new milestone in March 2026, when Sysco agreed to acquire Jetro Restaurant Depot for US$29,1 billion, with shareholders to receive US$21,6 billion in cash and 91,5 million Sysco shares.1, 2 At roughly R499 billion, the transaction stands as one of the largest international deals associated with a Southern African entrepreneur, highlighting the scale of value that can be created through patient, operationally focused growth.1
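As a back-of-envelope consistency check on the reported terms (an illustrative calculation, not a figure from the cited releases), the value implied for the share component can be backed out from the headline numbers:

```python
# Illustrative check on the reported Sysco/Jetro deal terms.
total_usd_bn = 29.1   # headline deal value, US$ billion
cash_usd_bn = 21.6    # cash component, US$ billion
shares_mn = 91.5      # Sysco shares to be issued, millions

# The non-cash remainder is the stock component.
stock_component_bn = total_usd_bn - cash_usd_bn
implied_price_per_share = stock_component_bn * 1e9 / (shares_mn * 1e6)

print(f"Stock component: US${stock_component_bn:.1f}bn")
print(f"Implied value per Sysco share: US${implied_price_per_share:.2f}")
```

The roughly US$82 implied per share is simply the residual stock value divided by the share count; the actual value realised would depend on Sysco's share price at closing.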
A business case study in focused scale
Kirsh’s achievement was not built on speculative markets or short-term financial engineering. It was built on a clear and demanding model: serving independent retailers and restaurants with reliable access to bulk goods at competitive prices.1 He established Jetro Cash & Carry in New York in 1976 and expanded it into a national platform by solving a practical sourcing problem for smaller operators.1, 3
What makes the case strategically important is the clarity of the proposition. Jetro and Restaurant Depot served a fragmented customer base, addressed a real operating pain point and scaled through consistency rather than constant reinvention.1 For the 2025 calendar year, the business generated approximately US$16 billion in revenue, around US$2,1 billion in EBITDA and about US$1,9 billion in free cash flow, while operating 166 warehouses across 35 US states.2, 4
The acquisition also validates the strategic attractiveness of the channel itself. Sysco said the deal would expand its position in a higher-margin, growing and resilient cash-and-carry segment, giving it stronger access to independent food businesses.2 In practical terms, Kirsh did not simply build a successful company; he built an asset important enough to alter the structure of a major market.2
Capital, legacy and philanthropy
Large liquidity events inevitably raise a second question: what should happen next to wealth on this scale? Kirsh’s own remark points towards one answer — that capital should have purpose beyond accumulation alone.1 That idea sits comfortably alongside a broader international expectation that exceptional wealth should increasingly be matched by exceptional public contribution.5
The best-known expression of that principle is the Giving Pledge, launched by Bill Gates, Melinda French Gates and Warren Buffett, which encourages the world’s wealthiest individuals and families to commit more than half their wealth to philanthropy.5 Whether through formal pledges or quieter long-term giving, the principle is similar: great fortunes create the capacity to support education, healthcare, social cohesion and opportunity at a scale few institutions can match.5
In that sense, philanthropy is not separate from business legacy; it is one of its highest expressions. For founders who have already shown an ability to allocate capital with discipline in commerce, the next test is whether they can deploy it just as thoughtfully in service of society.5
References
1. SA Jewish Report – Natie Kirsh exits food empire in US$29 billion deal
2. Sysco investor release – Sysco to acquire Jetro Restaurant Depot
3. Nathan Kirsh – background and business profile
4. Financial content syndication of Sysco announcement – operating footprint and revenue
5. The Giving Pledge – overview
6. Forward – Natie Kirsh and the Shine A Light campaign
"A blockchain is a shared, immutable, decentralized digital ledger that records transactions in 'blocks' cryptographically linked into a 'chain,' creating a secure, transparent, and tamper-proof history of data, validated by network participants instead of a central authority." - Blockchain
A blockchain is a distributed ledger technology that enables the secure sharing and recording of information across a network of participants without requiring a central authority or intermediary.1 At its core, blockchain functions as a decentralised database where data is stored in cryptographically linked blocks, with each block containing a cryptographic hash of the previous block, a timestamp, and transaction data.2 This creates an immutable chain where any attempt to alter historical records would require changing all subsequent blocks and gaining consensus from the network majority, making tampering virtually impossible.2
Core Technical Characteristics
Blockchain technology operates on three fundamental principles. First, it employs cryptographic security, requiring both a public key (the address in the database) and a private key (an individualised authentication credential) to access or add data.1 Second, it functions as a fully digital ledger where transactions are recorded chronologically and permanently online.1 Third, it is distributed across a public or private network, with copies simultaneously updated across all participating nodes rather than existing in a single location.1,3
The structural integrity of blockchain relies on cryptographic hashing, where each block contains a unique identifier derived from its data and the previous block's hash. This iterative process confirms the integrity of every preceding block back to the genesis block (Block 0).2 The immutability of blockchain stems from its design: new blocks are added sequentially, and the probability of an entry being superseded decreases exponentially as more blocks are built upon it.2
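The hash-linking just described can be sketched in a few lines of Python. This is a toy illustration of the principle, not any production blockchain; the block fields and helper names are invented for the example:

```python
import hashlib
import json
import time

def block_hash(block):
    """Hash a block's contents, excluding its own stored hash field."""
    payload = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(data, prev_hash):
    """Create a block linked to its predecessor via prev_hash."""
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

# Genesis block (Block 0) followed by two successors, each linked by hash.
chain = [make_block("genesis", "0" * 64)]
chain.append(make_block("tx: alice->bob 5", chain[-1]["hash"]))
chain.append(make_block("tx: bob->carol 2", chain[-1]["hash"]))

def is_valid(chain):
    """Recompute every hash and verify each link back to the genesis block."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

print(is_valid(chain))                     # True: chain intact
chain[1]["data"] = "tx: alice->bob 500"    # tamper with a historical record
print(is_valid(chain))                     # False: stored hash no longer matches
```

Altering one historical block invalidates its stored hash, and fixing that hash would in turn break every subsequent `prev_hash` link, which is the iterative integrity check the paragraph above describes.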
Consensus and Validation Mechanisms
When new data is added to the network, the majority of nodes must verify and confirm its legitimacy through consensus mechanisms: protocols that use either permissions or economic incentives to reach agreement.1 Common consensus algorithms include proof of work (PoW), where computational effort validates transactions, and proof of stake (PoS), where validators are chosen based on their stake in the network.4 Once consensus is reached, a new block is created and attached to the chain, and all nodes are updated to reflect the revised ledger.1
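In its simplest form, proof of work is a brute-force search for a nonce that gives the block a hash meeting a difficulty target. A minimal sketch, assuming a leading-zeros target in the style of Bitcoin-like PoW (the block string and difficulty are arbitrary example values):

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Search for a nonce so that SHA-256(block_data + nonce)
    begins with `difficulty` hexadecimal zeros."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce
        nonce += 1

nonce = mine("block: tx batch #42", difficulty=4)
digest = hashlib.sha256(f"block: tx batch #42{nonce}".encode()).hexdigest()
print(nonce, digest[:12])  # the digest begins with "0000"
```

Each extra zero of difficulty multiplies the expected search effort by 16, which is why validation (one hash) is cheap while forging history (re-mining every subsequent block) is prohibitively expensive.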
Key Advantages and Applications
Blockchain eliminates the need for trusted intermediaries such as banks, enabling direct peer-to-peer transactions with built-in security and transparency.3 The technology solves the long-standing problem of double-spending (ensuring that each unit of value is transferred only once) by creating a verifiable, permanent record.2 Its applications extend across multiple sectors: cryptocurrencies use blockchain to validate coin ownership; supply chain management employs it to track asset movements with temperature and condition data; and healthcare and finance leverage it for secure, auditable transaction records.4 The decentralised nature reduces fraud risk, improves efficiency, and enhances accountability without requiring a central authority to oversee transactions.4
Public, Private, and Hybrid Variants
Blockchain networks exist in multiple forms. Public blockchains, such as Bitcoin, allow anyone to open a wallet or become a node, creating fully transparent networks.1 Private blockchains restrict participation to known entities and are more applicable to banking and fintech, where organisations need precise control over who participates and accesses data.1 Consortium blockchains and hybrid blockchains combine aspects of both, offering flexibility for specific organisational needs.1
Don Tapscott: Blockchain's Contemporary Theorist
Don Tapscott, a Canadian technology strategist and business executive, has emerged as one of the most influential contemporary theorists of blockchain adoption and its societal implications. Born in 1947, Tapscott built his career as a futurist and organisational strategist, initially gaining prominence through his work on the digital economy and generational theory, particularly his concept of the "Net Generation." His intellectual trajectory positioned him uniquely to recognise blockchain's transformative potential beyond cryptocurrency.
Tapscott's relationship with blockchain deepened significantly in the mid-2010s when he co-authored Blockchain Revolution (2016) with his son Alex Tapscott, a work that became foundational in mainstream discourse about blockchain's applications. Rather than viewing blockchain merely as a technical innovation, Tapscott framed it as a paradigm shift in how trust, value, and information are exchanged in society. He articulated how blockchain could disintermediate industries, removing unnecessary middlemen and returning power to individuals and organisations.
His strategic framework emphasises blockchain's capacity to create what he terms "the Internet of Value," where assets, intellectual property, and identity can be transferred as seamlessly as information currently flows across the internet. Tapscott has consistently advocated for blockchain's application in governance, supply chain transparency, and financial inclusion, particularly for unbanked populations in developing economies. His work bridges the gap between technical blockchain developers and business leaders, translating cryptographic concepts into strategic imperatives for organisational transformation.
Tapscott's influence extends through his advisory roles with governments and international organisations, where he has promoted blockchain literacy and policy frameworks. His emphasis on blockchain as a tool for decentralisation and democratisation, rather than merely a speculative asset class, has shaped how institutional leaders conceptualise the technology's long-term value. His biographical arc from digital economy theorist to blockchain strategist exemplifies how foundational understanding of technological disruption enables recognition of paradigm-shifting innovations.
References
1. https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-blockchain
2. https://en.wikipedia.org/wiki/Blockchain
3. https://mitsloan.mit.edu/ideas-made-to-matter/blockchain-explained
4. https://www.ibm.com/think/topics/blockchain
5. https://aws.amazon.com/what-is/blockchain/
6. https://www.pwc.com/us/en/industries/financial-services/fintech/bitcoin-blockchain-cryptocurrency.html
7. https://www.youtube.com/watch?v=qQJOdRFsdsg

"This is now how I deal with anxiety... I first break it down, and then I'm gonna tell myself, 'Okay, there are some things you can do something about, there's some things you can't do anything about. But for the stuff that you can do something about, let's reason about it and let's go do it.'" - Jensen Huang - Nvidia CEO
This quote comes from Jensen Huang's appearance on the Lex Fridman Podcast #494, titled "NVIDIA - The $4 Trillion Company & the AI Revolution," released on March 23, 2026.[SOURCE]
Context from Recent Interviews
Huang has openly discussed his constant state of anxiety, describing it as a driving force fueled by Nvidia's near-bankruptcies in the 1990s and a persistent fear of failure. He maintains the mindset that Nvidia is always "30 days from going out of business," even as the company reached a $5 trillion valuation.1,2,3
- Huang admits to working 7 days a week, including holidays, in a "constant state of anxiety," checking emails from 4 a.m. daily.2,3
- He views vulnerability and suffering as essential to leadership, stating that fear of failure motivates him more than ambition or success.1,3
- Success, per Huang, involves "long periods of loneliness, humiliation, and fear," but embracing these builds resilience.1,2
Leadership Insights
Huang emphasizes that leaders should not pretend to be perfect, as openness to mistakes enables adaptation. This anxiety management technique aligns with his philosophy: reason through controllable factors and act decisively, while accepting the uncontrollable.[SOURCE]1
Tags: Jensen Huang, Nvidia, Lex Fridman, disruption, AI, artificial intelligence, quote, leadership, resilience, anxiety
References
1. https://economictimes.com/magazines/panache/nvidia-ceo-jensen-huang-says-he-is-always-in-a-state-of-anxiety-reveals-the-fear-that-fuels-his-drive/articleshow/125768298.cms
2. https://fortune.com/2025/12/04/nvidia-ceo-admits-he-works-7-days-a-week-including-holidays-in-a-constant-state-of-anxiety-out-of-fear-of-going-bankrupt/
3. https://www.businessinsider.com/nvidia-ceo-jensen-huang-joe-rogan-2025-12
4. https://www.easttexasreview.com/nvidia-ceos-consuming-anxiety-solutions/
"There's no question OpenClaw is the iPhone of tokens." - Jensen Huang - Nvidia CEO
This statement reflects Huang's broader vision of OpenClaw as a transformative platform. In related remarks at the Morgan Stanley Technology, Media and Telecom Conference on March 4, 2026, Huang described OpenClaw as "probably the single most important release of software, probably ever," noting that it surpassed Linux in downloads within just three weeks, a feat that took Linux approximately 30 years to achieve.1
The "iPhone of tokens" metaphor positions OpenClaw as a foundational, consumer-friendly platform that democratizes access to AI agent infrastructure, much as the iPhone revolutionized mobile computing. This aligns with Huang's broader strategic messaging about tokens becoming the new commodity in AI infrastructure and his announcement that Nvidia engineers will receive annual inference budgets worth $100,000 to $150,000 in AI compute credits.4
Context: OpenClaw is an open-source framework for AI agents: autonomous systems capable of continuous operation and complex task execution.1 The platform's rapid adoption and subsequent security vulnerabilities have made it a focal point in discussions about AI infrastructure scalability and risk management in enterprise environments.
References
1. https://globaladvisors.biz/2026/03/06/quote-jensen-huang-nvidia-ceo-3/
2. https://www.eweek.com/news/nvidia-inference-ai-economy-agents-gtc-2026/
3. https://www.youtube.com/watch?v=kDd24YOeqQQ
4. https://buttondown.com/the200dollarceo/archive/jensen-huang-will-pay-engineers-150k-in-ai-tokens/
"Prediction markets are online exchanges where people trade contracts on the outcomes of future events, aggregating collective wisdom to forecast results, with contract prices reflecting the market's perceived probability of an event, like an election or economic data, occurring." - Prediction market
Prediction markets are online platforms where participants trade contracts tied to the outcomes of future events, such as elections, economic indicators, or corporate milestones. These contracts, often binary in nature, pay out a fixed amount (typically $1) if the event occurs and nothing otherwise, with their prices directly reflecting the market's collective assessment of the event's probability.1,2,3 This mechanism harnesses the wisdom of crowds, incentivising traders with financial stakes to reveal their information, often outperforming expert forecasts or polls thanks to the skin-in-the-game dynamic.1,2
How Prediction Markets Function
Trading occurs via mechanisms like continuous double auctions, automated market makers, or parimutuel pools, enabling efficient price discovery.1 For instance, a contract trading at 72 cents implies a 72% perceived probability of the event.3 Contract types include:
- Winner-take-all: Binary yes/no payouts, most common for discrete events.1,6
- Index contracts: Payouts varying continuously, e.g., based on vote shares or sales figures, reflecting expected values.1,6
- Combinatorial markets: Bets on outcome combinations, enhancing conditional probability incorporation.2
Markets can use real or virtual currency, with public examples like PredictIt (politics/finance), Polymarket (decentralised on blockchain), and Metaculus (reputation-based forecasting).2,4
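The price-as-probability reading of a binary contract can be made concrete with two small helper functions (hypothetical names, assuming the standard $1-payout contract described above):

```python
def implied_probability(price_cents: float) -> float:
    """For a binary contract paying $1, the price in cents approximates
    the market's probability estimate for the event."""
    return price_cents / 100.0

def expected_profit(price_cents: float, your_prob: float) -> float:
    """Expected profit per contract when buying at `price_cents`
    while believing the event occurs with probability `your_prob`."""
    cost = price_cents / 100.0
    return your_prob * 1.0 - cost  # payout is $1 if the event occurs

# The 72-cent contract from the example above implies a 72% probability.
print(implied_probability(72))                  # 0.72
# A trader who believes the true probability is 80% has a positive edge.
print(round(expected_profit(72, 0.80), 2))      # 0.08 per contract
```

This expected-value gap is the incentive that draws privately held information into the price: traders who disagree with the market's probability profit, on average, by trading against it.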
Applications and Evidence of Efficacy
Corporations leverage internal prediction markets for project timelines, sales forecasts, risk assessment, and strategic planning.1,2 Eli Lilly used them in 2005 to predict drug trial success, and Google has used them for product launches and office openings.2 Studies show superior accuracy, for example forecasting Iowa flu outbreaks weeks in advance.2 Eric Zitzewitz notes that their efficiency is akin to that of financial markets.2
Key Theorist: Robin Hanson and the Genesis of Formal Prediction Market Theory
Robin Hanson, an economist renowned for pioneering prediction markets as tools for information aggregation, stands as the field's preeminent theorist. Born in 1959, Hanson earned a BS in physics from the University of California, Irvine (1981), followed by graduate study at the University of Chicago, where he took an MS in physics and an MA in the conceptual foundations of science (1984). Shifting to the social sciences, he completed a PhD in social science at Caltech (1997).2
Hanson's seminal contributions began in the 1990s at Lockheed and NASA, where he modelled organisations via market processes. His paper 'Combinatorial Information Market Design' (2003) proposed the logarithmic market scoring rule, a subsidised automated market maker that yields cheap, truth-revealing forecasts even in thin markets.2 As a professor at George Mason University and a research associate of the Future of Humanity Institute, Hanson also developed futarchy: governance by betting on policies' outcomes rather than voting on them, formalised in his paper 'Shall We Vote on Values, But Bet on Beliefs?', which argues that prediction markets elicit honest beliefs better than surveys. Books like The Age of Em (2016) extend his futurology. Hanson's work underpins platforms like Augur and theoretical accounts of how markets aggregate dispersed knowledge.1,2
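Hanson's logarithmic market scoring rule (LMSR) can be sketched directly from its standard formulation, C(q) = b·ln(Σᵢ exp(qᵢ/b)), where traders pay the change in C for the shares they buy. The liquidity parameter and trade sizes below are arbitrary illustrative values:

```python
import math

def lmsr_cost(quantities, b):
    """LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def lmsr_prices(quantities, b):
    """Instantaneous outcome prices (implied probabilities); they sum to 1."""
    weights = [math.exp(q / b) for q in quantities]
    total = sum(weights)
    return [w / total for w in weights]

b = 100.0            # liquidity parameter: bounds the subsidiser's worst-case loss
q = [0.0, 0.0]       # shares outstanding for outcomes YES / NO
print(lmsr_prices(q, b))   # an untraded market starts at [0.5, 0.5]

# A trader buys 50 YES shares; the cost is the change in C(q).
cost = lmsr_cost([50.0, 0.0], b) - lmsr_cost(q, b)
print(round(cost, 2))
print([round(p, 3) for p in lmsr_prices([50.0, 0.0], b)])  # YES price rises above 0.5
```

Because the market maker always quotes a price, trades never need a counterparty, which is what makes subsidised, truth-revealing forecasts possible even "without skin in the game" from other traders.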
Critics highlight risks like manipulation or thin liquidity, yet empirical evidence affirms their forecasting prowess across politics, business, and science.1,2,3
References
1. https://corporate.jasoncollins.blog/prediction-markets
2. https://en.wikipedia.org/wiki/Prediction_market
3. https://www.metrotrade.com/what-is-a-prediction-market/
4. https://corporatefinanceinstitute.com/resources/career-map/sell-side/capital-markets/prediction-market/
5. https://www.greenbook.org/marketing-research/prediction-markets-for-concept-testing-04799
6. https://wifpr.wharton.upenn.edu/blog/a-primer-on-prediction-markets/
7. https://a16zcrypto.com/posts/podcast/prediction-markets-explained/

"It's a reasonable thing to expect the end of disease." - Jensen Huang - Nvidia CEO
This quote comes from Lex Fridman Podcast #494, recorded with Jensen Huang discussing NVIDIA's pivotal role in the AI revolution. At timestamp 02:22:50, Huang remarked: "How can you not be romantic about that? The fact that there is a-it's a reasonable thing to expect the end of disease."1
Context from the Podcast
- Huang highlights AI's transformative power in healthcare, positioning NVIDIA as the engine driving these advancements.
- The conversation emphasizes Huang's leadership, engineering insights, and bold decisions fueling NVIDIA's success.
- Lex Fridman introduces NVIDIA as "one of the most important and influential companies in the history of human civilization."1
Broader Discussion Themes
Huang elaborates on manifesting a compelling future through belief, acknowledging interim suffering but stressing conviction: "You manifest a future and that future is so convincing, there's no way it won't happen."3
The podcast explores AI disruption, AGI, and NVIDIA's $4 trillion valuation amid the AI boom[SOURCE].
Related Concepts
While unrelated to Huang's quote, academic discussions reference "the end of disease" in contexts like positive psychology's impact on health, shifting from disease absence to flourishing well-being2.
Tags: Jensen Huang, Nvidia, Lex Fridman, disruption, AI, artificial intelligence, quote, AGI
References
1. https://lexfridman.com/jensen-huang-transcript/
2. https://pure.rug.nl/ws/portalfiles/portal/99196915/Complete_thesis.pdf
3. https://lexfridman.com/author/lex-fridman/
4. http://www.srpskiarhiv.rs/dotAsset/89044.pdf
"I would love it if the entire world, those eight billion people, could come together and just be hoping and praying for us to get that acquisition of signal and be back in touch with everybody." - Victor Glover - Artemis II pilot
Humanity floats alone in a universe that has remained eerily silent for over half a century. The most famous deliberate signal humanity has sent towards other civilisations left Earth in 1974 from the Arecibo Observatory: a binary-encoded greeting beamed towards the globular cluster M13, 25,000 light-years distant. In the decades since, no confirmed extraterrestrial transmission has reached Earth's radio telescopes, leaving our species in a void of unanswered calls. This cosmic quietude underscores a fundamental tension in space exploration: while missions like NASA's Artemis II push human boundaries, they amplify our yearning for contact beyond our solar system. Victor Glover, Artemis II pilot, voiced this ache for global unity in pursuit of reacquiring lost signals, highlighting how lunar ambitions intersect with the search for extraterrestrial intelligence (SETI)1.
Artemis II: Humanity's Boldest Step Since Apollo
Artemis II represents NASA's most ambitious human spaceflight since Apollo 17 in 1972, designed to send four astronauts (Reid Wiseman, Victor Glover, Christina Koch, and Jeremy Hansen) on a 10-day orbital trajectory around the Moon. Launching no earlier than early 2026 atop the Space Launch System (SLS) rocket with the Orion spacecraft, the crew will venture 400,000 kilometres from Earth, traversing the Moon's far side, where direct communication with Houston ceases. This milestone tests Orion's life support, propulsion, and re-entry systems at lunar distances, paving the way for Artemis III's planned crewed lunar landing near the Moon's south pole1. Glover's role as pilot demands precision navigation through deep space, where seconds of light delay challenge real-time control, mirroring the far greater delays of interstellar communication.
The mission's technical stakes are immense. Orion's European Service Module, powered by solar arrays spanning 47 square metres, must sustain the crew through a free-return trajectory that slingshots around the Moon without landing. Radiation exposure in the Van Allen belts and beyond poses risks untested in human flight since Apollo, with Artemis II validating countermeasures for prolonged deep-space exposure. These feats address strategic imperatives: reasserting U.S. leadership in crewed exploration amid competition from China's Chang'e programme, which returned the first samples from the Moon's far side in 2024 and underpins plans for a crewed Chinese landing by 2030[2].
Glover's Wish: Bridging Lunar Triumph and SETI's Long Silence
Victor Glover's aspiration for eight billion people to unite in hope for signal reacquisition taps into SETI's foundational dream. The field began earnestly in 1960 with Frank Drake's Project Ozma, scanning two Sun-like stars for 1420 MHz hydrogen-line signals and yielding silence. Decades later, the Wow! signal of 1977 (a 72-second burst at 1420 MHz from the direction of Sagittarius) remains the most tantalising anomaly, never repeated despite searches. Glover's words evoke this history, framing Artemis II not merely as a lunar loop but as a symbol of humanity's outward gaze. As a Black Navy pilot and father, Glover embodies NASA's diversity push, his selection in 2013 marking a shift from Apollo-era homogeneity1.
His statement reveals a deeper capability tension: space missions generate vast data streams ripe for SETI repurposing. Apollo-era tapes, declassified in 2009, included ham radio chatter mistaken for anomalies until debunked. Artemis II's high-bandwidth links could scan the lunar vicinity for natural or artificial signals, though primary goals prioritise human safety. Radio astronomy data have long been repurposed for distributed analysis, as SETI@home did with Arecibo observations until the project was paused in 2020, and NASA's Deep Space Network (DSN) stands ready to process Orion's telemetry for serendipitous detections[3].
Technological Tensions in Pursuit of Alien Signals
Reacquiring a signal demands overcoming astronomical hurdles. Interstellar distances impose delays measured in millennia: the Arecibo message will not even reach M13 until roughly AD 27,000, and any reply could not arrive back at Earth before roughly AD 52,000. Narrowband signals, hallmarks of intelligence, drown in cosmic noise from pulsars, quasars, and our own megawatt radars. Modern SETI leverages machine learning: Breakthrough Listen, scanning one million stars since 2015, employs AI to sift petabytes from the Green Bank and Parkes telescopes, identifying candidates like BLC1 in 2019 (later attributed to human interference)[4].
Artemis II amplifies these tensions. Flying beyond low-Earth orbit exposes Orion to unfiltered cosmic rays, potentially disrupting electronics sensitive to SETI frequencies. Yet the mission's position offers a vantage point: lunar orbit provides a stable platform free of Earth's ionospheric interference. Strategic debates rage over dual use: should NASA divert Artemis resources to SETI, or focus on Mars pathways? Critics argue lunar flybys distract from robotic precursors such as the VIPER rover, designed to map lunar water ice[5]. Proponents counter that human presence inspires global investment, echoing Glover's call for unified hope.
Debates and Objections: Fermi Paradox and Existential Risks
The silence Glover yearns to break fuels the Fermi Paradox: where are they? Enrico Fermi's 1950 query highlights the discrepancy between the apparent likelihood of extraterrestrial life (optimistic Drake Equation estimates imply many communicating civilisations) and the complete absence of evidence. Objections abound. The Rare Earth hypothesis posits Earth-like worlds as statistical freaks, requiring plate tectonics, large moons, and Jupiter-like shields[6]. Great Filter theories suggest civilisations self-destruct via nuclear war, AI, or climate collapse before signalling.
SETI sceptics like Frank Tipler decry funding diversion from terrestrial crises, estimating detection odds below 10^-9 per star[7]. Optimists, including Jill Tarter, advocate persistence; the Allen Telescope Array continues 24/7 monitoring. Glover's plea counters this cynicism, positing collective hope as a psychological amplifier. Such unity could also mitigate space-race geopolitics, in which Russia-Artemis tensions and India's Chandrayaan-3 success (the 2023 south pole landing) fragment efforts[8]. Objections to anthropocentrism persist: signals might use optical lasers or neutrinos, evading radio searches.
Strategic Implications for Space Policy and Global Unity
Glover's vision challenges fragmented space agendas. The Artemis Accords, signed by more than 40 nations, promote lunar norms but exclude the rival China-Russia station. Unified hope around SETI could transcend pacts, fostering goodwill. Technologically, it spotlights private-sector surges: SpaceX's Starship, eyeing 2026 lunar refuelling, dwarfs SLS in thrust; Blue Origin's New Glenn competes for Artemis V[9]. These dynamics pressure NASA: Artemis II's success hinges on flawless execution amid delays caused by heat shield anomalies.
Market implications ripple outward. SETI technology spin-offs, such as AI signal processing, bolster defence and telecoms. Global unity around signal searches could mobilise crowdfunding, akin to the Planetary Society's LightSail. Strategically, reacquisition would reframe humanity: from isolated tribe to galactic participant, spurring investment in giant instruments like China's 500-metre FAST telescope or the lunar far-side arrays planned for the 2030s[10].
Why Pursuit Matters: Inspiration Amid Uncertainty
The quest Glover champions matters because it confronts existential aloneness. In a 2026 world grappling with AI risks and climate tipping points, cosmic perspective humbles hubris. Artemis II, by humanising deep space, reignites Apollo magic: the 1969 landing drew 650 million viewers, galvanising STEM[11]. Success could swell NASA's $25 billion budget, funding SETI revivals like NASA's anticipated 2028 Pathfinder.
Ultimately, the silence tests resilience. Whether signals arrive or not, the striving itself, eight billion voices in hope, affirms our capacity for wonder. Artemis II, looping the Moon, embodies this: not an endpoint, but a launchpad for the stars. Glover's words remind us that exploration thrives on shared dreams, turning technological tension into transcendent purpose.
References
1. Artemis II: Inside the Moon mission to fly humans further than ever. BBC News. https://www.bbc.co.uk/news/resources/idt-86aafe5a-17e2-479c-9e12-3a7a41e10e9e
2. China's Lunar Exploration Program. CNSA. 2025 update.
3. SETI@home Legacy. UC Berkeley.
4. Breakthrough Listen BLC1 Analysis. Nature, 2021.
5. VIPER Mission Overview. NASA, 2024.
6. Ward & Brownlee, Rare Earth, 2000.
7. Tipler, Extraterrestrial Beings Do Not Exist, 1980.
8. Chandrayaan-3 Success. ISRO, 2023.
9. Starship Lunar Lander Contract. NASA, 2021.
10. FAST Telescope. CAS, 2020.
11. Apollo 11 Viewership Data. Nielsen Archives.

"Venture capital (VC) is private funding provided to high-potential, early-stage startups and emerging companies in exchange for an equity stake, aiming for significant growth and returns, often accompanied by mentorship and expertise beyond just capital." - Venture Capital (VC)
Venture capital represents a distinctive form of private equity financing in which investors or investment funds provide capital to early-stage and emerging companies demonstrating high growth potential, in exchange for an equity stake in the business.1,3 Unlike traditional bank lending, which relies on collateral and fixed repayment schedules, venture capital operates on a fundamentally different principle: investors accept significant risk in pursuit of substantial returns, whilst founders retain access to expertise, networks, and strategic guidance that often prove as valuable as the capital itself.1
Core Characteristics and Structure
Venture capital investments are characterised by several defining features that distinguish them from conventional financing. The investments are illiquid, meaning capital remains locked into portfolio companies for extended periods rather than being readily convertible to cash.2 Venture capitalists typically maintain a long-term investment horizon, recognising that startups often operate at a loss for years before achieving profitability.2 This contrasts sharply with traditional lending, where focus centres on stable cash flows and lower risk.1
The venture capital model embraces a high-risk, high-reward framework.1 Venture capitalists acknowledge that a portion of their investments will inevitably fail, but structure their portfolios to balance these losses against gains from successful companies that may return ten times or more the initial investment.6 This portfolio approach allows individual failures to be offset by exceptional successes.
Structurally, venture capital funds typically operate as partnerships.2 The venture capital firm and its principals serve as general partners, whilst investors (including pension funds, university endowments, insurance companies, and wealthy individuals) function as limited partners with passive investment roles.2 Limited partners contribute capital but exercise minimal day-to-day control, with the general partners retaining management authority and receiving approximately 20% of profits, whilst the remaining 80% is distributed pro rata amongst limited partners.2
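The 20/80 profit split described above can be illustrated with a simplified calculation. The fund figures and LP names are hypothetical, and real fund waterfalls add hurdle rates, clawbacks, and management fees:

```python
def distribute_profits(total_profit, lp_commitments, carry=0.20):
    """Split fund profits: the GP takes `carry` (carried interest),
    and LPs share the remainder pro rata by committed capital.
    Simplified: no hurdle rate, clawback, or management fees."""
    gp_share = total_profit * carry
    lp_pool = total_profit - gp_share
    total_committed = sum(lp_commitments.values())
    lp_shares = {name: lp_pool * c / total_committed
                 for name, c in lp_commitments.items()}
    return gp_share, lp_shares

# Hypothetical fund: $100m of profit, three LPs with different commitments.
gp, lps = distribute_profits(100_000_000,
                             {"pension": 50_000_000,
                              "endowment": 30_000_000,
                              "insurer": 20_000_000})
print(gp)               # 20% carry to the general partner: $20m
print(lps["pension"])   # 50% of the remaining $80m pool: $40m
```

The pro-rata step is what keeps limited partners passive: their return depends only on committed capital, while the general partners' carry rewards the active management decisions.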
Investment Stages and Process
Venture capital operates across multiple funding stages. Pre-seed stage capital assists entrepreneurs in developing initial concepts, often through business incubators and accelerators that connect founders with venture networks.2 Subsequent rounds-seed, Series A, B, and beyond-provide progressively larger capital injections as companies demonstrate traction and growth potential.2
Venture capitalists engage in rigorous assessment of potential investments, evaluating companies based on leadership quality, market opportunity, and scalability potential.4 In exchange for funding, VCs receive not merely equity ownership but also significant control over company decisions.3 This involvement extends beyond passive shareholding; venture capitalists typically bring managerial and technical expertise, actively participating in strategic decisions and governance.3
Target Companies and Industries
Venture capital targets companies operating in innovative sectors experiencing rapid change and disruption potential, particularly technology, biotechnology, and consumer products.1 These ventures are characterised by limited operating history, insufficient scale for public market access, and inability to secure traditional bank financing.3 Venture capital proves especially attractive for companies with ambitious growth trajectories that require rapid scaling beyond what conventional financing mechanisms can support.1
The Equity Exchange Principle
The fundamental transaction underlying venture capital differs markedly from debt financing. Rather than receiving loans requiring repayment with interest, founders exchange equity ownership for capital and strategic support.1,2 This arrangement aligns investor and founder interests around company growth, as both parties benefit from successful scaling. However, this structure necessarily involves equity dilution for founders and investor oversight that may constrain operational autonomy.1
Beyond Capital: Value-Added Services
Venture capital's value proposition extends substantially beyond financial injection. Investors provide mentorship, facilitate networking connections, assist in refining product-market fit, and establish strategic alliances.1 For startups, these intangible benefits (credibility, expertise, and access to networks) often prove as transformative as the capital itself.1 This comprehensive support model distinguishes venture capital from traditional lending, where the lender's involvement typically concludes once funds are disbursed.
Risk Characteristics and Investor Profile
Venture capital investors must demonstrate exceptional risk tolerance, recognising that many portfolio companies will fail whilst maintaining conviction in the high-growth potential of selected investments.4 Successful venture capitalists develop sophisticated judgment regarding when to accept or decline risk exposure.4 The investment horizon typically spans many years, as startups require extended periods to mature and generate returns.4
A notable characteristic involves large discrepancies between private and public valuations.2 Early-stage private companies often trade at valuations substantially below what comparable public companies command, reflecting both risk premium and illiquidity discount. This valuation gap creates opportunity for venture investors but also underscores the speculative nature of early-stage investing.
Strategic Theorist: Donald Valentine and the Sequoia Capital Model
Donald Valentine (1932-2019) stands as the preeminent theorist and practitioner whose vision fundamentally shaped modern venture capital philosophy and practice. Valentine's career and intellectual contributions established the conceptual framework that transformed venture capital from opportunistic investing into a systematic, professionalised discipline focused on identifying and nurturing transformative companies.
Valentine founded Sequoia Capital in 1972, establishing what would become one of the world's most influential venture capital firms. His approach revolutionised venture capital practice by introducing rigorous analytical frameworks for company evaluation, emphasising the importance of market size, team quality, and competitive positioning rather than relying on intuition or personal connections alone. Valentine articulated a clear thesis: venture capital should target companies addressing large, growing markets with the potential to achieve dominant market positions and generate exceptional returns.
His relationship to venture capital theory centred on several key principles that remain foundational today. First, Valentine championed the concept of market-driven investing: the conviction that venture capital should focus on companies operating in expanding markets rather than attempting to create demand for marginal innovations. This principle directly informed his most celebrated investment decisions, including early backing of Apple Computer, Atari, and Oracle, all companies addressing nascent but rapidly expanding technology markets.
Second, Valentine elevated the importance of founder and team assessment to paramount significance. He recognised that early-stage company success depended less on detailed business plans than on founder capability, vision, and determination. This insight shifted venture capital practice away from financial projections towards qualitative evaluation of entrepreneurial talent, a methodology that remains standard practice.
Third, Valentine formalised the venture capital fund structure and professionalised limited partner relationships. He demonstrated that venture capital could operate as a repeatable, institutional business model rather than ad-hoc investing by wealthy individuals. This professionalisation attracted institutional capital from pension funds and endowments, transforming venture capital from a niche activity into a major asset class.
Valentine's biographical trajectory illuminates his influence. Born in Yonkers, New York, he studied chemistry at Fordham University before entering the technology industry during its infancy. His early career included roles at Fairchild Semiconductor and National Semiconductor, providing direct exposure to semiconductor industry dynamics and the entrepreneurial ecosystem emerging in Silicon Valley. This operational background distinguished Valentine from purely financial investors; he possessed technical understanding and industry networks that informed his investment judgement.
His founding of Sequoia Capital represented a deliberate departure from existing venture capital practice. Whilst earlier venture investors often operated as individual partners or small syndicates, Valentine established Sequoia as an institutionalised partnership with systematic processes, documented investment criteria, and structured follow-on support for portfolio companies. This model proved extraordinarily successful, generating returns that established Sequoia's reputation and attracted superior deal flow and limited partner capital.
Valentine's intellectual contribution extended to articulating venture capital's role within the broader innovation ecosystem. He argued persuasively that venture capital functioned as a crucial mechanism for translating technological innovation into commercial products and services, channelling capital towards entrepreneurs whose visions exceeded their personal financial resources. This perspective elevated venture capital from mere profit-seeking to a socially valuable function supporting technological progress and economic dynamism.
His investment philosophy emphasised concentrated conviction: the willingness to make substantial bets on companies and founders in whom he possessed high confidence, rather than diversifying thinly across numerous marginal opportunities. This approach reflected confidence in analytical capability and willingness to accept concentrated risk in pursuit of exceptional returns.
Valentine's legacy fundamentally shaped how venture capital operates today. The emphasis on market size, team quality, systematic evaluation, and institutional structure that characterises modern venture capital practice derives substantially from principles he articulated and demonstrated through Sequoia Capital's success. His career demonstrated that venture capital could simultaneously generate exceptional financial returns whilst supporting transformative technological innovation, a duality that continues to motivate venture capital investment.
References
1. https://www.oddo-bhf.com/resources-your-gateway-to-a-wealth-of-knowledge/corporate-finance-resources/venture-capital-definition-opportunities-amp-strategies/
2. https://corporatefinanceinstitute.com/resources/career-map/sell-side/capital-markets/what-is-venture-capital/
3. https://en.wikipedia.org/wiki/Venture_capital
4. https://www.geeksforgeeks.org/finance/venture-capital-funding-characteristics-investment-process-advantages-disadvantages/
5. https://www.growthcapitalventures.co.uk/venture-capital
6. https://stripe.com/resources/more/what-is-venture-capital
7. https://www.alphajwc.com/en/characteristics-of-venture-capital/

|
| |
| |
"Mistakes happen. As a team, the important thing is to recognize it's never an individual's fault - it's the process, the culture, or the infra." - Boris Cherny - Claude Code, Anthropic
Publishing over 500,000 lines of proprietary TypeScript source code to a public npm package represents a critical failure in release pipelines for AI tools like Claude Code1,2,3. This incident stemmed from including an unstripped source map file (cli.js.map) in version 2.1.88, which referenced a 59.8 MB zip archive on Anthropic's Cloudflare R2 bucket, allowing anyone to download and reconstruct the full codebase of roughly 1,900-2,200 files1,2,3,5,8. The exposed material detailed the 'harness' (the agentic software layer that orchestrates Claude's interactions with tools, enforces guardrails, and manages multi-agent coordination) without revealing model weights or customer data1,8.
Anthropic classified this as a 'release packaging issue caused by human error,' not a security breach, attributing it to a shortcut that bypassed safeguards during a rushed upload of internal code instead of the production bundle1,2,5. This occurred just days after another lapse where nearly 3,000 files, including a draft blog on the 'Mythos' or 'Capybara' model with cybersecurity risks, became publicly accessible1. Such errors highlight vulnerabilities in automated build processes for agentic AI products, where the harness code is as valuable as the model itself for replication or reverse-engineering1,8.
Claude Code, Anthropic's flagship CLI tool generating $2.5 billion in annual recurring revenue, powers enterprise adoption through its ability to handle complex coding tasks via AI orchestration5,11. The leaked code unveiled internals like agent loops, persistent memory implementation, 44 feature flags for unreleased features (e.g., always-on AI and a 'tamagotchi pet'), and system prompts, offering competitors insights into Anthropic's edge in agentic workflows5,8,11. In AI development, the harness differentiates products: it instructs the LLM on tool usage, applies safety constraints, and enables 'code operation' at scale, transforming engineers from coders to directors1,6,9.
Rapid iteration defines Anthropic's culture, with teams shipping 49 pull requests in two days using Claude Code paired with Opus 4.5 for nearly 100% of development, shifting from 80% manual in November 2025 to 80% AI-driven by December6. Boris Cherny, Claude Code's head, embodies this: his team programs 'in English,' directing AI like interns while humans handle prompting, customer coordination, and prioritization6,9. Yet this velocity amplifies risks; source maps (debugging artifacts that map minified code back to its original sources) should never reach production, but did here because an exclusion step was bypassed2,5.
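As a rough illustration of the release-gate idea, the sketch below checks a manifest of files staged for publication and flags debugging artifacts before they can ship. The file names and suffix list are assumptions for illustration only, not Anthropic's actual pipeline.

```python
# Minimal release-gate sketch: refuse to publish a package that stages
# debugging artifacts such as source maps. File names are illustrative.

def find_forbidden_artifacts(staged_files, forbidden_suffixes=(".map",)):
    """Return the staged files that should never reach a published package."""
    return [f for f in staged_files if f.endswith(forbidden_suffixes)]

staged = ["package.json", "README.md", "cli.js", "cli.js.map"]
leaks = find_forbidden_artifacts(staged)
print(leaks)  # ['cli.js.map'] -- a CI gate would fail the release here
```

In a real pipeline, a check like this would run against the actual contents of the built package tarball rather than a hand-maintained list, so that a bypassed local exclusion step is still caught before upload.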
The strategic tension lies in balancing AI-accelerated speed with reliability in 'AI-native engineering.' Anthropic's workflow, in which 'Claude writes Claude,' demands flawless infra to sustain 100% AI code generation without entropy buildup from AI hallucinations like over-abstraction or dead code6,9. Leaks erode trust in products relied upon by enterprises for secure coding, especially as Claude Code's harness enforces behavioral guardrails absent in raw LLMs1. Competitors could fork the leaked code, accelerating their agentic tools and commoditizing Anthropic's moat3,8.
Debates rage over culpability: Anthropic insists no breach occurred since no credentials leaked, framing it as procedural oversight1,5. Critics, including cybersecurity experts, argue publishing 512,000 lines publicly qualifies as a breach, enabling mass dissemination via GitHub forks (over 41,500)2,3. Security researcher Chaofan Shou's X post triggered global mirroring within minutes, turning a fixable error into permanent exposure2,5. Ethically, the 'Claude leak fallout' tests norms on handling leaked AI IP: is forking proprietary code innovation or theft?3
Objections to Anthropic's response center on downplaying impact. While no weights leaked, the harness reveals competitive secrets like multi-agent logic and unreleased flags, potentially aiding rivals in building superior agents8,11. A cybersecurity professional noted that technical users could extract further internals, causing more damage than the prior Mythos draft leak1. Internally, this underscores process gaps in high-velocity teams where AI amplifies human shortcuts2.
Cherny's philosophy, that mistakes stem from process, culture, or infrastructure rather than individuals, directly addresses this, promoting collective accountability in AI teams6. In contexts like his, where engineers oversee AI outputting production code at breakneck speed, blaming people risks stifling innovation9. Instead, robust CI/CD pipelines, automated map stripping, and release gates prevent recurrence2. Research on human-AI teams emphasizes shared mental models and coordination; here, AI's role demands infra matching its scale10.
This approach matters amid AI's transformation of software engineering. CEOs like Dario Amodei predict models handling end-to-end development within 6-12 months, yet Cherny counters that engineers remain vital for oversight9,15. Studies show AI teammates can reduce human coordination and productivity, as people anticipate their teammates' actions less and stumble over the AI's 'errors'13. Anthropic's leaks validate this: unchecked velocity breeds slips, but process-focused cultures mitigate via 'AI reviews AI' and team safeguards6.
Broader implications extend to AI deployment challenges. Cross-functional teams blending data scientists, engineers, and domain experts are essential, yet siloed releases enable errors7. The leak, coming shortly after a product update from the $340 billion-valued Anthropic that wiped trillions from stock markets, amplifies scrutiny of infrastructure maturity11. As Claude Code prototypes like 'Clyde' evolve into public tools, hardening release processes becomes paramount12.
Legal fallout looms: proprietary code circulation raises IP claims, though open-source norms blur lines3. Blockchain analyses frame it as a 2026 case study in proprietary AI diffusion3. Anthropic's fixes-rolling measures like stricter packaging-aim to restore confidence, but disseminated code persists1.
Technologically, the harness's exposure demystifies agentic AI. It implements loops for task decomposition, tool calls, and memory persistence, enabling feats like 49 PRs shipped in two days6,8. Unreleased features hint at evolutions: always-on modes could enable real-time coding, while gamified elements like pets boost engagement5,11. This transparency accelerates industry progress, forcing Anthropic to innovate faster.
Culture plays a pivotal role. Cherny's optimism counters 'Slopacolypse' fears (AI entropy from unchecked errors) via self-review loops6. Yet leaks reveal cultural pressures: rushing npm uploads amid soaring adoption bypasses checks1,5. Team-centered AI demands responsiveness, awareness, and flexible planning, per models of interdependent work10. Anthropic's incident stresses investing in these for multi-team systems.
Why this endures as a cautionary tale: AI firms operate at internet speed, where a single map file can leak fortunes in R&D. It matters because Claude Code isn't niche; it's a $2.5B ARR leader reshaping development from keystrokes to prompts5. Process-over-person mindsets, as articulated, foster resilience: infra upgrades post-leak signal learning1.
Debates persist on AI's engineer displacement. Cherny insists pros are 'more important than ever' for strategy, while Amodei eyes full automation9. Leaks humanize the shift: even AI-native teams err, needing human guardrails. Columbia research confirms AI harms team dynamics, underscoring hybrid necessities13.
Strategically, this pressures Anthropic amid rivals. With Mythos looming, exposed harnesses invite cloning, eroding leads1. Yet it catalyzes infra evolution, aligning with Cherny's view: fix the system, not the culprit. In 2026's AI arms race, such resilience defines survivors.
Enterprise trust hinges on this. Firms adopting Claude Code for secure, agentic coding demand leak-proof delivery1. The incident, though contained, spotlights risks in open ecosystems like npm, where devs share billions of packages daily2. Mitigation via build hardening sets precedents.
Ultimately, the event crystallizes tensions in AI scaling: velocity vs. security, AI autonomy vs. oversight, individual slips vs. systemic fixes. Cherny's ethos guides forward: evolve processes to harness AI's power without self-sabotage. As teams like his propel 'programming in English,' fortified infra ensures mistakes fuel progress, not peril.
References
1. Anthropic rushes to limit the leak of Claude Code source code - https://www.moneycontrol.com/news/business/anthropic-rushes-to-limit-the-leak-of-claude-code-source-code-13877238.html
2. Anthropic leaks its own AI coding tool’s source code in second major security breach - 2026-03-31 - https://fortune.com/2026/03/31/anthropic-source-code-claude-code-data-leak-second-security-lapse-days-after-accidentally-revealing-mythos/
3. Anthropic accidentally exposes Claude Code source code - 2026-03-31 - https://www.theregister.com/2026/03/31/anthropic_claude_code_source_code/
4. Claude Leak Fallout: Legal and Ethical Risks (2026) - 2026-04-01 - https://www.blockchain-council.org/claude-ai/claude-leak-fallout-legal-ethical-implications-sharing-leaked-ai-source-code/
5. Anthropic accidentally leaked Claude Code's entire source code - 2026-04-01 - https://www.theneurondaily.com/p/anthropic-accidentally-leaked-claude-code-s-entire-source-code
6. Anthropic Just Leaked Claude Code's Entire Source Code - YouTube - 2026-03-31 - https://www.youtube.com/watch?v=OqG9Lk0rIgs
7. Programming's Demise? Claude Code Father's Bombshell Quotes ... - 2026-02-04 - https://eu.36kr.com/en/p/3668658715829123
8. Overcoming Challenges in AI Deployment - RTS Labs - 2024-11-27 - https://rtslabs.com/challenges-in-ai-deployment
9. Anthropic accidentally leaked Claude Code's source code. Here's ... - 2026-03-31 - https://dev.to/aws-builders/anthropic-accidentally-leaked-claude-codes-source-code-heres-what-that-means-2f89
10. Claude Code creator Boris Cherny says software engineers ... - ITPro - 2026-02-17 - https://www.itpro.com/software/development/claude-code-creator-boris-cherny-says-software-engineers-are-more-important-than-ever-as-ai-transforms-the-profession-but-anthropic-ceo-dario-amodei-still-thinks-full-automation-is-coming
11. [PDF] Human-AI teams—Challenges for a team-centered AI at work - 2023-09-27 - https://www.dfki.de/fileadmin/user_upload/import/14163_20231011_Team-Centered_AI_Paper_2023.pdf
12. $340 billion Anthropic that wiped trillions from stock market ... - 2026-04-01 - https://timesofindia.indiatimes.com/technology/tech-news/340-billion-anthropic-that-wiped-trillions-from-stock-market-worldwide-has-source-code-of-its-most-important-tool-leaked-on-internet/articleshow/129925824.cms
13. AI-Native Engineering: Inside Boris Cherny's Claude Code Workflow - 2026-03-20 - https://medium.programmerscareer.com/ai-native-engineering-inside-boris-chernys-claude-code-workflow-145e140a103f
14. Understanding How AI Affects Team Performance: Challenges and ... - 2023-07-10 - https://business.columbia.edu/insights/business-society/understanding-how-ai-affects-team-performance-challenges-and-insights
15. Anthropic inadvertently leaks source code for Claude Code CLI tool - 2026-03-31 - https://cybernews.com/security/anthropic-claude-code-source-leak/
16. A quote from Boris Cherny - Simon Willison's Weblog - 2026-02-14 - https://simonwillison.net/2026/Feb/14/boris/

|
| |
| |
"Artificial intelligence is reshaping the world. The question is not whether that transformation will happen, but who shapes it and under what conditions." - Eric Schmidt - Former Google CEO
Eric Schmidt's incisive observation captures the essence of a pivotal moment in technological history, where artificial intelligence (AI) is not merely an emerging tool but a transformative force poised to redefine economies, governance, and human endeavour. As former CEO and Executive Chairman of Google, Schmidt brings unparalleled authority to this discussion, drawing from decades at the forefront of digital innovation. His words, shared via LinkedIn, underscore a critical tension: AI's evolution is inevitable, yet its trajectory hinges on deliberate human choices regarding governance, ethics, and strategic control.
Eric Schmidt: Architect of the Digital Age
Born in 1955, Eric Schmidt, the son of an economics professor, rose to become one of Silicon Valley's most influential figures. He earned degrees in electrical engineering from Princeton and computer science from the University of California, Berkeley, before embarking on a career that spanned enterprise software at Sun Microsystems and Novell. In 2001, Schmidt joined Google as CEO during its nascent phase, steering it from a search engine startup to a global tech behemoth now valued in the trillions. Under his leadership until 2011, and as Executive Chairman until 2015, Google pioneered breakthroughs in search algorithms, Android, YouTube, and early AI initiatives like Google Brain3,4.
Post-Google, Schmidt's influence extended into public policy and national security. He chaired the National Security Commission on Artificial Intelligence (NSCAI), advising the US government on maintaining technological supremacy amid geopolitical rivalries, particularly with China. His book The Age of AI: And Our Human Future (co-authored with Henry Kissinger and Daniel Huttenlocher) explores AI's societal implications, advocating balanced advancement. Schmidt has repeatedly warned of AI's dual-edged nature: immense potential for productivity surges (potentially 30% annual increases through agentic AI) but existential risks if unchecked, such as self-improving systems evading human control2,3.
In the context of this quote, Schmidt reflects on AI's maturation into autonomous agents capable of independent research, planning, and inter-agent communication. He envisions a world of 'AI scientists' outnumbering humans, accelerating innovation in fields like drug discovery and climate modelling, yet insists on human 'hands on the plug' to mitigate dangers like unchecked self-improvement1,2. This aligns with his calls for US leadership in the AI race against China, where recent parity in capabilities demands proactive safeguards2.
Leading Theorists on AI Governance and Human-AI Symbiosis
Schmidt's perspective resonates with foundational thinkers who have shaped AI discourse:
- Nick Bostrom: Oxford philosopher and author of Superintelligence (2014), Bostrom popularised concerns over the 'control problem': ensuring superintelligent AI aligns with human values. He argues that AI's orthogonality thesis (intelligence independent of goals) necessitates robust governance to prevent misaligned outcomes, echoing Schmidt's unplugging imperative2.
- Stuart Russell: UC Berkeley professor and co-author of Artificial Intelligence: A Modern Approach, Russell champions 'human-compatible AI', where systems learn and defer to human preferences. His work on inverse reinforcement learning directly informs Schmidt's vision of human judgment amplifying machine cognition1.
- Henry Kissinger: Co-author with Schmidt, the former US Secretary of State highlights AI's geopolitical stakes, likening it to nuclear technology. Their dialogues emphasise international cooperation to democratise benefits while curbing concentration of power3.
- Ray Kurzweil: Google's Director of Engineering and singularity proponent, Kurzweil predicts AI-human merger via exponential growth (Moore's Law extended). While optimistic, he aligns with Schmidt on symbiosis, forecasting infinite context windows enabling collaborative superintelligence1,3.
- Sam Altman and Demis Hassabis: As OpenAI and DeepMind CEOs, they advance agentic AI with chain-of-thought reasoning and reinforcement learning, technologies Schmidt praises for enabling planning and strategy. Yet they share his caution on scaling laws leading to unpredictable autonomy3.
These theorists converge on a consensus: AI as a 'multiplier' for human potential, not a replacement. Schmidt synthesises this into a pragmatic call, arguing that AI shaped under conditions of ethical oversight, interdisciplinary collaboration, and geopolitical vigilance will amplify humanity rather than supplant it1,3.
Broader Implications for Society and Strategy
Schmidt's quote arrives amid accelerating AI milestones: models with test-time compute for dynamic planning, synthetic data generation to overcome scarcity, and non-stationary objectives challenging adaptability3. In enterprise contexts, AI agents are automating business processes, from code generation to scientific discovery, slashing costs and accelerating the pace of innovation3. Yet risks loom: centralised power, opaque decision-making, and the sprint to superintelligence demand frameworks like those Schmidt advocates via NSCAI.
Ultimately, this insight challenges leaders to prioritise human-AI teaming: supercomputers for scale and speed, humans for purpose and prudence. As Schmidt notes, the race is not just technological but societal-who controls the shape of this transformation will define the next era2.
References
1. https://globaladvisors.biz/2025/11/21/quote-dr-eric-schmidt-ex-google-ceo/
2. https://www.foxbusiness.com/technology/former-google-ceo-eric-schmidt-calls-unplugging-ai-systems-when-reach-certain-capability
3. https://singjupost.com/transcript-of-the-ai-revolution-is-underhyped-eric-schmidt/
4. https://www.youtube.com/watch?v=id4YRO7G0wE
5. https://www.exponentialview.co/p/eric-schmidts-ai-prophecy

|
| |
|