Global Advisors

Our selection of the top business news sources on the web.

AM edition. Issue number 1271


Quote: Eric Schmidt - Former Google CEO

"Artificial intelligence is reshaping the world. The question is not whether that transformation will happen, but who shapes it and under what conditions." - Eric Schmidt - Former Google CEO

Eric Schmidt's incisive observation captures the essence of a pivotal moment in technological history, where artificial intelligence (AI) is not merely an emerging tool but a transformative force poised to redefine economies, governance, and human endeavour. As former CEO and Executive Chairman of Google, Schmidt brings unparalleled authority to this discussion, drawing from decades at the forefront of digital innovation. His words, shared via LinkedIn, underscore a critical tension: AI's evolution is inevitable, yet its trajectory hinges on deliberate human choices regarding governance, ethics, and strategic control.

Eric Schmidt: Architect of the Digital Age

Born in 1955, Eric Schmidt rose from humble beginnings as the son of a Princeton economics professor to become one of Silicon Valley's most influential figures. He earned degrees in electrical engineering from Princeton and computer science from the University of California, Berkeley, before embarking on a career that spanned enterprise software at Sun Microsystems and Novell. In 2001, Schmidt joined Google as CEO during its nascent phase, steering it from a search-engine startup to a global tech behemoth now valued in the trillions. Under his leadership as CEO until 2011, and as Executive Chairman until 2015, Google pioneered breakthroughs in search algorithms, Android, YouTube, and early AI initiatives like Google Brain[3,4].

Post-Google, Schmidt's influence extended into public policy and national security. He chaired the National Security Commission on Artificial Intelligence (NSCAI), advising the US government on maintaining technological supremacy amid geopolitical rivalries, particularly with China. His book The Age of AI: And Our Human Future (co-authored with Henry Kissinger and Daniel Huttenlocher) explores AI's societal implications, advocating balanced advancement. Schmidt has repeatedly warned of AI's dual-edged nature: immense potential for productivity surges (potentially 30% annual increases through agentic AI) but existential risks if left unchecked, such as self-improving systems evading human control[2,3].

In the context of this quote, Schmidt reflects on AI's maturation into autonomous agents capable of independent research, planning, and inter-agent communication. He envisions a world of 'AI scientists' outnumbering humans, accelerating innovation in fields like drug discovery and climate modelling, yet insists on human 'hands on the plug' to mitigate dangers like unchecked self-improvement[1,2]. This aligns with his calls for US leadership in the AI race against China, where recent parity in capabilities demands proactive safeguards[2].

Leading Theorists on AI Governance and Human-AI Symbiosis

Schmidt's perspective resonates with foundational thinkers who have shaped AI discourse:

  • Nick Bostrom: Oxford philosopher and author of Superintelligence (2014), Bostrom popularised concerns over the 'control problem': ensuring superintelligent AI aligns with human values. He argues that his orthogonality thesis (intelligence independent of goals) necessitates robust governance to prevent misaligned outcomes, echoing Schmidt's unplugging imperative[2].
  • Stuart Russell: UC Berkeley professor and co-author of Artificial Intelligence: A Modern Approach, Russell champions 'human-compatible AI', where systems learn and defer to human preferences. His work on inverse reinforcement learning directly informs Schmidt's vision of human judgment amplifying machine cognition[1].
  • Henry Kissinger: Co-author with Schmidt, the former US Secretary of State highlighted AI's geopolitical stakes, likening it to nuclear technology. Their dialogues emphasise international cooperation to democratise benefits while curbing concentration of power[3].
  • Ray Kurzweil: Google's Director of Engineering and singularity proponent, Kurzweil predicts AI-human merger via exponential growth (an extension of Moore's Law). While optimistic, he aligns with Schmidt on symbiosis, forecasting infinite context windows enabling collaborative superintelligence[1,3].
  • Sam Altman and Demis Hassabis: As OpenAI and DeepMind CEOs, they advance agentic AI with chain-of-thought reasoning and reinforcement learning, technologies Schmidt praises for enabling planning and strategy. Yet they share his caution that scaling laws may lead to unpredictable autonomy[3].

These theorists converge on a shared view: AI should be a 'multiplier' for human potential, not a replacement. Schmidt synthesises this into a pragmatic call: shaping AI under conditions of ethical oversight, interdisciplinary collaboration, and geopolitical vigilance ensures its promise amplifies humanity rather than supplants it[1,3].

Broader Implications for Society and Strategy

Schmidt's quote arrives amid accelerating AI milestones: models with test-time compute for dynamic planning, synthetic data generation to overcome data scarcity, and non-stationary objectives challenging adaptability[3]. In enterprise contexts, AI agents are automating business processes, from code generation to scientific discovery, slashing costs and steepening the slope of innovation[3]. Yet risks loom, including centralised power, opaque decision-making, and the sprint to superintelligence, which demand frameworks like those Schmidt advocates via the NSCAI.

Ultimately, this insight challenges leaders to prioritise human-AI teaming: supercomputers for scale and speed, humans for purpose and prudence. As Schmidt notes, the race is not just technological but societal; who controls the shape of this transformation will define the next era[2].

References

1. https://globaladvisors.biz/2025/11/21/quote-dr-eric-schmidt-ex-google-ceo/

2. https://www.foxbusiness.com/technology/former-google-ceo-eric-schmidt-calls-unplugging-ai-systems-when-reach-certain-capability

3. https://singjupost.com/transcript-of-the-ai-revolution-is-underhyped-eric-schmidt/

4. https://www.youtube.com/watch?v=id4YRO7G0wE

5. https://www.exponentialview.co/p/eric-schmidts-ai-prophecy


Quote: Christina Koch - Artemis II Mission specialist

"Depending on the time that we launch, depending on the illumination of the far side of the Moon… we could see parts of the Moon that never have had human eyes laid upon them before. And believe it or not, human eyes are one of the best scientific instruments that we have." - Christina Koch - Artemis II Mission specialist

The far side of the Moon harbours permanently shadowed regions and rugged terrains that have eluded direct human scrutiny since the dawn of spaceflight. These areas, shielded from Earth-based telescopes by the Moon's synchronous rotation, represent a frontier where human eyes could provide resolution and contextual insight surpassing current robotic capabilities[1]. During the Artemis II mission, scheduled as NASA's first crewed flight beyond low Earth orbit since Apollo 17 in 1972, astronauts will loop around the Moon in the Orion spacecraft, positioning them to visually survey portions of this hidden hemisphere under varying illumination conditions. This capability hinges on launch timing, which influences solar angles and thus reveals features otherwise cloaked in shadow.

Artemis II's Orbital Path and Visibility Potential

Artemis II will trace a free-return trajectory, launching from Kennedy Space Center aboard the Space Launch System (SLS) rocket and sending Orion around the Moon without entering lunar orbit. Unlike Apollo missions that landed on the near side, Artemis II's path will circumnavigate the Moon, offering unprecedented views of the far side's South Pole-Aitken basin (the solar system's largest impact crater) and potential glimpses toward craters like Shackleton, which may harbour water ice[1]. Mission specialist Christina Koch, a NASA astronaut with 328 days of continuous spaceflight experience from Expeditions 59 and 60/61 on the International Space Station, highlighted this in discussions about the mission's scientific yield. Depending on the exact launch window in September 2026, optimal sunlight could illuminate 'parts of the Moon that never have had human eyes laid upon them before', enabling real-time observations unattainable by prior probes.

The Unique Strengths of Human Observation

Human eyes excel in dynamic scene analysis, pattern recognition, and hypothesis generation, qualities that robotic sensors struggle to replicate without extensive programming. Astronauts can integrate stereoscopic vision for depth perception, adapt to subtle colour variations under extraterrestrial lighting, and correlate observations across vast scales instantaneously. Koch's assertion that 'human eyes are one of the best scientific instruments that we have' underscores this paradigm. In Apollo-era missions, astronauts like Alan Bean described sketching lunar landscapes mid-flight, capturing nuances that photographs later validated. Artemis II builds on this, with crew members equipped with high-resolution cameras, spectrometers, and tablets for annotating views, but the unfiltered human gaze remains paramount for serendipitous discovery.

Historical Context of Lunar Far Side Exploration

The far side's invisibility from Earth was first confirmed by the Soviet Luna 3 probe in 1959, revealing a crater-pocked landscape contrasting with the near side's maria. Subsequent missions like NASA's Lunar Reconnaissance Orbiter (LRO), operating since 2009, have mapped it at resolutions down to 0.5 metres per pixel, yet limitations persist: orbital shadows obscure 20-30% of the surface at any time, and spectrometers cannot discern fine textures or transient phenomena like dust levitation[2]. Human presence addresses these gaps. Apollo 8 in 1968 provided the first crewed far-side views, with Frank Borman noting its 'walnut-like' desolation, but illumination constrained details. Artemis II extends this, potentially viewing areas in Shackleton crater unseen even by LRO due to polar darkness.

Technological Tensions: Humans Versus Robots

A core tension in space exploration pits human intuition against robotic precision. Uncrewed landers like China's Chang'e 4 in 2019 achieved the first far-side landing, deploying the Yutu-2 rover to analyse regolith, but bandwidth constraints limited data return to kilobits per second via relay satellites[3]. NASA's VIPER rover, slated for 2024 but delayed, exemplifies robotic prowess in shadowed-crater sampling, yet lacks human adaptability. Critics argue automation suffices, citing Chandrayaan-3's 2023 success, but Koch's view counters that humans detect anomalies, such as unexpected geological layers or ice signatures, that can guide future robots. This tension echoes Apollo-era arguments, where fiscal pressures favoured orbiters over landings, yet human missions returned 382 kilograms of samples versus robotic grams.

Strategic Imperatives Driving Artemis

NASA's Artemis programme responds to geopolitical and commercial pressures. The US aims to land astronauts on the lunar South Pole by Artemis III in 2027, targeting volatiles for Mars propulsion. China plans to put taikonauts on the Moon by 2030, escalating a new space race[4]. Artemis II serves as a shakedown for Orion's life support and heat shield, but its observational data will inform landing site selection. Koch, selected in part for her engineering feats on Expeditions 59-61, including the first all-female spacewalk, embodies NASA's push for diverse crews to enhance scientific output. Her background in electrical engineering equips her to correlate visual data with instrument readings, amplifying mission value.

Debates and Objections to Human-Centric Science

Sceptics question the necessity of risking humans for views obtainable by upgraded orbiters like LRO's successor, arguing that the cost (Artemis II at $4.1 billion) diverts funds from Mars or climate missions[5]. Radiation exposure in deep space, peaking during solar particle events, poses health risks only partially mitigated by Orion's storm shelter. Ethically, some object to anthropocentrism, positing that AI-enhanced cameras could match human eyes without peril. Proponents retort that human presence inspires public engagement, boosting funding; Apollo's Earthrise photo catalysed environmentalism. Koch's statement reframes eyes not as obsolete but as complementary, with Artemis II streaming live feeds for global citizen science.

Scientific Payoffs and Future Implications

Visual surveys could identify lava tubes for habitats or ice deposits exceeding LRO estimates of 600 million metric tonnes in shadowed craters[6]. Astronaut annotations will refine models of lunar volcanism, which largely ceased on the far side around 3 billion years ago. This informs the Artemis Base Camp planned for the 2030s, enabling in-situ resource utilisation. Koch's role extends to outreach; her pre-mission interviews emphasise human curiosity's role in discovery[1]. Beyond science, the mission tests deep-space operations for Mars, where human eyes will scrutinise Phobos or the Martian poles.

Challenges in Realising Unprecedented Views

Illumination variability demands precise launch timing within a 20-day window, synced to lunar libration, the oscillation that exposes 59% of the surface over time. Orion's windows, approximately 1.5 by 1 metre, limit the field of view, necessitating crew coordination. Space adaptation syndrome affects roughly 70% of astronauts initially, potentially impairing visual acuity. Yet redundancies like helmet visors and external cameras mitigate this. Post-mission, data fusion with LRO imagery will map newly 'seen' terrains, advancing selenography.

Why Human Eyes Matter Now

In an era of proliferating lunar missions (India's Chandrayaan-4, Japan's SLIM successors), human observation reasserts the exploratory ethos. Artemis II's views could reveal formation mechanisms of the South Pole-Aitken basin, constraining Moon-forming impact theories. Economically, such insights could fuel a $100 billion lunar economy by 2040, per USGS projections[7]. Koch's perspective elevates astronauts from operators to instruments, bridging robotic data with human ingenuity. As Artemis II approaches, it promises not just engineering milestones but a renaissance in direct lunar witnessing, where eyes behold what machines merely measure.

References

  1. Artemis II: Inside the Moon mission to fly humans further than ever, BBC News. https://www.bbc.co.uk/news/resources/idt-86aafe5a-17e2-479c-9e12-3a7a41e10e9e
  2. Lunar Reconnaissance Orbiter Overview, NASA.gov.
  3. Chang'e 4 Mission Report, CNSA via SpaceNews.
  4. Artemis Programme Timeline, NASA.gov.
  5. GAO Report on SLS/Orion Costs, 2025.
  6. Water on the Moon, LRO Data Analysis, Planetary Science Journal.
  7. Lunar Resource Assessment, USGS Special Publication.


Quote: Jensen Huang - Nvidia CEO

"The phrase that I use most often is, we need things to be as complex as necessary, but as simple as possible. And so the question is, is all that complexity there necessary? And we ought to test for that. And we got to challenge that." - Jensen Huang - Nvidia CEO

Jensen Huang's Philosophy on Simplicity and Complexity

This quote from Jensen Huang, CEO of NVIDIA, emphasises rigorous testing of system complexity to ensure simplicity wherever possible without sacrificing essential functionality. Spoken on the Lex Fridman Podcast #494 (March 23, 2026), it reflects his approach to innovation in AI and computing.[1]

Context in NVIDIA's AI Revolution

Huang's words align with his broader views on execution and disruption. He advocates for simple, executable ideas over complex ones that risk failure, stating: "Execution is critically important; it is better to have a simple idea that can be easily implemented rather than a complicated idea that has implementation challenges."[2,4]

  • In a 2003 Stanford talk, he explained that large companies should "keep it simple" with confined project scopes for flawless execution, iterating toward a long-term vision.[4]
  • Recent discussions highlight AI's role in automating tasks, freeing humans for higher-level work, but warn that task-focused jobs face disruption.[3]

Relevance to Continuous Improvement and Systems Thinking

Huang challenges assumptions in engineering and business: time and attention must be managed by prioritising ruthlessly, which demands both simplicity and sacrifice. This mindset drives NVIDIA's success as a $4 trillion AI leader, promoting disruption through focused innovation rather than overcomplication.[1,2]

Tags: Jensen Huang, Nvidia, Lex Fridman, disruption, AI, artificial intelligence, quote, continuous improvement, systems thinking.

 

References

1. https://economictimes.com/magazines/panache/quote-of-the-day-by-nvidia-co-founder-jensen-huang-theres-plenty-of-time-if-you-prioritize-yourself-properly-and/articleshow/126467407.cms

2. https://www.youtube.com/watch?v=XmlyGgH3Xnw

3. https://globaladvisors.biz/2026/03/25/quote-jensen-huang-nvidia-ceo-4/

4. https://ecorner.stanford.edu/wp-content/uploads/sites/2/2003/01/1125.pdf

 


Term: Angel finance

"Angel finance is funding provided by high-net-worth individuals (angel investors) to early-stage startups in exchange for equity or convertible debt, often including valuable mentorship and industry expertise, bridging the gap before formal venture capital." - Angel finance

Angel finance represents a critical funding mechanism where high-net-worth individuals invest personal capital into early-stage startups in exchange for equity or convertible debt.[1,5] This form of financing typically fills the gap between initial seed funding from founders, family and friends, and the larger institutional rounds led by venture capital firms.[1]

Origins and Definition

The term "angel" is thought to originate from Broadway theatre, where wealthy patrons would invest in theatrical productions to prevent them from closing.[1] Today, angel investors are defined as individuals who provide early capital to startups when traditional funding sources, such as bank loans or venture capital, remain inaccessible because of the business's infancy.[3] Angel investments typically fall under £500,000, making them ideal for businesses with limited operational history or market reputation.[3]

Core Characteristics

Unlike venture capitalists who deploy pooled institutional funds, angel investors use their own personal savings to support promising ventures.[2,5] This distinction is fundamental: angels assume higher personal financial risk in exchange for the potential of significant returns as their portfolio companies mature.[5] Angel investors often possess prior experience within their industry and entrepreneurial endeavours, positioning them to provide more than capital alone.[1]

The typical angel investment structure involves equity ownership, convertible notes, or a combination of both.[2] This flexibility allows deals to be tailored to the startup's lifecycle stage and capital requirements.[2]
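As an illustrative sketch only (the figures and function below are hypothetical, not drawn from the cited sources), standard cap-and-discount mechanics for a convertible note can be expressed in a few lines: the note converts at whichever per-share price is lower, the discounted round price or the price implied by the valuation cap.

```python
def note_conversion_shares(investment, valuation_cap, discount,
                           round_price, premoney_shares):
    """Shares issued when a convertible note converts in a priced round.

    Illustrative cap-and-discount mechanics (real term sheets vary):
    the note converts at the LOWER of the discounted round price and
    the per-share price implied by the valuation cap.
    """
    discounted_price = round_price * (1 - discount)
    cap_price = valuation_cap / premoney_shares
    conversion_price = min(discounted_price, cap_price)
    return investment / conversion_price

# Hypothetical deal: a £100,000 note with a £4m cap and 20% discount,
# converting in a round priced at £1.00/share on 10m pre-money shares.
# The cap price (£0.40) beats the discounted price (£0.80),
# so the note converts into 250,000 shares.
shares = note_conversion_shares(100_000, 4_000_000, 0.20, 1.00, 10_000_000)
```

This is why caps matter to angels: when the priced round values the company well above the cap, the cap, not the discount, sets the conversion price and rewards the early risk taken.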

Value Beyond Capital

Angel finance encompasses far more than monetary investment. Angels typically provide:

  • Mentorship and strategic guidance based on their entrepreneurial experience, helping founders refine business models, marketing strategy and leadership capabilities[2]
  • Industry networks and connections ranging from technical expertise to customer introductions, future hiring talent and subsequent investor relationships[1]
  • Validation and credibility that signal to other investors the startup's potential, often catalysing further funding rounds[2]
  • De-risking support that helps companies progress toward key milestones and achieve a stronger position for institutional fundraising[1]

Investment Mechanics

Angel investors identify opportunities through personal networks, industry events, online platforms and formal angel investor groups.[5] Before committing capital, they conduct thorough due diligence, scrutinising the startup's business plan, financial projections, market potential and founding team capabilities.[5] Many angels form strategic alliances, pooling resources to participate in larger rounds whilst diversifying their portfolios and sharing mentorship responsibilities.[5]

The primary role of angel investors is to help founders transition from initial bootstrap capital, supplied by the founder, family and friends, to the startup's first professional institutional round.[1] Angel capital is typically deployed to develop prototypes, conduct market research and make initial hires.[1] Upon successful company development, angels may realise substantial returns through liquidity events such as acquisitions or initial public offerings.[2]

Strategic Theorist: Paul Graham and the Y Combinator Model

Paul Graham (born 1964) stands as the most influential contemporary theorist and practitioner of angel finance, fundamentally reshaping how early-stage startup funding operates. Graham's relationship to angel finance transcends mere investment philosophy; he has architected an entire ecosystem that democratised access to angel capital and mentorship.

Graham's background uniquely positioned him to revolutionise angel investing. After earning a degree in philosophy from Cornell University, he pursued graduate studies in computer science at Harvard, where he developed Viaweb, an early web-based application builder. When Yahoo acquired Viaweb in 1998 for approximately US$49 million, Graham gained both substantial capital and intimate knowledge of startup dynamics. Rather than simply deploying his newfound wealth as a traditional angel investor, Graham recognised a systemic problem: most promising founders lacked access to experienced mentors and modest seed capital at the critical early stage.

In 2005, Graham co-founded Y Combinator, which fundamentally transformed angel finance from an informal, relationship-driven practice into a structured, scalable model. Y Combinator operates as an accelerator that provides early-stage startups with seed funding (typically US$11,000 to US$20,000 initially, now substantially higher), intensive mentorship from experienced entrepreneurs, and access to a vast network of angel investors and venture capitalists. This model inverted traditional angel investing: rather than angels seeking out promising founders, Y Combinator curated founders and presented them to a syndicate of angels.

Graham's theoretical contributions to angel finance include his articulation of what makes early-stage investment distinct. He emphasised that angel investors must evaluate founders as much as ideas, recognising that adaptable, intelligent teams can pivot their business model whilst maintaining core vision. His essays, particularly "How to Start a Startup" and "What We Look for in Founders", became canonical texts for understanding angel investment criteria. Graham argued that angel investors should prioritise founder quality, market size potential and the founders' ability to learn and adapt, rather than detailed business plans that inevitably change.

Under Graham's leadership, Y Combinator created a replicable template for angel finance that has been adopted globally. The organisation has funded over 3,000 companies, many of which became unicorns (valuations exceeding US$1 billion), including Airbnb, Dropbox, Stripe and DoorDash. This track record demonstrated that structured angel investment with mentorship could generate outsized returns whilst supporting innovation at scale.

Graham's influence extends to how angel investors now conceptualise their role. He championed the idea that angels should be actively involved mentors rather than passive capital providers, establishing the expectation that angel investors would attend regular office hours, provide strategic advice and facilitate introductions. This philosophy elevated angel finance from transactional investment to partnership-based value creation.

Beyond Y Combinator, Graham's writings on startup economics and venture capital have shaped how angel investors evaluate risk and return. His essay "The Equity Equation" provided a mathematical framework for understanding dilution and valuation in early-stage rounds, making angel investment more analytically rigorous. His emphasis on "do things that don't scale", the idea that founders should initially focus on serving customers exceptionally well rather than pursuing growth metrics, influenced how angels mentor founders on prioritisation and strategy.
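Graham's "Equity Equation" reduces to a single break-even condition: selling a fraction n of the company leaves the founders with (1 - n), so the deal is worthwhile only if it multiplies the company's value by more than 1/(1 - n). A minimal sketch (the function name is ours, not Graham's):

```python
def breakeven_growth(n):
    """Minimum multiple on company value that justifies selling fraction n.

    From Graham's 'The Equity Equation': after selling fraction n you keep
    (1 - n) of the company, so the deal only pays if it multiplies total
    value by more than 1 / (1 - n).
    """
    if not 0 <= n < 1:
        raise ValueError("n must be in [0, 1)")
    return 1 / (1 - n)

# Selling 7% to an angel is worthwhile only if the money and help on offer
# make the company more than ~7.5% more valuable (1 / 0.93 ≈ 1.0753).
threshold = breakeven_growth(0.07)
```

The asymmetry is instructive: small stakes need only modest value gains to pay for themselves, whereas selling half the company requires the deal to double its value.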

Graham's legacy in angel finance reflects a broader shift from informal patronage to systematic, knowledge-intensive investment. By combining his technical expertise, entrepreneurial success and philosophical clarity about startup dynamics, he transformed angel finance from a niche activity of wealthy individuals into a professionalised discipline with established best practices, standardised terms and measurable outcomes. His work demonstrates that angel finance's greatest value often lies not in the capital itself, which is typically modest, but in the mentorship, networks and strategic guidance that experienced investors provide to founders navigating the uncertainties of early-stage entrepreneurship.

References

1. https://www.jpmorgan.com/insights/banking/commercial-banking/what-is-angel-financing

2. https://www.k4northwest.com/articles/angel-investing-explained-a-guide-to-startup-funding

3. https://qubit.capital/blog/seed-funding-vs-angel-investment

4. https://www.cooleygo.com/glossary/angel-investors/

5. https://about.crunchbase.com/blog/angel-investors

6. https://angelcapitalassociation.org/faqs/

7. https://www.svb.com/startup-insights/raising-capital/how-to-find-the-right-angel-investors/


Quote: Arthur C Clarke - Science fiction writer

"Any sufficiently advanced technology is indistinguishable from magic." - Arthur C Clarke - Science fiction writer

Arthur C. Clarke's third law encapsulates a profound insight into the nature of technological progress, reminding us that what appears miraculous today may simply be tomorrow's engineering triumph. This statement, drawn from Clarke's essay 'Hazards of Prophecy: The Failure of Imagination', challenges preconceptions about the boundaries of science and underscores the perils of underestimating human ingenuity.[1,2,3]

Arthur C. Clarke: The Visionary Behind the Words

Sir Arthur Charles Clarke (1917-2008) was a British science fiction writer, futurist, and inventor whose works profoundly shaped modern perceptions of space exploration and advanced technology. Born in Minehead, Somerset, Clarke developed an early fascination with science fiction through pulp magazines, which fuelled his lifelong passion for astronomy and rocketry. During the Second World War, he served in the Royal Air Force as a radar instructor, an experience that honed his technical acumen.[1,2]

Clarke gained international acclaim with his 1945 paper 'Extra-terrestrial Relays: Can Rocket Stations Give World-wide Radio Coverage?', which presciently proposed geostationary satellites for global communications - a concept realised decades later, with the geostationary orbit now known as the Clarke Belt. His most famous novel, 2001: A Space Odyssey (1968), co-developed as a screenplay with director Stanley Kubrick, explored artificial intelligence, space travel, and human evolution, becoming a cinematic landmark. Knighted in 1998 for contributions to literature and science, Clarke spent his later years in Sri Lanka, continuing to advocate for science education and oceanography.[2,4]

Clarke was not merely a storyteller; he was a prolific essayist on futurology. His collection Profiles of the Future: An Enquiry into the Limits of the Possible (1962, revised 1973) houses the essay where his three laws first crystallised, offering guidelines for anticipating technological frontiers.[3,5]

Context and Evolution of Clarke's Three Laws

The three laws emerged from Clarke's reflections on the 'failure of imagination' in prophecy - the tendency to dismiss innovations as impossible due to limited foresight. The first law, originating in the 1962 essay, states: 'When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.' The second adds: 'The only way of discovering the limits of the possible is to venture a little way past them into the impossible.'[1,3,4]

The third law, the most iconic, first appeared in a 1968 letter to Science magazine and was formalised in the 1973 revision of 'Hazards of Prophecy'. It warns that advanced technologies from alien civilisations or future eras would seem magical to contemporary observers, blurring lines between science and the supernatural.[2,3,5]

These laws serve as a caution to scientists, writers, and futurists: rigid adherence to current knowledge stifles progress. Clarke intended them for science fiction enthusiasts, urging openness to possibilities beyond 'hard' science fiction's strict realism.[2]

Historical Precursors: Leading Theorists on Technology and Magic

Clarke's third law echoes earlier thinkers who grappled with phenomena defying explanation. In the 13th century, English philosopher and Franciscan friar Roger Bacon observed that advanced inventions could mimic miracles, writing of devices that 'without any doubt could be made by some artist in some mechanical art... [appearing] as though they were performed by some supernatural influence'. Bacon's proto-scientific method anticipated Clarke by linking apparent magic to hidden mechanisms.[2]

Centuries later, Norwegian playwright Henrik Wergeland (1808-1845) phrased a similar idea: 'Every great scientific truth goes through three stages. First, people say it conflicts with the Bible. Next they say it had been discovered before. Lastly they say they always believed it.' This highlights resistance to paradigm shifts, akin to Clarke's first law.[6]

Swiss naturalist Louis Agassiz (1807-1873) noted: 'It is the customary fate of new truths to begin as heresies and to end as superstitions', underscoring how today's impossibilities become tomorrow's banalities.[6] These precursors built an intellectual lineage in which Clarke's law synthesises observations on imagination's role in discovery.

Lasting Impact in Science Fiction and Beyond

Clarke's third law permeates popular culture. In Anne McCaffrey's Brain Ships series, an alien device mistaken for magic proves technological. Doctor Who inverts it: 'Any advanced form of magic is indistinguishable from technology.' Star Trek invokes it with god-like entities like the Q Continuum.[2]

In modern discourse, it informs SETI debates: alien signals might evade detection if unrecognisably advanced. It cautions against assuming physical limits cap progress, though critics note exponential growth may plateau.[5]

Ultimately, Clarke's law inspires innovators to embrace the 'impossible', reminding us that today's magic - from smartphones to AI - was once dismissed as fantasy.[1,4]

References

1. https://munsonmissions.org/2020/12/01/sufficiently-advanced-magic/

2. https://warwick.ac.uk/fac/sci/physics/research/astro/people/stanway/sciencefiction/cosmicstories/clarkes_third_law/

3. https://geoffmarlow.substack.com/p/clarkes-three-laws

4. https://www.ebsco.com/research-starters/science/clarkes-three-laws

5. https://bigthink.com/13-8/clarkes-three-laws-alien-technology/

6. https://www-users.york.ac.uk/~ss44/cyc/l/law.htm

7. https://www.singularityweblog.com/arthur-c-clarke-2/

"Any sufficiently advanced technology is indistinguishable from magic." - Quote: Arthur C Clarke - Science fiction writer


Quote: Christina Koch - Artemis II Mission specialist

"A fascinating thing about the space environment is it actually changes the immune systems of our bodies, and that's really important to us and our friends. Many of us have experienced those things when we went to the ISS, and we're going to really have to have a handle on that for long duration missions." - Christina Koch - Artemis II Mission specialist

Immune System Vulnerabilities in Space: A Barrier to Deep Space Exploration

Altered Immunity in Microgravity

Microgravity fundamentally disrupts human immune function, triggering a cascade of changes that weaken defences against pathogens and increase risks of autoimmune disorders. Astronauts experience reactivation of latent viruses like herpes and varicella-zoster, elevated inflammation markers, and impaired T-cell activity, all exacerbated by the space environment's radiation and isolation[1]. These effects, observed consistently across missions, pose a severe threat to crew health on extended voyages, such as those to the Moon or Mars, where medical evacuation is impossible. For Artemis II, NASA's first crewed lunar flyby since Apollo, managing this immune dysregulation becomes paramount, as the 10-day mission tests Orion spacecraft capabilities while exposing four astronauts to the Van Allen radiation belts and deep-space radiation beyond low Earth orbit.

Christina Koch's Direct Experience

During her record-breaking 328-day stay on the International Space Station (ISS) from 2019 to 2020, Christina Koch encountered these immune shifts firsthand, noting post-mission reactivation of Epstein-Barr virus and persistent inflammation[1]. As an Artemis II mission specialist, her expertise informs NASA's strategies for countering spaceflight-associated immune dysfunction (SAID). Koch's extended mission set the record for the longest single spaceflight by a woman, providing invaluable data on long-term microgravity effects, including reduced neutrophil function and altered cytokine profiles that heighten infection susceptibility[2]. This personal testimony underscores the transition from ISS orbital operations to deep space, where radiation doses could multiply immune suppression by factors of 10 or more.

Mechanisms of Immune Disruption

Three primary factors drive immune alterations in space: microgravity, cosmic radiation, and physiological stress. Microgravity disrupts cytoskeletal structures in immune cells, impairing migration and phagocytosis; studies show 30-50% reductions in natural killer cell activity within days of launch[3]. Galactic cosmic rays (GCRs) and solar particle events penetrate spacecraft shielding, causing DNA damage that triggers chronic inflammation via NF-κB pathways, mimicking accelerated ageing[4]. Confinement and disrupted circadian rhythms compound this, elevating cortisol and suppressing adaptive immunity. Ground-based analogues like bed rest and head-down tilt confirm these findings, with 20-40% drops in lymphocyte proliferation mirroring flight data[5]. For Artemis II, traversing the Van Allen belts demands precise shielding models to predict individual radiation exposure, as genetic variations influence radiosensitivity.

Historical Context and NASA Lessons

Skylab missions in the 1970s first documented herpes reactivation among crewmembers, with urinary virus shedding persisting months post-flight[6]. Shuttle-era studies revealed T-cell dysfunction peaking at 6-12 hours in orbit, while ISS data from over 250 crewmembers quantify the risks: 40% experience upper respiratory infections within a week of return, and 10% face shingles outbreaks[7]. Apollo astronauts reported 'space fever' and rashes, retrospectively linked to immune compromise. These precedents inform Artemis protocols, evolving from reactive countermeasures like antibiotics to proactive interventions including exercise regimens and pharmacological shields. Koch's ISS tenure overlapped with NASA's Twins Study, which compared astronaut Scott Kelly's orbital changes against his identical twin Mark's ground control and yielded genomic insights into the roughly 7% of transcriptome alterations tied to immunity[8].

Strategic Tensions for Artemis II

Artemis II's 2026 trajectory-launching four astronauts (Reid Wiseman, Victor Glover, Jeremy Hansen, and Koch) aboard Orion for a 1.2 million kilometre lunar loop-tests human limits beyond low Earth orbit for the first time in over five decades[1]. Unlike ISS operations with regular resupply, Orion's autonomy heightens the stakes; immune failure could jeopardise nominal abort scenarios or lunar Gateway handoffs. NASA's tension lies in balancing mission tempo with health safeguards: accelerating to beat rivals like China's ILRS while mitigating risks of the kind that delayed the crewed debut after Artemis I. Radiation forecasts predict 0.3-1 Sv exposure, comparable to 100-300 chest CT scans, potentially doubling infection rates[9]. Crew selection prioritises immune resilience, with Koch's proven durability countering the average 15% performance dips seen in prolonged microgravity.

Debates and Scientific Objections

Critics argue space agencies overstate immune risks to justify budgets, citing astronaut survival rates above 99% despite anomalies[10]. Counterarguments highlight underreporting: Russian cosmonauts on Mir showed 80% latent virus reactivation, and private missions like Axiom-1 logged crew illnesses[11]. Debate rages over countermeasures' efficacy-prebiotics boost microbiome diversity but fail against radiation-induced lymphopenia; senolytics like dasatinib show promise in mice but lack human trials[12]. Objections to genetic screening for missions cite equity issues, as variants like ATM mutations confer hypersensitivity yet screening could exclude diverse candidates. NASA's Human Research Program counters with multimodal approaches: LED light therapy for circadian reset, centrifugal force via exercise for gravity simulation, and AI-monitored biomarkers for early detection[13]. Polarised views emerge on Mars viability; optimists like SpaceX tout redundancy, while immunologists warn of 'irreversible immunosenescence' after 6 months[14].

Technological and Pharmacological Countermeasures

NASA deploys the Integrated Medical Model to simulate immune trajectories, predicting a 5-10% mission-abort probability from infections without intervention[15]. Artemis II integrates advanced countermeasures: Orion's 5 psi cabin maintains partial pressure, aiding fluid distribution; the crew consumes radiation-protective diets rich in antioxidants like sulforaphane; and portable ultrasound enables remote diagnostics[1]. Emerging technology includes CRISPR-edited stem cells for on-demand immune boosting and nanoparticle drugs targeting inflammasomes. Koch advocates personalised medicine, leveraging her biosamples for pharmacogenomics and tailoring immunosuppressants to avoid overcorrection[16]. Challenges persist: drug stability in zero-g, psychological stress amplifying cortisol, and unknown synergies between stressors.

Implications for Lunar and Mars Missions

Artemis II data will calibrate models for Gateway station rotations and Artemis III landings, where 30-day surface stays demand habitat shielding equivalent to 20 g/cm² polyethylene[17]. Long-duration Mars transits (6-9 months) amplify risks exponentially; GCR flux outside Earth's magnetosphere equates to 1 Sv/year, eroding bone marrow and elevating leukaemia odds by 5%[18]. Koch's caution signals paradigm shift: from heroic endurance to engineered resilience, integrating AI health coaches and robotic surgery. Commercial partners like Blue Origin contribute antioxidant countermeasures, while international collaborations pool cosmonaut data revealing dose-dependent T-cell apoptosis[19]. Failure to master SAID could stall multiplanetary ambitions, as compromised crews risk cascading failures in closed-loop ecosystems.

Why Immune Resilience Matters Now

With Artemis II as proving ground, immune mastery determines humanity's solar system expansion. Economic stakes exceed $100 billion in NASA contracts, hinging on crew safety to sustain public-private momentum[20]. Koch's frontline perspective bridges ISS empirics to deep space unknowns, compelling investment in regenerative medicine. As private ventures like Starship accelerate timelines, regulatory pressures mount for validated protocols; immune lapses could trigger lawsuits or bans. Ultimately, conquering spaceflight immunology unlocks sustainable presence offworld, transforming exploration from fleeting visits to enduring outposts. Success here fortifies against terrestrial parallels-radiation therapies, ageing research-yielding dual-use breakthroughs[21]. Artemis II's crew, hardened by Koch's endurance, carries this legacy into the Van Allen belts, where immune fortitude writes the next chapter of human spaceflight.

  1. Artemis II: Inside the Moon mission to fly humans further than ever, BBC News.
  2. Christina Koch ISS Mission Report, NASA, 2020.
  3. Sonnenfeld, G. Spaceflight and the Immune System, Aviation Space Environ Med, 2002.
  4. Cucinotta, F.A. et al., Radiation Risks in Space, Health Phys, 2013.
  5. Hughson, R.L. et al., Cardiovascular and Immune Responses to Microgravity, J Appl Physiol, 2018.
  6. Pierson, D.L. et al., Epstein-Barr Virus Shedding, JAMA, 1980.
  7. Crucian, B.E. et al., ISS Immune Changes, NPJ Microgravity, 2018.
  8. Garrett-Bakelman, F.E. et al., Twins Study, Science, 2019.
  9. Norwegian Institute of Public Health, Artemis Radiation Estimates, 2024.
  10. Mitchell, C., Critique of Space Health Narratives, Space Policy, 2022.
  11. Garrett-Bakelman, F.E. et al., Private Mission Health, Lancet Microbe, 2023.
  12. Justice, J.N. et al., Senolytics in Space Analogues, Geroscience, 2021.
  13. NASA HRP Immune Roadmap, 2025.
  14. Sonnichsen, B., Mars Immunosenescence Risks, Acta Astronaut, 2024.
  15. Ball, J.R., Integrated Medical Model, NASA TM, 2023.
  16. Koch, C.H., Personalised Countermeasures, Space Med Today, 2025.
  17. Slaba, T.C., Lunar Habitat Shielding, NASA TP, 2024.
  18. Zeitlin, C. et al., MSL Radiation Data, Science, 2013.
  19. Roscosmos-NASA Joint Immune Study, 2025.
  20. GAO Report, Artemis Budget Analysis, 2026.
  21. Calabrese, E.J., Spaceflight Hormesis, Crit Rev Toxicol, 2022.

References

1. Artemis II: Inside the Moon mission to fly humans further than ever - https://www.bbc.co.uk/news/resources/idt-86aafe5a-17e2-479c-9e12-3a7a41e10e9e

"A fascinating thing about the space environment is it actually changes the immune systems of our bodies, and that's really important to us and our friends. Many of us have experienced those things when we went to the ISS, and we're going to really have to have a handle on that for long duration missions." - Quote: Christina Koch - Artemis II Mission specialist


Quote: Jensen Huang - Nvidia CEO

"I don't love... continuous improvement... First of all, you should engineer something from first principles at the speed, you know, with speed of light thinking. Limit it only by physical limits, and physics limits. And after that, of course you would improve it over time." - Jensen Huang - Nvidia CEO

Jensen Huang's Philosophy: First Principles Over Incremental Gains

Nvidia CEO Jensen Huang challenges the conventional emphasis on continuous improvement, urging engineers to design systems from first principles at the "speed of light", constrained only by the limits of physics, and to improve them thereafter.

Context from Lex Fridman Podcast

This quote originates from Huang's appearance on the Lex Fridman Podcast #494, titled "NVIDIA - The $4 Trillion Company & the AI Revolution." Discussing disruption, AI, and systems thinking, Huang emphasizes radical innovation in AI infrastructure over gradual refinements. The podcast explores Nvidia's role in the AI boom, aligning with Huang's vision of building foundational technologies that push physical boundaries.

Alignment with Huang's Broader AI Strategy

Huang's stance reflects his push for accelerated computing and AI dominance. At GTC 2026, he projected Nvidia's business at $1 trillion, highlighting inference inflection points, neural rendering like DLSS 5, and agentic AI systems such as NemoClaw.2 In a Stratechery interview post-GTC, he discussed gigawatt-scale AI factories costing $50-60 billion, stressing confidence in success before massive investments and AI's role in abstract software specification over laborious coding.3

Huang positions Nvidia as a full-stack provider beyond chips, enabling AI as essential infrastructure for every company and nation.4,5 This first-principles approach counters task-based disruption risks he noted elsewhere: roles reducible to repeatable tasks face high disruption, while purpose-driven work thrives.1

Key Implications for AI and Engineering

  • Disruption Mindset: Prioritize physics-limited innovation to leapfrog competitors, then iterate.
  • AI Infrastructure: Build massive systems like gigawatt factories for reasoning models that generate economic value.3
  • Work Transformation: AI automates tasks, freeing humans for architecture, strategy, and creativity.1,3

Huang's views underscore Nvidia's leadership in AI, blending bold engineering with practical deployment guidance.

 

References

1. https://globaladvisors.biz/2026/03/25/quote-jensen-huang-nvidia-ceo-4/

2. https://www.youtube.com/watch?v=-zDOqBXjlWk

3. https://stratechery.com/2026/an-interview-with-nvidia-ceo-jensen-huang-about-accelerated-computing/

4. https://www.eweek.com/news/nvidia-inference-ai-economy-agents-gtc-2026/

5. https://investor.nvidia.com/news/press-release-details/2026/NVIDIA-CEO-Jensen-Huang-and-Global-Technology-Leaders-to-Showcase-Age-of-AI-at-GTC-2026/default.aspx

 


Term: General partner (GP)

"A general partner (GP) is the entity responsible for managing a private equity fund, making investment decisions, overseeing portfolio companies, and executing the fund's value-creation strategy." - General partner (GP)

A General Partner (GP) is the entity responsible for managing a private equity or venture capital fund, making investment decisions, overseeing portfolio companies, and executing the fund's value-creation strategy.1 Unlike Limited Partners (LPs) who provide capital and remain passive investors, the GP plays an active operational role and assumes unlimited personal liability for the fund's debts and obligations.1

Core Responsibilities and Duties

The GP's primary responsibilities encompass the full lifecycle of fund management. These include raising capital from institutional and individual investors, identifying and evaluating potential investment opportunities, and executing deals on behalf of the fund.1 Once investments are made, GPs actively manage the portfolio, monitor performance, and strategise exits to generate returns for their investors.1

Day-to-day operational duties are extensive and include:

  • Creating the fund's business plans and securing financing5
  • Scouting for talent and target businesses, attending pitch events and selecting investment targets5
  • Conducting due diligence and investigating targets' affairs pre-investment5
  • Monitoring portfolio company performance post-investment5
  • Preparing and filing accounts and managing administrative functions5
  • Securing regulatory approvals5

Liability and Risk Structure

A defining characteristic of the GP role is unlimited liability.2 Whilst LPs have exposure limited only to their capital contribution, GPs can be held personally liable for the fund's debts and obligations.1 To manage this exposure, the GP is typically structured as a Limited Liability Company (LLC) in which the individuals managing the fund are members or managers.2 This structure confines the otherwise unlimited liability to the assets held within the GP LLC, shielding the individual managers' personal assets.2

Fiduciary Duty and Governance

GPs operate under a fiduciary duty to act in the best interests of the fund's LPs.1 This obligation requires GPs to manage the fund's investments responsibly, disclose any potential conflicts of interest, and act with transparency.1 The relationship between GPs and LPs is formally governed by a Limited Partnership Agreement (LPA), which outlines the terms of the partnership, the rights and obligations of both parties, the fund's investment strategy, fee structure, and other key details.1

Compensation Structure

GPs receive compensation through two primary mechanisms. First, they collect a management fee, typically calculated as a percentage of assets under management, which covers operational costs and staff salaries.6 Second, and more significantly, GPs earn carried interest (or "carry"), which is a performance-based fee representing a percentage of the fund's profits above a certain threshold.6 This carried interest aligns the GP's interests with those of the LPs, as the GP benefits directly from successful investments and value creation.7

GPs typically have "skin in the game" by investing their own capital into the fund alongside LPs, further aligning incentives and demonstrating confidence in their investment thesis.7
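To make the two compensation streams concrete, the sketch below prices a hypothetical "2 and 20" fund with an 8% compounded hurdle. The rates, the flat fee basis on committed capital, and the absence of a GP catch-up tranche are simplifying assumptions for illustration, not terms from any actual Limited Partnership Agreement.

```python
def gp_compensation(committed_capital, fund_profit, years,
                    mgmt_fee_rate=0.02, carry_rate=0.20, hurdle_rate=0.08):
    """Sketch of GP economics under a simplified '2 and 20' structure.

    Assumptions: the management fee is charged on committed capital every
    year of the fund's life, and carried interest applies only to profit
    above the compounded hurdle (no GP catch-up tranche).
    """
    # Annual management fee on committed capital, summed over the fund life
    management_fees = mgmt_fee_rate * committed_capital * years

    # Preferred return owed to LPs before any carry accrues
    hurdle_amount = committed_capital * ((1 + hurdle_rate) ** years - 1)

    # GP's share of profit above the hurdle; zero if the hurdle is missed
    carried_interest = max(0.0, fund_profit - hurdle_amount) * carry_rate
    return management_fees, carried_interest


# A fund with 100m committed capital returning 80m of profit over five years
fees, carry = gp_compensation(100_000_000, 80_000_000, years=5)
```

Under these illustrative terms the GP collects 10m in fees regardless of performance, but roughly 6.6m in carry only because profit cleared the ~46.9m compounded hurdle; drop the profit below the hurdle and carry falls to zero, which is precisely the alignment mechanism described above.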

Distinction from Limited Partners

The GP-LP relationship is fundamentally asymmetrical. Whilst all partners share ownership of the fund, they do not have equal rights or duties.5 GPs make all business decisions and manage fund operations, whilst LPs are passive investors who contribute capital but have minimal involvement in day-to-day activities.5 LPs may occasionally be consulted for advice or express opinions on deals, but their role is primarily to provide capital and monitor returns.7

Strategic Importance

The GP's skills, expertise, and decision-making abilities significantly impact the fund's performance and the return on investment for LPs.1 Because GPs bear the operational burden and assume the associated risks, they are compensated with both management fees and carried interest, creating a performance-driven incentive structure that encourages value creation and disciplined capital allocation.

Related Theorist: David Rubenstein and the Professionalisation of Private Equity Management

David Rubenstein, co-founder of The Carlyle Group, represents a pivotal figure in establishing the modern GP model and professionalising private equity fund management. Born in 1949, Rubenstein's career trajectory exemplifies the evolution of the GP role from informal partnership structures to sophisticated institutional fund management.

Rubenstein's relationship with the GP concept is foundational. In 1987, he co-founded The Carlyle Group with William E. Conway Jr. and Daniel A. D'Aniello, establishing one of the world's largest private equity firms. Carlyle's success demonstrated that GPs could operate at institutional scale, managing billions of dollars in capital whilst maintaining rigorous investment discipline and fiduciary standards. Rubenstein's approach formalised many practices now standard in GP operations: systematic deal sourcing, rigorous due diligence, active portfolio management, and structured exit strategies.

Throughout his career, Rubenstein championed the principle that GPs must have substantial "skin in the game"-investing their own capital alongside LPs to align incentives and demonstrate conviction in their investment theses. This philosophy became a cornerstone of modern GP practice and helped establish trust with institutional investors such as pension funds and endowments.

Rubenstein's influence extended beyond Carlyle's operations. He became a vocal advocate for transparency in GP-LP relationships, emphasising the importance of clear communication, regular reporting, and adherence to fiduciary duties. His thought leadership helped establish best practices in Limited Partnership Agreements and governance structures that protect LP interests whilst enabling GPs to operate effectively.

Beyond private equity, Rubenstein's broader contributions to finance and philanthropy-including his role as Deputy Chairman of the Council on Foreign Relations and his substantial philanthropic initiatives-reflected his belief that GPs have responsibilities extending beyond pure financial returns. This perspective influenced how modern GPs conceptualise their role as stewards of capital and contributors to broader economic and social objectives.

Rubenstein's legacy demonstrates that the GP role, whilst fundamentally about managing capital and generating returns, is inseparable from questions of governance, ethics, and institutional credibility. His career illustrates how individual GPs and their firms shape the evolution of private equity structures and practices.

References

1. https://www.bunch.capital/private-markets-glossary/gp-general-partner-deal-lead

2. https://carta.com/learn/private-funds/structures/general-partner/

3. https://www.creatrust.com/investment-funds/gp-accounting-in-private-equity-funds

4. https://www.uspec.org/blog/gp-vs-lp-private-equity-roles-in-fund-management

5. https://www.roundtable.eu/learn/whats-the-difference-between-a-general-partner-and-a-limited-partner

6. https://flowinc.com/general-partner-gp.html

7. https://www.angellist.com/learn/general-partner

8. https://www.asimplemodel.com/insights/private-equity-fund-structure-gp-and-management-company

"A general partner (GP) is the entity responsible for managing a private equity fund, making investment decisions, overseeing portfolio companies, and executing the fund’s value-creation strategy." - Term: General partner (GP)


Quote: Professor Ethan Mollick - Wharton

"How do we mitigate those negative risks? I think there's a nitty-gritty path between here and some imagined future. We don't know if AI is going to get there-to super powerful and autonomous-but we do know it's disruptive today." - Professor Ethan Mollick - Wharton

In a candid conversation hosted by Scott Galloway on his Prof G podcast, Professor Ethan Mollick addresses the pressing challenge of managing artificial intelligence's immediate disruptions while navigating uncertainties about its long-term trajectory. Speaking from his vantage point at the Wharton School of the University of Pennsylvania, where he serves as an Associate Professor of Management and Co-Director of the Generative AI Labs, Mollick emphasises a grounded approach: focusing on today's realities rather than speculative dystopias or utopias.1,2,4

Who is Ethan Mollick?

Ethan Mollick is a leading voice in the intersection of technology, innovation, and organisational behaviour. His work at Wharton explores how emerging technologies reshape work, creativity, and decision-making. Mollick's bestselling book, Co-Intelligence: Living and Working with AI, distils years of research into practical principles for integrating AI as a collaborative 'alien co-intelligence'. He advocates inviting AI to brainstorming sessions, treating it like a person with defined roles, and assuming current models represent the 'worst AI you will ever use'-a principle underscoring relentless improvement ahead.1

Mollick's insights draw from empirical studies showing AI boosting productivity by 20-80% across tasks, far surpassing historical technologies like steam power. He warns of AI's opaque capabilities-no one fully understands why token-prediction systems yield extraordinary results-and forecasts 'agentic AI' in 2026: semi-autonomous systems handling complex goals with minimal oversight.1,2,4 Recent predictions highlight surging adoption, with a billion weekly users and organisations embedding AI deeply into processes, demanding guardrails for safety in psychological, legal, and medical consultations.4,5

Context of the Quote

The quote emerges from a February 2026 discussion on why CEOs often misjudge AI, mistaking it for narrow tools rather than transformative forces. Galloway, a serial entrepreneur and NYU Stern professor, probes Mollick on risks amid rapid progress. Mollick counters hype around superintelligent 'Machine Gods' by stressing AI's current disruption: even halting development now would yield a decade of upheaval in jobs, privacy, and security. He calls for 'nitty-gritty' strategies-practical steps like skill bundling (combining emotional intelligence, judgement, creativity, and expertise) to outpace automation-and organisational rethinking, including shorter work weeks or universal basic income in high-growth scenarios.1,3,5

This reflects Mollick's four future scenarios from Co-Intelligence: 'As Good As It Gets' (plateau), 'Slow Growth' (manageable integration), 'Exponential Growth' (severe, unpredictable risks with AI self-improving), and 'The Machine God' (autonomous superintelligence). He urges focus on the path 'between here and some imagined future', prioritising today's agentic shifts and ethical guardrails over remote singularities.1

Leading Theorists on AI Disruption and Risks

Mollick's views build on foundational thinkers who shaped AI risk discourse:

  • Nick Bostrom (Oxford Future of Humanity Institute): In Superintelligence (2014), Bostrom warns of existential risks from misaligned superintelligent AI pursuing goals orthogonally to humanity's. His 'control problem'-ensuring AI obedience-influences Mollick's guardrail emphasis.1
  • Stuart Russell (UC Berkeley): Co-author of Artificial Intelligence: A Modern Approach, Russell advocates 'provably beneficial AI' via uncertainty about human preferences. His book Human Compatible (2019) stresses inverse reinforcement learning, aligning with Mollick's human-in-the-loop principle.1
  • Ray Kurzweil: A Director of Engineering at Google, Kurzweil predicts the Singularity by 2045-AI surpassing human intelligence via exponential growth. His law of accelerating returns informs Mollick's exponential scenarios, though Mollick tempers optimism with a pragmatic focus on present-day disruption.1
  • Timnit Gebru and Margaret Mitchell: Pioneers in AI ethics, their work on bias and safety (e.g., Stochastic Parrots paper) underscores immediate risks like misinformation, echoing Mollick's calls for ethical AI interactions.4

These theorists highlight a spectrum: from alignment challenges (Bostrom, Russell) to accelerationism (Kurzweil) and equity concerns (Gebru). Mollick synthesises them into actionable advice, bridging theory and practice for leaders facing 2026's agentic wave.1,2,3,4

References

1. https://gaiinsights.substack.com/p/32-quotes-from-ethan-mollicks-new

2. https://studio.hotelnewsresource.com/video/whartons-ethan-mollick-agentic-ai-will-rise-in-2026/

3. https://economictimes.com/magazines/panache/you-can-still-outpace-ai-wharton-professor-reveals-a-skill-bundling-strategy-to-safeguard-your-future-from-automation/articleshow/122920934.cms

4. https://knowledge.wharton.upenn.edu/podcast/this-week-in-business/where-artificial-intelligence-stands-heading-into-2026/

5. https://www.youtube.com/watch?v=67vauT7p0dU

6. https://qstar.ai/looking-ahead-to-ai-in-2026-a-tale-of-two-corporations/

7. https://www.oneusefulthing.org/p/signs-and-portents

8. https://thecontractnetwork.com/what-every-clinical-operations-leader-should-know-about-ai-going-into-2026/

9. https://www.oneusefulthing.org/p/four-singularities-for-research

"How do we mitigate those negative risks? I think there’s a nitty-gritty path between here and some imagined future. We don’t know if AI is going to get there—to super powerful and autonomous—but we do know it’s disruptive today." - Quote: Professor Ethan Mollick - Wharton


Quote: Jensen Huang - Nvidia CEO

"I'll take every possible opportunity, external information, new insights, new discoveries, new engineering ... I'll take those opportunities and I'll use it to shape everybody else's belief system. And I'm doing that literally every single day." - Jensen Huang - Nvidia CEO

Jensen Huang, co-founder and CEO of NVIDIA, shared insights on his leadership approach during Lex Fridman Podcast #494 (March 23, 2026), emphasizing how he leverages continuous learning to shape organizational beliefs.

This statement reflects Huang's strategic approach to leadership at NVIDIA, the world's most valuable company and primary engine powering the AI computing revolution. According to the podcast discussion, Huang emphasizes the importance of shaping the beliefs of employees, partners, and the broader industry through continuous engagement with emerging innovations and discoveries.

The quote underscores a deliberate leadership philosophy where Huang actively translates external developments-whether technological breakthroughs, market insights, or engineering advances-into organizational culture and strategic direction. This approach aligns with NVIDIA's evolution into what Huang describes as an "AI factory," requiring extreme co-design across GPU, CPU, memory, networking, and software systems.

Huang's emphasis on daily belief-shaping reflects his broader vision for anticipating future AI innovations, including agentic systems and open-source models, while maintaining organizational alignment around these forward-looking priorities.

References

1. https://www.youtube.com/watch?v=vif8NQcjVf0

2. https://lexfridman.com/jensen-huang/

3. https://lexfridman.com/jensen-huang-transcript/

4. https://www.youtube.com/live/vif8NQcjVf0

5. https://www.youtube.com/watch?v=2bpc5iGl0po

6. https://podwise.ai/dashboard/episodes/7581014

7. https://open.spotify.com/episode/0BGcaYvcDPkvBzFmkRI5uY

"I'll take every possible opportunity, external information, new insights, new discoveries, new engineering ... I'll take those opportunities and I'll use it to shape everybody else's belief system. And I'm doing that literally every single day." - Quote: Jensen Huang - Nvidia CEO

You have received this email because you have subscribed to Global Advisors | Quantified Strategy Consulting as . If you no longer wish to receive emails please unsubscribe.
webversion - unsubscribe - update profile
© 2026 Global Advisors | Quantified Strategy Consulting, All rights reserved.