“How do we mitigate those negative risks? I think there’s a nitty-gritty path between here and some imagined future. We don’t know if AI is going to get there – to super powerful and autonomous – but we do know it’s disruptive today.” – Professor Ethan Mollick – Wharton
In a candid conversation hosted by Scott Galloway on his Prof G podcast, Professor Ethan Mollick addresses the pressing challenge of managing artificial intelligence’s immediate disruptions while navigating uncertainties about its long-term trajectory. Speaking from his vantage point at the Wharton School of the University of Pennsylvania, where he serves as an Associate Professor of Management and Co-Director of the Generative AI Labs, Mollick emphasises a grounded approach: focusing on today’s realities rather than speculative dystopias or utopias.1,2,4
Who is Ethan Mollick?
Ethan Mollick is a leading voice at the intersection of technology, innovation, and organisational behaviour. His work at Wharton explores how emerging technologies reshape work, creativity, and decision-making. Mollick’s bestselling book, Co-Intelligence: Living and Working with AI, distils years of research into practical principles for integrating AI as a collaborative ‘alien co-intelligence’. He advocates inviting AI to brainstorming sessions, treating it like a person with defined roles, and assuming current models represent the ‘worst AI you will ever use’ – a principle underscoring the relentless improvement ahead.1
Mollick’s insights draw on empirical studies showing AI boosting productivity by 20–80% across tasks, far surpassing historical technologies like steam power. He warns of AI’s opaque capabilities – no one fully understands why token-prediction systems yield extraordinary results – and forecasts ‘agentic AI’ in 2026: semi-autonomous systems handling complex goals with minimal oversight.1,2,4 Recent predictions highlight surging adoption, with a billion weekly users and organisations embedding AI deeply into processes, demanding guardrails for safety in psychological, legal, and medical consultations.4,5
Context of the Quote
The quote emerges from a February 2026 discussion on why CEOs often misjudge AI, mistaking it for a narrow tool rather than a transformative force. Galloway, a serial entrepreneur and NYU Stern professor, probes Mollick on risks amid rapid progress. Mollick counters hype around superintelligent ‘Machine Gods’ by stressing AI’s current disruption: even halting development now would yield a decade of upheaval in jobs, privacy, and security. He calls for ‘nitty-gritty’ strategies – practical steps like skill bundling (combining emotional intelligence, judgement, creativity, and expertise) to outpace automation – and organisational rethinking, including shorter work weeks or universal basic income in high-growth scenarios.1,3,5
This reflects Mollick’s four future scenarios from Co-Intelligence: ‘As Good As It Gets’ (plateau), ‘Slow Growth’ (manageable integration), ‘Exponential Growth’ (severe, unpredictable risks with AI self-improving), and ‘The Machine God’ (autonomous superintelligence). He urges focus on the path ‘between here and some imagined future’, prioritising today’s agentic shifts and ethical guardrails over remote singularities.1
Leading Theorists on AI Disruption and Risks
Mollick’s views build on foundational thinkers who shaped AI risk discourse:
- Nick Bostrom (Oxford Future of Humanity Institute): In Superintelligence (2014), Bostrom warns of existential risks from misaligned superintelligent AI pursuing goals orthogonal to humanity’s. His ‘control problem’ – ensuring AI obedience – influences Mollick’s emphasis on guardrails.1
- Stuart Russell (UC Berkeley): Co-author of Artificial Intelligence: A Modern Approach, Russell advocates ‘provably beneficial AI’ via uncertainty about human preferences. His book Human Compatible (2019) stresses inverse reinforcement learning, aligning with Mollick’s human-in-the-loop principle.1
- Ray Kurzweil: A Director of Engineering at Google, Kurzweil predicts the Singularity by 2045 – the point at which AI surpasses human intelligence through exponential growth. His law of accelerating returns informs Mollick’s exponential scenarios, though Mollick tempers such optimism with a pragmatic focus on present disruption.1
- Timnit Gebru and Margaret Mitchell: Pioneers in AI ethics whose work on bias and safety (e.g., the ‘Stochastic Parrots’ paper) underscores immediate risks like misinformation, echoing Mollick’s calls for ethical AI interactions.4
These theorists span a spectrum: from alignment challenges (Bostrom, Russell) to accelerationism (Kurzweil) and equity concerns (Gebru, Mitchell). Mollick synthesises them into actionable advice, bridging theory and practice for leaders facing 2026’s agentic wave.1,2,3,4
References
1. https://gaiinsights.substack.com/p/32-quotes-from-ethan-mollicks-new
2. https://studio.hotelnewsresource.com/video/whartons-ethan-mollick-agentic-ai-will-rise-in-2026/
5. https://www.youtube.com/watch?v=67vauT7p0dU
6. https://qstar.ai/looking-ahead-to-ai-in-2026-a-tale-of-two-corporations/
7. https://www.oneusefulthing.org/p/signs-and-portents
9. https://www.oneusefulthing.org/p/four-singularities-for-research