
Global Advisors | Quantified Strategy Consulting

Quote: David Solomon – Goldman Sachs CEO

“If the firm grows and you expand and you can invest in other areas for growth, we’ll wind up with more jobs… we have at every step along the journey for the last forty years as technology has made us more productive. I don’t think it’s different this time [with AI].” – David Solomon – Goldman Sachs CEO

David Michael Solomon, born in 1962 in Hartsdale, New York, is an American investment banker and DJ, currently serving as the CEO and Chairman of Goldman Sachs. His journey into the financial sector began after he graduated with a BA in political science from Hamilton College. Initially, Solomon worked at Irving Trust Company and Drexel Burnham before joining Bear Stearns. In 1999, he moved to Goldman Sachs as a partner and became co-head of the High Yield and Leveraged Loan Business.

Solomon’s rise within Goldman Sachs was swift and strategic. He became the co-head of the Investment Banking Division in 2006 and held this role for a decade. In 2017, he was appointed President and Chief Operating Officer, and by October 2018, he succeeded Lloyd Blankfein as CEO. He became Chairman in January 2019.

Beyond his financial career, Solomon is known for his passion for music, producing electronic dance music under the alias “DJ D-Sol”. He has performed at various venues, including nightclubs and music festivals in New York, Miami, and The Bahamas.

Context of the Quote

The quote highlights Solomon’s perspective on technology and job creation in the financial sector. He suggests that while technology, particularly AI, can enhance productivity and potentially lead to job reductions in certain areas, the overall growth of the firm will create more opportunities for employment. This view is rooted in his experience observing how technological advancements have historically led to increased productivity and growth for Goldman Sachs.

Leading Theorists on AI and Employment

Several leading theorists have explored the impact of AI on employment, with divergent views:

  • Joseph Schumpeter is famous for his theory of “creative destruction,” which suggests that technological innovations often lead to the destruction of existing jobs but also create new ones. This cycle is seen as essential for economic growth and innovation.

  • Klaus Schwab, founder of the World Economic Forum, has discussed the Fourth Industrial Revolution, emphasizing how AI and automation will transform industries. However, he also highlights the potential for new job creation in emerging sectors.

  • Economists Erik Brynjolfsson and Andrew McAfee have written extensively on how technology can lead to both job displacement and creation. They argue that while AI may reduce certain types of jobs, it also fosters economic growth and new opportunities.

These theorists provide a backdrop for understanding Solomon’s optimistic view on AI’s impact on employment, focusing on the potential for growth and innovation to offset job losses.

Conclusion

David Solomon’s quote encapsulates his optimism about the interplay between technology and job creation. Focusing on the strategic growth of Goldman Sachs, he believes that technological advancements will enhance productivity and create opportunities for expansion, ultimately leading to more employment opportunities. This perspective aligns with broader discussions among economists and theorists on the transformative role of AI in the workplace.

Quote: David Solomon – Goldman Sachs CEO

“Markets run in cycles, and whenever we’ve historically had a significant acceleration in a new technology that creates a lot of capital formation and therefore lots of interesting new companies around it, you generally see the market run ahead of the potential. Are there going to be winners and losers? There are going to be winners and losers.” – David Solomon – Goldman Sachs CEO

The quote comes from a public discussion with David Solomon, CEO of Goldman Sachs, during Italian Tech Week in October 2025. The statement was made in the context of a wide-ranging interview that addressed the state of the US and global economy, the impact of fiscal stimulus and technology infrastructure spending, and, critically, the current investment climate surrounding artificial intelligence (AI) and other emergent technologies.

Solomon’s comments were prompted by questions around the record-breaking rallies in US and global equity markets and specifically the extraordinary market capitalisations reached by leading tech firms. He highlighted the familiar historical pattern: periods of market exuberance often occur when new technologies spur rapid capital formation, leading to the emergence of numerous new companies around a transformative theme. Solomon drew parallels with the Dot-com boom to underscore the cyclical nature of markets and to remind investors that dramatic phases of growth inevitably produce both outsized winners and significant casualties.

His insight reflects a seasoned banker’s view, grounded in empirical observation: while technological waves can drive periods of remarkable wealth creation and productivity gains, they also tend to attract speculative excesses. Market valuations in these periods often disconnect from underlying fundamentals, setting the stage for later corrections. The resulting market shake-outs separate enduring companies from those that fail to deliver sustainable value.

About David Solomon

David M. Solomon is one of the most prominent figures in global finance, serving as the CEO and Chairman of Goldman Sachs since 2018. Raised in New York and a graduate of Hamilton College, Solomon has built his reputation over four decades in banking—rising through leadership positions at Irving Trust, Drexel Burnham, and Bear Stearns before joining Goldman Sachs in 1999 as a partner. He subsequently became global head of the Financing Group, then co-head of the Investment Banking Division, playing a central role in shaping the firm’s capital markets strategy.

Solomon is known for his advocacy of organisational modernisation and culture change at Goldman Sachs—prioritising employee well-being, increasing agility, and investing heavily in technology. He combines traditional deal-making acumen with an openness to digital transformation. Beyond banking, Solomon has a notable side-career as a DJ under the name DJ D-Sol, performing electronic dance music at high-profile venues.

Solomon’s career reflects both the conservatism and innovative ambition associated with modern Wall Street leadership: an ability to see risk cycles clearly, and a willingness to pivot business models to suit shifts in technological and regulatory environments. His net worth in 2025 is estimated between $85 million and $200 million, owing to decades of compensation, equity, and investment performance.

Theoretical Foundations: Cycles, Disruptive Innovation, and Market Dynamics

Solomon’s perspective draws implicitly on a lineage of economic theory and market analysis concerning cycles of innovation, capital formation, and asset bubbles. Leading theorists and their contributions include:

  • Joseph Schumpeter: Schumpeter’s theory of creative destruction posited that economic progress is driven by cycles of innovation, where new technologies disrupt existing industries, create new market leaders, and ultimately cause the obsolescence or failure of firms unable to adapt. Schumpeter emphasised how innovation clusters drive periods of rapid growth, investment surges, and, frequently, speculative excess.

  • Carlota Perez: In Technological Revolutions and Financial Capital (2002), Perez advanced a model of techno-economic paradigms, proposing that every major technological revolution (e.g., steam, electricity, information technology) proceeds through phases: an initial installation period—characterised by exuberant capital inflows, speculation, and bubble formation—followed by a recessionary correction, and, eventually, a deployment period, where productive uses of the technology diffuse more broadly, generating deep-seated economic gains and societal transformation. Perez’s work helps contextualise Solomon’s caution about markets running ahead of potential.

  • Charles Kindleberger and Hyman Minsky: Both scholars examined the dynamics of financial bubbles. Kindleberger, in Manias, Panics, and Crashes, and Minsky, through his Financial Instability Hypothesis, described how debt-fuelled euphoria and positive feedback loops of speculation can drive financial markets to overshoot the intrinsic value created by innovation, inevitably resulting in busts.

  • Clayton Christensen: Christensen’s concept of disruptive innovation explains how emergent technologies, initially undervalued by incumbents, can rapidly upend entire industries—creating new winners while displacing former market leaders. His framework helps clarify Solomon’s points about the unpredictability of which companies will ultimately capture value in the current AI wave.

  • Benoit Mandelbrot: Applying his fractal and complexity theory to financial markets, Mandelbrot challenged the notion of equilibrium and randomness in price movement, demonstrating that markets are prone to extreme events—outlier outcomes that, while improbable under standard models, are a recurrent feature of cyclical booms and busts.

Practical Relevance in Today’s Environment

The patterns stressed by Solomon, and their theoretical antecedents, are especially resonant given the current environment: massive capital allocations into AI, cloud infrastructure, and adjacent technologies—a context reminiscent of previous eras where transformative innovations led markets both to moments of extraordinary wealth creation and subsequent corrections. These cycles remain a central lens for investors and business leaders navigating this era of technological acceleration.

By referencing both history and the future, Solomon encapsulates the balance between optimism over the potential of new technology and clear-eyed vigilance about the risks endemic to all periods of market exuberance.

Quote: David Solomon – Goldman Sachs CEO

“AI really allows smart, talented, driven, sophisticated people to be more productive – to touch more people, have better information at their disposal, better analysis.” – David Solomon – Goldman Sachs CEO

David Solomon, CEO of Goldman Sachs, made the statement “AI really allows smart, talented, driven, sophisticated people to be more productive – to touch more people, have better information at their disposal, better analysis” during an interview at Italian Tech Week 2025, reflecting his conviction that artificial intelligence is redefining productivity and impact across professional services and finance.

David Solomon is one of the most influential figures in global finance, serving as Chairman and CEO of Goldman Sachs since 2018. Born in 1962 in Hartsdale, New York, Solomon’s early years were shaped by strong family values, a pursuit of education at Hamilton College, and a keen interest in sport and leadership. Solomon’s ascent in the industry began after stints at Irving Trust and Drexel Burnham, specialising early in commercial paper and junk bonds, then later at Bear Stearns where he played a central role in project financing. In 1999, he joined Goldman Sachs as a partner and quickly rose through the ranks—serving as Global Head of the Financing Group and later Co-Head of the Investment Banking Division for a decade.

His leadership is marked by an emphasis on modernisation, talent development, and integrating technology into the financial sector. Notably, Solomon has overseen increased investments in digital platforms and has reimagined work culture, including reducing working hours and implementing real-time performance review systems. Outside his professional life, Solomon is distinctively known for his passion for music, performing as “DJ D-Sol” at major electronic dance music venues, symbolising a leadership style that blends discipline with creative openness.

Solomon’s remarks on AI at Italian Tech Week are rooted in Goldman Sachs’ major investments in technology: with some 12,000 engineers and cutting-edge AI platforms, Solomon champions the view that technology not only streamlines operational efficiency but fundamentally redefines the reach and ability of talented professionals, providing richer data, deeper insights, and more effective analysis. He frames AI as part of a long continuum—from the days of microfiche and manual records to today’s instant, voice-powered analytics—positioning technology as both a productivity enabler and an engine for growth.

Leading Theorists and Context in AI Productivity

Solomon’s thinking sits at the crossroads of key theoretical advances in artificial intelligence and productivity economics. The transformation he describes draws extensively from foundational theorists and practitioners who have shaped our understanding of AI’s organisational impact:

  • Herbert Simon: A founder of artificial intelligence as a discipline, Simon’s concept of “bounded rationality” highlighted that real-world decision making could be fundamentally reshaped by computational power. Simon envisioned computers extending the limits of human cognition, a concept directly echoed in Solomon’s belief that AI produces leverage for talented professionals.

  • Erik Brynjolfsson: At MIT, Brynjolfsson has argued that AI is a “general purpose technology” like steam power or electricity, capable of diffusing productivity gains across every sector through automation, improved information processing, and new business models. His work clarifies that the impact of AI is not in replacing human value, but augmenting it, making people exponentially more productive.

  • Andrew Ng: As a pioneer in deep learning, Ng has emphasised the role of AI as a productivity tool: automating routine tasks, supporting complex analysis, and dramatically increasing the scale and speed at which decisions can be made. Ng’s teaching at Stanford and public writings focus on making AI accessible as a resource to boost human capability rather than a substitute.

  • Daron Acemoglu: The MIT economist challenges overly optimistic readings, arguing that the net benefits of AI depend on balanced deployment, policy, and organisational adaptation. Acemoglu frames the debate on whether AI will create or eliminate jobs, highlighting the strategic choices organisations must make—a theme Solomon directly addresses in his comments on headcount in banking.

  • Geoffrey Hinton: Widely known as “the godfather of deep learning,” Hinton’s research underpins the practical capabilities of AI systems—particularly in areas such as data analysis and decision support—that Solomon highlights as crucial to productive professional services.


Contemporary Application and Analysis

The productivity gains Solomon identifies are playing out across multiple sectors:

  • In financial services, AI-driven analytics enable deeper risk management, improved deal generation, and scalable client engagement.
  • In asset management and trading, platforms like Goldman Sachs’ own “Assistant” and generative coding tools (e.g., Cognition Labs’ Devin) allow faster, more nuanced analysis and automation.
  • The “power to touch more people” is realised through personalised client service, scalable advisory, and rapid market insight, bridging human expertise and computational capacity.

Solomon’s perspective resonates strongly with current debates on the future of work. While risks—such as AI investment bubbles, regulatory uncertainty, and workforce displacement—are acknowledged, Solomon positions AI as a strategic asset: not a threat to jobs, but a catalyst for organisational expansion and client impact, consistent with the lessons learned through previous technology cycles.

Theoretical Context Table

| Theorist | Core Idea | Relevance to Solomon’s Statement |
| --- | --- | --- |
| Herbert Simon | Bounded rationality, decision support | AI extending cognitive limits and enabling smarter analysis |
| Erik Brynjolfsson | AI as general purpose technology | Productivity gains and diffusion through diverse organisations |
| Andrew Ng | AI augments tasks, boosts human productivity | AI as a tool for scalable information and superior outcomes |
| Daron Acemoglu | Balance of job creation/destruction by technology | Strategic choices in deploying AI impact workforce and growth |
| Geoffrey Hinton | Deep learning, data analysis | Enabling advanced analytics and automation in financial services |

Essential Insights

  • AI’s impact is cumulative and catalytic, empowering professionals to operate at far greater scale and depth than before, as illustrated by Solomon’s personal technological journey—from manual information gathering to instantaneous AI-driven analytics.
  • The quote’s context reflects the practical reality of AI at the world’s leading financial institutions, where technology spend rivals infrastructure, and human-machine synergy is central to strategy.
  • Leading theorists agree: real productivity gains depend on augmenting human capability, strategic deployment, and continual adaptation—principles explicitly recognised in Solomon’s operational philosophy and in global best practice.

Quote: Jamie Dimon – JP Morgan Chase CEO

“Take the Internet bubble. Remember that blew up and I can name 100 companies that were worth $50 billion and disappeared…. So there will be some real big companies, real big success. [AI] will work in spite of the fact that not everyone invested is going to have a great investment return.” – Jamie Dimon, CEO JP Morgan Chase

Jamie Dimon’s observation about artificial intelligence investment echoes his experience witnessing the dot-com bubble’s collapse at the turn of the millennium—a period when he was navigating his own career transition from Citigroup to Bank One. Speaking to Bloomberg in London during October 2025, the JPMorgan Chase chairman drew upon decades of observing technological disruption to contextualise the extraordinary capital deployment currently reshaping the AI landscape. His commentary serves as a measured counterpoint to the euphoria surrounding generative artificial intelligence, reminding investors that transformative technologies invariably produce both spectacular winners and catastrophic losses.

The Speaker: Institutional Banking’s Preeminent Figure

Jamie Dimon has commanded JPMorgan Chase since 2006, transforming it into America’s largest bank by assets whilst establishing himself as Wall Street’s most influential voice. His journey to this position began in 1982 when he joined American Express as an assistant to Sandy Weill, embarking upon what would become one of the most consequential partnerships in American finance. For sixteen years, Dimon and Weill orchestrated a series of acquisitions that built Travelers Group into a financial services colossus, culminating in the 1998 merger with Citicorp to form Citigroup.

The relationship ended abruptly that same year when Weill asked Dimon to resign—a decision Weill later characterised as regrettable to The New York Times. The ouster proved fortuitous. In 2000, Dimon assumed leadership of Bank One, a struggling Chicago-based institution he successfully revitalised. When JPMorgan acquired Bank One in 2004, Dimon became president and chief operating officer before ascending to chief executive two years later. Under his stewardship, JPMorgan’s stock value has tripled, and in 2023 the bank recorded the largest annual profit in US banking history at nearly $50 billion.

Dimon’s leadership during the 2008 financial crisis distinguished him amongst his peers. Whilst competitors collapsed or required government rescue, JPMorgan emerged strengthened, acquiring Bear Stearns and Washington Mutual. He reprised this role during the 2023 regional banking crisis, coordinating an industry response that saw eleven major banks contribute $30 billion to stabilise First Republic Bank. This pattern of crisis management has positioned him as what analyst Mike Mayo termed “a senior statesperson” for the financial industry.

Beyond banking, Dimon maintains substantial political engagement. Having donated over $500,000 to Democratic candidates between 1989 and 2009, he has since adopted a more centrist posture, famously declaring to CNBC in 2019 that “my heart is Democratic, but my brain is kind of Republican”. He served briefly on President Trump’s business advisory council in 2017 and has repeatedly faced speculation about presidential ambitions, confirming in 2016 he would “love to be president” whilst acknowledging the practical obstacles. In 2024, he endorsed Nikki Haley in the Republican primary before speaking positively about Trump following Haley’s defeat.

The Technological Context: AI’s Investment Frenzy

Dimon’s October 2025 remarks addressed the extraordinary capital deployment underway in artificial intelligence infrastructure. His observation that approximately $1 trillion in AI-related spending was occurring “this year” encompasses investments by hyperscalers—the massive cloud computing providers—alongside venture capital flowing to companies like OpenAI, which despite substantial losses continues attracting vast sums. This investment boom has propelled equity markets into their third consecutive year of bull-market conditions, with asset prices reaching elevated levels and credit spreads compressing to historical lows.

At JPMorgan itself, Dimon revealed the bank has maintained systematic AI investment since 2012, allocating $2 billion annually and employing 2,000 specialists dedicated to the technology. The applications span risk management, fraud detection, marketing, customer service, and software development, with approximately 150,000 employees weekly utilising the bank’s internal generative AI tools. Crucially, Dimon reported achieving rough parity between the $2 billion expenditure and measurable benefits—a ratio he characterised as “the tip of the iceberg” given improvements in service quality that resist quantification.

His assessment that AI “will affect jobs” reflects the technology’s capacity to eliminate certain roles whilst enhancing others, though he expressed confidence that successful deployment would generate net employment growth at JPMorgan through retraining and redeployment programmes. This pragmatic stance—neither utopian nor dystopian—typifies Dimon’s approach to technological change: acknowledge disruption candidly whilst emphasising adaptive capacity.

The Dot-Com Parallel: Lessons from Previous Technological Euphoria

Dimon’s reference to the Internet bubble carries particular resonance given his vantage point during that era. In 1998, whilst serving as Citigroup’s president, he witnessed the NASDAQ’s ascent to unsustainable valuations before the March 2000 collapse obliterated trillions in market capitalisation. His claim that he could “name 100 companies that were worth $50 billion and disappeared” speaks to the comprehensive destruction of capital that accompanied the bubble’s deflation. Companies such as Pets.com, Webvan, and eToys became cautionary tales—businesses predicated upon sound concepts executed prematurely or inefficiently, consuming vast investor capital before failing entirely.

Yet from this wreckage emerged the digital economy’s defining enterprises. Google, incorporated in 1998, survived the downturn to become the internet’s primary gateway. Facebook, founded in 2004, built upon infrastructure and lessons from earlier social networking failures. YouTube, established in 2005, capitalised on broadband penetration that earlier video platforms lacked. Dimon’s point—that “there will be some real big companies, real big success” emerging from AI investment despite numerous failures—suggests that capital deployment exceeding economically optimal levels nonetheless catalyses innovation producing enduring value.

This perspective aligns with economic theories recognising that technological revolutions characteristically involve overshoot. The railway boom of the 1840s produced excessive track mileage and widespread bankruptcies, yet established transportation infrastructure enabling subsequent industrialisation. The telecommunications bubble of the late 1990s resulted in overbuilt fibre-optic networks, but this “dark fibre” later supported broadband internet at marginal cost. Dimon’s observation that technological transitions prove “productive” in aggregate “in spite of the fact that not everyone invested is going to have a great investment return” captures this dynamic: society benefits from infrastructure investment even when investors suffer losses.

Schumpeterian Creative Destruction and Technological Transition

Joseph Schumpeter’s concept of creative destruction provides theoretical foundation for understanding the pattern Dimon describes. Writing in Capitalism, Socialism and Democracy (1942), Schumpeter argued that capitalism’s essential characteristic involves “the process of industrial mutation that incessantly revolutionises the economic structure from within, incessantly destroying the old one, incessantly creating a new one.” This process necessarily produces winners and losers—incumbent firms clinging to obsolete business models face displacement by innovators exploiting new technological possibilities.

Schumpeter emphasised that monopolistic competition amongst innovators drives this process, with entrepreneurs pursuing temporary monopoly rents through novel products or processes. The expectation of extraordinary returns attracts excessive capital during technology booms, funding experiments that collectively advance knowledge even when individual ventures fail. This mechanism explains why bubbles, whilst financially destructive, accelerate technological diffusion: the availability of capital enables rapid parallel experimentation impossible under conservative financing regimes.

Clayton Christensen’s theory of disruptive innovation, elaborated in The Innovator’s Dilemma (1997), complements Schumpeter’s framework by explaining why established firms struggle during technological transitions. Christensen observed that incumbent organisations optimise for existing customer needs and established value networks, rendering them structurally incapable of pursuing initially inferior technologies serving different markets. Entrants unburdened by legacy systems and customer relationships therefore capture disruptive innovations’ benefits, whilst incumbents experience declining relevance.

Dimon’s acknowledgement that “there will be jobs that are eliminated” whilst predicting net employment growth at JPMorgan reflects these dynamics. Artificial intelligence constitutes precisely the type of general-purpose technology that Christensen’s framework suggests will restructure work organisation. Routine tasks amenable to codification face automation, requiring workforce adaptation through “retraining and redeployment”—the organisational response Dimon describes JPMorgan implementing.

Investment Cycles and Carlota Pérez’s Technological Surges

Carlota Pérez’s analysis in Technological Revolutions and Financial Capital (2002) offers sophisticated understanding of the boom-bust patterns characterising technological transitions. Pérez identifies a consistent sequence: technological revolutions begin with an “irruption” phase as entrepreneurs exploit new possibilities, followed by a “frenzy” phase when financial capital floods in, creating asset bubbles disconnected from productive capacity. Inevitable crash precipitates a “synergy” phase when surviving innovations diffuse broadly, enabling a “maturity” phase of stable growth until the next technological revolution emerges.

The dot-com bubble exemplified Pérez’s frenzy phase—capital allocated indiscriminately to internet ventures regardless of business fundamentals, producing the NASDAQ’s March 2000 peak before three years of decline. The subsequent synergy phase saw survivors like Amazon and Google achieve dominance whilst countless failures disappeared. Dimon’s reference to “100 companies that were worth $50 billion and disappeared” captures the frenzy phase’s characteristic excess, whilst his citation of “Facebook, YouTube, Google” represents the synergy phase’s enduring value creation.

Applying Pérez’s framework to artificial intelligence suggests current investment levels—the $1 trillion deployment Dimon referenced—may indicate the frenzy phase’s advanced stages. Elevated asset prices, compressed credit spreads, and widespread investor enthusiasm traditionally precede corrections enabling subsequent consolidation. Dimon’s observation that he remains “a long-term optimist” whilst cautioning that “asset prices are high” reflects precisely the ambivalence appropriate during technological transitions’ financial euphoria: confidence in transformative potential tempered by recognition of valuation excess.

Hyman Minsky’s Financial Instability Hypothesis

Hyman Minsky’s financial instability hypothesis, developed throughout the 1960s and 1970s, explains the endogenous generation of financial fragility during stable periods. Minsky identified three financing postures: hedge finance, where cash flows cover debt obligations; speculative finance, where near-term cash flows cover interest but not principal, requiring refinancing; and Ponzi finance, where cash flows prove insufficient even for interest, necessitating asset sales or further borrowing to service debt.

Economic stability encourages migration from hedge toward speculative and ultimately Ponzi finance as actors’ confidence increases. During technological booms, this migration accelerates—investors fund ventures lacking near-term profitability based upon anticipated future cash flows. The dot-com era witnessed classic Ponzi dynamics: companies burning capital quarterly whilst promising eventual dominance justified continued financing. When sentiment shifted, refinancing evaporated, triggering cascading failures.

Dimon’s comment that “not everyone invested is going to have a great investment return” implicitly acknowledges Minskian dynamics. The $1 trillion flowing into AI infrastructure includes substantial speculative and likely Ponzi finance—investments predicated upon anticipated rather than demonstrated cash flows. OpenAI’s losses despite massive valuation exemplify this pattern. Yet Minsky recognised that such dynamics, whilst generating financial instability, also fund innovation exceeding levels conservative finance would support. Society gains from experiments capital discipline would preclude.

Network Effects and Winner-Take-All Dynamics

The persistence of “real big companies, real big success” emerging from technological bubbles reflects network effects characteristic of digital platforms. Economist W. Brian Arthur’s work on increasing returns demonstrated that technologies exhibiting positive feedback—where adoption by some users increases value for others—tend toward monopolistic market structures. Each additional Facebook user enhances the platform’s value to existing users, creating barriers to competitor entry that solidify dominance.

Carl Shapiro and Hal Varian’s Information Rules (1998) systematically analysed information goods’ economics, emphasising that near-zero marginal costs combined with network effects produce natural monopolies in digital markets. This explains why Google commands search, Amazon dominates e-commerce, and Facebook controls social networking despite numerous well-funded competitors emerging during the dot-com boom. Superior execution combined with network effects enabled these firms to achieve sustainable competitive advantage.

Artificial intelligence exhibits similar dynamics. Training large language models requires enormous capital and computational resources, but deploying trained models incurs minimal marginal cost. Firms achieving superior performance attract users whose interactions generate data enabling further improvement—a virtuous cycle competitors struggle to match. Dimon’s prediction of “some real big companies, real big success” suggests he anticipates winner-take-all outcomes wherein a handful of AI leaders capture disproportionate value whilst numerous competitors fail.

Public Policy Implications: Industrial Policy and National Security

During the Bloomberg interview, Dimon addressed the Trump administration’s emerging industrial policy, particularly regarding strategic industries like rare earth minerals and semiconductor manufacturing. His endorsement of government support for MP Materials—a rare earth processor—reveals pragmatic acceptance that national security considerations sometimes warrant departure from pure market principles. This stance reflects growing recognition that adversarial competition with China necessitates maintaining domestic production capacity in strategically critical sectors.

Dani Rodrik’s work on industrial policy emphasises that whilst governments possess poor records selecting specific winners, they can effectively support broad technological capabilities through coordinated investment in infrastructure, research, and human capital. Mariana Mazzucato’s The Entrepreneurial State (2013) documents government’s crucial role funding high-risk innovation underlying commercial technologies—the internet, GPS, touchscreens, and voice recognition all emerged from public research before private commercialisation.

Dimon’s caution that industrial policy must “come with permitting” and avoid “virtue signalling” reflects legitimate concerns about implementation quality. Subsidising industries whilst maintaining regulatory barriers preventing their operation achieves nothing—a pattern frustrating American efforts to onshore manufacturing. His emphasis on “long-term purchase agreements” as perhaps “the most important thing” recognises that guaranteed demand reduces risk more effectively than capital subsidies, enabling private investment that government funding alone cannot catalyse.

Market Conditions and Forward-Looking Concerns

Dimon’s October 2025 assessment of macroeconomic conditions combined optimism about continued expansion with caution regarding inflation risks. His observation that “consumers are still okay” because of employment—”jobs, jobs, jobs”—identifies the crucial variable determining economic trajectory. Consumer spending constitutes approximately 70% of US GDP; sustained employment supports spending even as other indicators suggest vulnerability.

Yet his expression of being “a little more nervous about inflation not coming down like people expect” challenges consensus forecasts anticipating Federal Reserve interest rate cuts totalling 100 basis points over the subsequent twelve months. Government spending—which Dimon characterised as “inflationary”—combined with potential supply-side disruptions from tariffs could reverse disinflationary trends. Should inflation prove stickier than anticipated, the Fed would face constraints limiting monetary accommodation, potentially triggering the 2026 recession Dimon acknowledged “could happen.”

This assessment demonstrates Dimon’s characteristic refusal to offer false certainty. His acknowledgement that forecasts “have almost always been wrong, and the Fed’s been wrong too” reflects epistemic humility appropriate given macroeconomic forecasting’s poor track record. Rather than pretending precision, he emphasises preparedness: “I hope for the best, plan for the worst.” This philosophy explains JPMorgan’s consistent outperformance—maintaining sufficient capital and liquidity to withstand adverse scenarios whilst remaining positioned to exploit opportunities competitors’ distress creates.

Leadership Philosophy and Organisational Adaptation

The interview revealed Dimon’s approach to deploying artificial intelligence throughout JPMorgan’s operations. His emphasis that “every time we meet as a business, we ask, what are you doing that we could do to serve your people?” reflects systematic organisational learning rather than top-down technology imposition. This methodology—engaging managers to identify improvement opportunities rather than mandating specific implementations—enables bottom-up innovation whilst maintaining strategic coherence.

Dimon’s observation that “as managers learn how to do it, they’re asking more questions” captures the iterative process through which organisations absorb disruptive technologies. Initial deployments generate understanding enabling more sophisticated applications, creating momentum as possibilities become apparent. The statistic that 150,000 employees weekly utilise JPMorgan’s internal AI tools suggests successful cultural embedding—technology adoption driven by perceived utility rather than compliance.

This approach contrasts with common patterns wherein organisations acquire technology without changing work practices, yielding disappointing returns. Dimon’s insistence on quantifying benefits—”we have about $2 billion of benefit” matching the $2 billion expenditure—enforces accountability whilst acknowledging that some improvements resist measurement. The admission that quantifying “improved service” proves difficult “but we know” it occurs reflects sophisticated understanding that financial metrics capture only partial value.

Conclusion: Technological Optimism Tempered by Financial Realism

Jamie Dimon’s commentary on artificial intelligence investment synthesises his extensive experience navigating technological and financial disruption. His parallel between current AI enthusiasm and the dot-com bubble serves not as dismissal but as realistic framing—transformative technologies invariably attract excessive capital, generating both spectacular failures and enduring value creation. The challenge involves maintaining strategic commitment whilst avoiding financial overextension, deploying technology systematically whilst preserving adaptability, and pursuing innovation whilst managing risk.

His perspective carries weight because it emerges from demonstrated judgement. Having survived the dot-com collapse, steered JPMorgan through the 2008 crisis, and maintained the bank’s technological competitiveness across two decades, Dimon possesses credibility competitors lack. When he predicts “some real big companies, real big success” whilst cautioning that “not everyone invested is going to have a great investment return,” the statement reflects neither pessimism nor hype but rather accumulated wisdom about how technological revolutions actually unfold—messily, expensively, destructively, and ultimately productively.

Quote: Jamie Dimon – JP Morgan Chase CEO

“People shouldn’t put their head in the sand. [AI] is going to affect jobs. Think of every application, every service you do; you’ll be using … AI – some to enhance it. Some of it will be you doing the same job; you’re doing a better job at it. There will be jobs that are eliminated, but you’re better off being way ahead of the curve.” – Jamie Dimon, CEO JP Morgan Chase

Jamie Dimon delivered these observations on artificial intelligence during an interview with Bloomberg’s Tom Mackenzie in London on 7 October 2025, where he discussed JPMorgan Chase’s decade-long engagement with AI technology and its implications for the financial services sector. His comments reflect both the pragmatic assessment of a chief executive who has committed substantial resources to technological transformation and the broader perspective of someone who has navigated multiple economic cycles throughout his career.

The Context of Dimon’s Statement

JPMorgan Chase has been investing in AI since 2012, well before the recent generative AI explosion captured public attention. The bank now employs 2,000 people dedicated to AI initiatives and spends $2 billion annually on these efforts. This investment has already generated approximately $2 billion in quantifiable benefits, with Dimon characterising this as merely “the tip of the iceberg.” The technology permeates every aspect of the bank’s operations—from risk management and fraud detection to marketing, idea generation and customer service.

What makes Dimon’s warning particularly salient is his acknowledgement that approximately 150,000 JPMorgan employees use the bank’s suite of AI tools weekly. This isn’t theoretical speculation about future disruption; it’s an ongoing transformation within one of the world’s largest financial institutions, with assets of $4.0 trillion. The bank’s approach combines deployment across business functions with what Dimon describes as a cultural shift—managers and leaders are now expected to ask continuously: “What are you doing that we could do to serve your people? Why can’t you do better? What is somebody else doing?”

Dimon’s perspective on job displacement is notably unsentimental whilst remaining constructive. He rejects the notion of ignoring AI’s impact, arguing that every application and service will incorporate the technology. Some roles will be enhanced, allowing employees to perform better; others will be eliminated entirely. His solution centres on anticipatory adaptation rather than reactive crisis management—JPMorgan has established programmes for retraining and redeploying staff. For the bank itself, Dimon envisions more jobs overall if the institution succeeds, though certain functions will inevitably contract.

His historical framing of technological disruption provides important context. Drawing parallels to the internet bubble, Dimon noted that whilst hundreds of companies worth billions collapsed, the period ultimately produced Facebook, YouTube and Google. He applies similar logic to current AI infrastructure spending, which is approaching $1 trillion annually across the sector. There will be “a lot of losers, a lot of winners,” but the aggregate effect will prove productive for the economy.

Jamie Dimon: A Biography

Jamie Dimon has served as Chairman and Chief Executive Officer of JPMorgan Chase since 2006, presiding over its emergence as the leading US bank by domestic assets under management, market capitalisation and publicly traded stock value. Born on 13 March 1956, Dimon’s ascent through American finance has been marked by both remarkable achievements and notable setbacks, culminating in a position where he is widely regarded as the dominant banking executive of his generation.

Dimon earned his bachelor’s degree from Tufts University in 1978 before completing an MBA at Harvard Business School in 1982. His career began with a brief stint as a management consultant at Boston Consulting Group, followed by his entry into American Express, where he worked under the mentorship of Sandy Weill—a relationship that would prove formative. At the age of 30, Dimon was appointed chief financial officer of Commercial Credit, later becoming the firm’s president. This role placed him at the centre of an aggressive acquisition strategy that included purchasing Primerica Corporation in 1988 and The Travelers Corporation in 1993.

From 1990 to 1998, Dimon served as Chief Operating Officer of both Travelers and Smith Barney, eventually becoming Co-Chairman and Co-CEO of the combined brokerage following the 1997 merger of Smith Barney and Salomon Brothers. When Travelers Group merged with Citicorp in 1998 to form Citigroup, Dimon was named president of the newly created financial services giant. However, his tenure proved short-lived; he departed later that year following a conflict with Weill over leadership succession.

This professional setback led to what would become one of the defining chapters of Dimon’s career. In 2000, he was appointed CEO of Bank One, a struggling institution that required substantial turnaround efforts. When JPMorgan Chase merged with Bank One in July 2004, Dimon became president and chief operating officer of the combined entity. He assumed the role of CEO on 1 January 2006, and one year later was named Chairman of the Board.

Under Dimon’s leadership, JPMorgan Chase navigated the 2008 financial crisis with relative success, earning him recognition as one of the few banking chiefs to emerge from the period with an enhanced reputation. As Duff McDonald wrote in his 2009 book “Last Man Standing: The Ascent of Jamie Dimon and JPMorgan Chase,” whilst much of the crisis stemmed from “plain old avarice and bad judgment,” Dimon and JPMorgan Chase “stood apart,” embodying “the values of clarity, consistency, integrity, and courage”.

Not all has been smooth sailing. In May 2012, JPMorgan Chase reported losses of at least $2 billion from trades that Dimon characterised as “flawed, complex, poorly reviewed, poorly executed and poorly monitored”—an episode that became known as the “London Whale” incident and attracted investigations from the Federal Reserve, SEC and FBI. In May 2023, Dimon testified under oath in lawsuits accusing the bank of serving Jeffrey Epstein, the late sex offender who was a client between 1998 and 2013.

Dimon’s political evolution reflects a pragmatic centrism. Having donated more than $500,000 to Democratic candidates between 1989 and 2009 and maintained close ties to the Obama administration, he later distanced himself from strict partisan identification. “My heart is Democratic,” he told CNBC in 2019, “but my brain is kind of Republican.” He primarily identifies as a “capitalist” and a “patriot,” and served on President Donald Trump’s short-lived business advisory council before Trump disbanded it in 2017. Though he confirmed in 2016 that he would “love to be president,” he deemed a campaign “too hard and too late” and ultimately decided against serious consideration of a 2020 run. In 2024, he endorsed Nikki Haley in the Republican primary before speaking more positively about Trump following Haley’s defeat.

As of May 2025, Forbes estimated Dimon’s net worth at $2.5 billion. He serves on the boards of numerous organisations, including the Business Roundtable, Bank Policy Institute and Harvard Business School, whilst also sitting on the executive committee of the Business Council and the Partnership for New York City.

Leading Theorists on AI and Labour Displacement

The question of how artificial intelligence will reshape employment has occupied economists, technologists and social theorists for decades, producing a rich body of work that frames Dimon’s observations within broader academic and policy debates.

John Maynard Keynes introduced the concept of “technological unemployment” in his 1930 essay “Economic Possibilities for our Grandchildren,” arguing that society was “being afflicted with a new disease” caused by “our discovery of means of economising the use of labour outrunning the pace at which we can find new uses for labour.” Keynes predicted this would be a temporary phase, ultimately leading to widespread prosperity and reduced working hours. His framing established the foundation for understanding technological displacement as a transitional phenomenon requiring societal adaptation rather than permanent catastrophe.

Joseph Schumpeter developed the theory of “creative destruction” in his 1942 work “Capitalism, Socialism and Democracy,” arguing that innovation inherently involves the destruction of old economic structures alongside the creation of new ones. Schumpeter viewed this process as the essential fact about capitalism—not merely a side effect but the fundamental engine of economic progress. His work provides the theoretical justification for Dimon’s observation about the internet bubble: widespread failure and waste can coexist with transformative innovation and aggregate productivity gains.

Wassily Leontief, winner of the 1973 Nobel Prize in Economics, warned in 1983 that workers might follow the path of horses, which were displaced en masse by automobile and tractor technology in the early twentieth century. His input-output economic models attempted to trace how automation would ripple through interconnected sectors, suggesting that technological displacement might be more comprehensive than previous episodes. Leontief’s scepticism about labour’s ability to maintain bargaining power against capital in an automated economy presaged contemporary concerns about inequality and the distribution of AI’s benefits.

Erik Brynjolfsson and Andrew McAfee at MIT have produced influential work on digital transformation and employment. Their 2014 book “The Second Machine Age” argued that we are in the early stages of a transformation as profound as the Industrial Revolution, with digital technologies now able to perform cognitive tasks previously reserved for humans. They drew on the concept of “skill-biased technological change” to describe how modern technologies favour workers with higher levels of education and adaptability, potentially exacerbating income inequality. Their subsequent work on machine learning and the modern productivity paradox has explored why measured productivity gains have lagged behind apparent technological advances—a puzzle relevant to Dimon’s observation that some AI benefits are difficult to quantify precisely.

Daron Acemoglu at MIT has challenged technological determinism, arguing that the impact of AI on employment depends crucially on how the technology is designed and deployed. In his 2019 paper “Automation and New Tasks: How Technology Displaces and Reinstates Labor” (co-authored with Pascual Restrepo), Acemoglu distinguished between automation that merely replaces human labour and technologies that create new tasks and roles. He has advocated for “human-centric AI” that augments rather than replaces workers, and has warned that current tax structures and institutional frameworks may be biasing technological development towards excessive automation. His work directly addresses Dimon’s categorisation of AI applications: some will enhance existing jobs, others will eliminate them, and the balance between these outcomes is not predetermined.

Carl Benedikt Frey and Michael Osborne at Oxford produced a widely cited 2013 study estimating that 47 per cent of US jobs were at “high risk” of automation within two decades. Their methodology involved assessing the susceptibility of 702 occupations to computerisation based on nine key bottlenecks, including creative intelligence, social intelligence and perception and manipulation. Whilst their headline figure attracted criticism for potentially overstating the threat—since many jobs contain a mix of automatable and non-automatable tasks—their framework remains influential in assessing which roles face displacement pressure.

Richard Freeman at Harvard has explored the institutional and policy responses required to manage technological transitions, arguing that the distribution of AI’s benefits depends heavily on labour market institutions, educational systems and social policy choices. His work emphasises that historical episodes of technological transformation involved substantial political conflict and institutional adaptation, suggesting that managing AI’s impact will require deliberate policy interventions rather than passive acceptance of market outcomes.

Shoshana Zuboff at Harvard Business School has examined how digital technologies reshape not merely what work is done but how it is monitored, measured and controlled. Her concept of “surveillance capitalism” highlights how data extraction and algorithmic management may fundamentally alter the employment relationship, potentially creating new forms of workplace monitoring and performance pressure even for workers whose jobs are augmented rather than eliminated by AI.

Klaus Schwab, founder of the World Economic Forum, has framed current technological change as the “Fourth Industrial Revolution,” characterised by the fusion of technologies blurring lines between physical, digital and biological spheres. His 2016 book of the same name argues that the speed, scope and systems impact of this transformation distinguish it from previous industrial revolutions, requiring unprecedented coordination between governments, businesses and civil society.

The academic consensus, insofar as one exists, suggests that AI will indeed transform employment substantially, but that the nature and distributional consequences of this transformation remain contested and dependent on institutional choices. Dimon’s advice to avoid “putting your head in the sand” and to stay “way ahead of the curve” aligns with this literature’s emphasis on anticipatory adaptation. His commitment to retraining and redeployment echoes the policy prescriptions of economists who argue that managing technological transitions requires active human capital investment rather than passive acceptance of labour market disruption.

What distinguishes Dimon’s perspective is his position as a practitioner implementing these technologies at scale within a major institution. Whilst theorists debate aggregate employment effects and optimal policy responses, Dimon confronts the granular realities of deployment: which specific functions can be augmented versus automated, how managers adapt their decision-making processes, what training programmes prove effective, and how to balance efficiency gains against workforce morale and capability retention. His assertion that JPMorgan has achieved approximately $2 billion in quantifiable benefits from $2 billion in annual AI spending—whilst acknowledging additional unquantifiable improvements—provides an empirical data point for theories about AI’s productivity impact.

The ten-year timeframe of JPMorgan’s AI journey also matters. Dimon’s observation that “people think it’s a new thing” but that the bank has been pursuing AI since 2012 challenges narratives of sudden disruption, instead suggesting a more gradual but accelerating transformation. This accords with Brynjolfsson and McAfee’s argument about the “productivity J-curve”—that the full economic benefits of transformative technologies often arrive with substantial lag as organisations learn to reconfigure processes and business models around new capabilities.

Ultimately, Dimon’s warning about job displacement, combined with his emphasis on staying ahead of the curve through retraining and redeployment, reflects a synthesis of Schumpeterian creative destruction, human capital theory, and practical experience managing technological change within a complex organisation. His perspective acknowledges both the inevitability of disruption and the possibility of managing transitions to benefit both institutions and workers—provided leadership acts proactively rather than reactively. For financial services professionals and business leaders more broadly, Dimon’s message is clear: AI’s impact on employment is neither hypothetical nor distant, but rather an ongoing transformation requiring immediate and sustained attention.

Quote: Yann LeCun – Chief AI Scientist at Meta

“Before we reach human-level AI, we will have to reach cat-level AI and dog-level AI.” – Yann LeCun – Chief AI Scientist at Meta

Yann LeCun, a pioneering figure in artificial intelligence, is globally recognized for his foundational contributions to deep learning and neural networks. As the Chief AI Scientist at Meta (formerly Facebook) and a Silver Professor at New York University’s Courant Institute, LeCun has been instrumental in advancing technologies that underlie today’s AI systems, including convolutional neural networks (CNNs), which are now fundamental to image and pattern recognition in both industry and research.

LeCun’s journey in AI began in the late 1980s, when much of the scientific community considered neural networks to be a dead end. Undeterred, LeCun, alongside peers such as Geoffrey Hinton and Yoshua Bengio, continued to develop these models, ultimately proving their immense value. His early successes included developing neural networks capable of recognizing handwritten characters—a technology that became widely used by banks for automated check reading by the late 1990s. This unwavering commitment to neural networks earned LeCun, Hinton, and Bengio the 2018 Turing Award, often dubbed the “Nobel Prize of Computing”, and solidified their standing as the “Godfathers of AI”.

The quote, “Before we reach human-level AI, we will have to reach cat-level AI and dog-level AI,” encapsulates LeCun’s pragmatic approach to artificial intelligence. He emphasizes that replicating the full suite of human cognitive abilities is a long-term goal—one that cannot be achieved without first creating machines that can perceive, interpret, and interact with the world with the flexibility, intuition, and sensory-motor integration seen in animals like cats and dogs. Unlike current AI, which excels in narrow, well-defined tasks, a cat or a dog can navigate complex, uncertain environments, learn from limited experience, and adapt fluidly—capabilities that still elude artificial agents. LeCun’s perspective highlights the importance of incremental progress in AI: only by mastering the subtleties of animal intelligence can we aspire to build machines that match or surpass human cognition.

LeCun’s work continues to shape how researchers and industry leaders think about the future of AI—not as an overnight leap to artificial general intelligence, but as a gradual journey through, and beyond, the marvels of natural intelligence found throughout the animal kingdom.

Term: AI Inference

AI inference refers to the process in which a trained artificial intelligence (AI) or machine learning model analyzes new, unseen data to make predictions or decisions. After a model undergoes training—learning patterns, relationships, or rules from labeled datasets—it enters the inference phase, where it applies that learned knowledge to real-world situations or fresh inputs.

This process typically involves the following steps:

  • Training phase: The model is exposed to large, labeled datasets (for example, images with known categories), learning to recognize key patterns and features.
  • Inference phase: The trained model receives new data (such as an unlabeled image) and applies its knowledge to generate a prediction or decision (like identifying objects within the image).
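
To make the two phases concrete, here is a minimal sketch in Python using scikit-learn; the digits dataset and random-forest model are illustrative assumptions, not part of the definition above.

    # Training phase: learn patterns from labeled examples.
    from sklearn.datasets import load_digits
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)            # images with known labels
    X_train, X_new, y_train, y_new = train_test_split(
        X, y, test_size=0.2, random_state=0)
    model = RandomForestClassifier(random_state=0)
    model.fit(X_train, y_train)                    # the model learns here

    # Inference phase: apply the trained model to new, unseen inputs.
    predictions = model.predict(X_new)             # no labels needed now
    print(predictions[:10])                        # predicted class per image

The same split applies at any scale: training is the expensive, largely one-off learning step, while inference is the fast, repeated application of what was learned.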

AI inference is fundamental because it operationalizes AI, enabling it to be embedded into real-time applications such as voice assistants, autonomous vehicles, medical diagnosis tools, and fraud detection systems. Unlike the resource-intensive training phase, inference is generally optimized for speed and efficiency—especially important for tasks on edge devices or in situations requiring immediate results.

As generative and agent-based AI applications mature, the demand for faster and more scalable inference is rapidly increasing, driving innovation in both software and hardware to support these real-time or high-volume use cases.

A major shift in AI inference is occurring as new elements—such as test-time compute (TTC), chain-of-thought reasoning, and adaptive inference—reshape how and where computational resources are allocated in AI systems.

Expanded Elements in AI Inference

  • Test-Time Compute (TTC): This refers to the computational effort expended during inference rather than during initial model training. Traditionally, inference consisted of a single, fast forward pass through the model, regardless of the complexity of the question. Recent advances, particularly in generative AI and large language models, involve dynamically increasing compute at inference time for more challenging problems. This allows the model to “think harder” by performing additional passes, iterative refinement, or evaluating multiple candidate responses before selecting the best answer (a pattern sketched in code after this list).

  • Chain-of-Thought Reasoning: Modern inference can include step-by-step reasoning, where models break complex problems into sub-tasks and generate intermediate steps before arriving at a final answer. This process may require significantly more computation during inference, as the model deliberates and evaluates alternative solutions—mimicking human-like problem solving rather than instant pattern recognition.

  • Adaptive Compute Allocation: With TTC, AI systems can allocate more resources dynamically based on the difficulty or novelty of the input. Simple questions might still get an immediate, low-latency response, while complex or ambiguous tasks prompt the model to use additional compute cycles for deeper reasoning and improved accuracy.
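
A simplified sketch of these ideas in Python follows: best-of-N sampling is one common way to spend extra test-time compute, and the generate, score, and difficulty functions below are hypothetical placeholders standing in for a real model’s sampling, self-evaluation, and routing steps.

    import random

    def generate_candidate(prompt: str) -> str:
        # Placeholder: a real system would sample one response from a model.
        return f"candidate-{random.randint(0, 9)}"

    def score(candidate: str) -> float:
        # Placeholder: a real system might use a verifier or reward model.
        return random.random()

    def estimate_difficulty(prompt: str) -> float:
        # Crude proxy (assumption): longer prompts are treated as harder.
        return min(len(prompt) / 200.0, 1.0)

    def answer(prompt: str, max_samples: int = 8) -> str:
        # Adaptive compute allocation: easy inputs get a single fast pass;
        # harder inputs trigger extra candidate generations before selection.
        n = 1 + int(estimate_difficulty(prompt) * (max_samples - 1))
        candidates = [generate_candidate(prompt) for _ in range(n)]
        return max(candidates, key=score)  # keep the best-scoring response

    print(answer("What is 2 + 2?"))  # low difficulty: a single sample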

Impact: Shift in Compute from Training to Inference

  • From Heavy Training to Intelligent Inference: The traditional paradigm put most of the computational burden and cost on the training phase, after which inference was light and static. With TTC and chain-of-thought reasoning, more computation shifts into the inference phase. This makes inference more powerful and flexible, allowing for real-time adaptation and better performance on complex, real-world tasks without the need for ever-larger model sizes.

  • Strategic and Operational Implications: This shift enables organizations to optimize resources by focusing on smarter, context-aware inference rather than continually scaling up training infrastructure. It also allows for more responsive AI systems that can improve decision-making and user experiences in dynamic environments.

  • Industry Adoption: Modern models from leading labs (such as OpenAI and Google’s Gemini) now support iterative, compute-intensified inference modes, yielding substantial gains on benchmarks and real-world applications, especially where deep reasoning or nuanced analysis is required.

These advancements in test-time compute and reasoned inference mark a pivotal transformation in AI, moving from static, single-pass prediction to dynamic, adaptive, and resource-efficient problem-solving at the moment of inference.

Related strategy theorist: Yann LeCun

Yann LeCun is widely recognized as a pioneering theorist in neural networks and deep learning—the foundational technologies underlying modern AI inference. His contributions to convolutional neural networks and strategies for scalable, robust AI learning have shaped the current landscape of AI deployment and inference capabilities.

“AI inference is the core mechanism by which machine learning models transform training into actionable intelligence, supporting everything from real-time analysis to agent-based automation.”

Yann LeCun is a French-American computer scientist and a foundational figure in artificial intelligence, especially in the areas of deep learning, computer vision, and neural networks. Born on July 8, 1960, in Soisy-sous-Montmorency, France, he received his Diplôme d’Ingénieur from ESIEE Paris in 1983 and earned his PhD in Computer Science from Sorbonne University (then Université Pierre et Marie Curie) in 1987. His doctoral research introduced early methods for back-propagation in neural networks, foreshadowing the architectures that would later revolutionize AI.

LeCun began his research career at the Centre National de la Recherche Scientifique (CNRS) in France, focusing on computer vision and image recognition. His expertise led him to postdoctoral work at the University of Toronto, where he collaborated with other leading minds in neural networks. In 1988, he joined AT&T Bell Laboratories in New Jersey, eventually becoming head of the Image Processing Research Department. There, LeCun led the development of convolutional neural networks (CNNs), which became the backbone for modern image and speech recognition systems. His technology for handwriting and character recognition was widely adopted in banking, reading a significant share of checks in the U.S. in the early 2000s.

LeCun also contributed to the creation of DjVu, a high-efficiency image compression technology, and the Lush programming language. In 2003, he became a professor at New York University (NYU), where he founded the NYU Center for Data Science, advancing interdisciplinary AI research.

In 2013, LeCun became Director of AI Research at Facebook (now Meta), where he leads the Facebook AI Research (FAIR) division, focusing on both theoretical and applied AI at scale. His leadership at Meta has pushed forward advancements in self-supervised learning, agent-based systems, and the practical deployment of deep learning technologies.

LeCun, along with Yoshua Bengio and Geoffrey Hinton, received the 2018 Turing Award—the highest honor in computer science—for his pioneering work in deep learning. The trio is often referred to as the “Godfathers of AI” for their collective influence on the field.

 

Yann LeCun’s Thinking and Approach

LeCun’s intellectual focus is on building intelligent systems that can learn from data efficiently and with minimal human supervision. He strongly advocates for self-supervised and unsupervised learning as the future of AI, arguing that these approaches best mimic how humans and animals learn. He believes that for AI to reach higher forms of reasoning and perception, systems must be able to learn from raw, unlabeled data and develop internal models of the world.

LeCun is also known for his practical orientation—developing architectures (like CNNs) that move beyond theory to solve real-world problems efficiently. His thinking consistently emphasizes the importance of scaling AI not just through bigger models, but through more robust, data-efficient, and energy-efficient algorithms.

He has expressed skepticism about narrow, brittle AI systems that rely heavily on supervised learning and excessive human labeling. Instead, he envisions a future where AI agents can learn, reason, and plan with broader autonomy, similar to biological intelligence. This vision guides his research and strategic leadership in both academia and industry.

LeCun remains a prolific scientist, educator, and spokesperson for responsible and open AI research, championing collaboration and the broad dissemination of AI knowledge.

Quote: Andrew Ng – AI Guru

“For the majority of businesses, focus on building applications using agentic workflows rather than solely scaling traditional AI. That’s where the greatest opportunity lies.” – Andrew Ng – AI Guru

Andrew Ng is widely recognized as a pioneering figure in artificial intelligence, renowned for his roles as co-founder of Google Brain, former chief scientist at Baidu, and founder of DeepLearning.AI and Landing AI. His work has shaped the trajectory of modern AI, influencing its academic, industrial, and entrepreneurial development on a global scale.

The quote “For the majority of businesses, focus on building applications using agentic workflows rather than solely scaling traditional AI. That’s where the greatest opportunity lies.” captures a key transformation underway in how organizations approach AI adoption. Ng delivered this insight during a Luminary Talk at the Snowflake Summit in June 2024, in a discussion centered on the rise of agentic workflows within AI applications.

Historically, businesses have harnessed AI by leveraging static, rule-based automation or applying large language models to single-step tasks—prompting a system to generate a document or answer a question in one go. Ng argues this paradigm is now giving way to a new era driven by AI agents capable of multi-step reasoning, planning, tool use, and collaboration—what he terms “agentic workflows”.

Agentic workflows differ from traditional approaches by allowing autonomous AI agents to adapt, break down complex projects, and iterate in real time, much as a human team might tackle a multifaceted problem. For example, instead of a single prompt generating a sales report, an AI agent in an agentic workflow could gather the relevant data, perform analysis, adjust its approach based on interim findings, and refine the output after successive rounds of review and self-critique. Ng has highlighted design patterns such as reflection, planning, multi-agent collaboration, and dynamic tool use as central to these workflows.
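
A minimal sketch of the reflection pattern Ng describes, in Python; the draft, critique, and revise functions are hypothetical placeholders standing in for calls to an underlying model:

    def draft(task: str) -> str:
        # Placeholder: a model call that produces a first attempt.
        return f"Draft analysis for: {task}"

    def critique(work: str) -> str:
        # Placeholder: a model call that reviews its own output.
        return "Add supporting data and a clearer recommendation."

    def revise(work: str, feedback: str) -> str:
        # Placeholder: a model call that incorporates the feedback.
        return f"{work} [revised to address: {feedback}]"

    def agentic_workflow(task: str, rounds: int = 2) -> str:
        # Iterate (draft, self-critique, refine) rather than emit a
        # one-shot output from a single prompt.
        work = draft(task)
        for _ in range(rounds):
            work = revise(work, critique(work))
        return work

    print(agentic_workflow("Q2 sales report"))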

Ng’s perspective is that businesses stand to gain the most not merely from increasing the size or data intake of AI models, but from designing systems where AI agents can independently coordinate and accomplish sophisticated goals. He likens this shift to the leap from single-threaded to multi-threaded computing, opening up exponential gains in capability and value creation.

For business leaders, Andrew Ng’s vision offers a roadmap: the frontier of competitive advantage lies in reimagining how AI-powered agents are integrated into business processes, unlocking new possibilities for efficiency, innovation, and scalability that go beyond what traditional, “one-shot” AI can deliver.

Ng continues to lead at the intersection of AI innovation and practical business strategy, championing agentic AI as the next great leap for organizations seeking to realize the full promise of artificial intelligence.

Term: AI Agents

AI Agents are autonomous software systems that interact with their environment, perceive data, and independently make decisions and take actions to achieve specific, user-defined goals. Unlike traditional software, which follows static, explicit instructions, AI agents are guided by objective functions and have the ability to reason, learn, plan, adapt, and optimize responses based on real-time feedback and changing circumstances.

Key characteristics of AI agents include:

  • Autonomy: They can initiate and execute actions without constant human direction, adapting as new data or situations arise.
  • Rational decision-making: AI agents use data and perceptions of their environment to select actions that maximize predefined goals or rewards (their “objective function”), much like rational agents in economics.
  • Learning and Adaptation: Through techniques like machine learning, agents improve their performance over time by learning from experience.
  • Multimodal abilities: Advanced agents process various types of input/output—text, audio, video, code, and more—and often collaborate with humans or other agents to complete complex workflows or transactions.
  • Versatility: They range from simple (like thermostats) to highly complex systems (like conversational AI assistants or autonomous vehicles).

Examples include virtual assistants that manage calendars or customer support, code-review bots in software development, self-driving cars navigating traffic, and collaborative agents that orchestrate business processes.
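
As an illustration of the perceive-decide-act loop behind these systems, here is a minimal Python sketch of the simplest agent mentioned above, a thermostat; the environment model and target temperature are assumptions for demonstration only.

    class ThermostatAgent:
        # Minimal agent: perceives the temperature, acts to meet an objective.
        def __init__(self, target: float = 20.0):
            self.target = target  # objective: hold temperature near the target

        def decide(self, temperature: float) -> str:
            # Rational decision-making: choose the action that moves the
            # environment toward the agent's objective.
            if temperature < self.target - 0.5:
                return "heat_on"
            if temperature > self.target + 0.5:
                return "heat_off"
            return "idle"

    def run(agent: ThermostatAgent, temperature: float, steps: int = 5) -> None:
        for _ in range(steps):
            action = agent.decide(temperature)  # perceive and decide
            if action == "heat_on":
                temperature += 1.0              # act on the environment
            elif action == "heat_off":
                temperature -= 0.5
            print(f"temp={temperature:.1f} action={action}")

    run(ThermostatAgent(), temperature=17.0)

More capable agents replace the hand-written decision rule with learned policies and add memory, planning, and tool use, but the underlying loop is the same.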

Related Strategy Theorist – Stuart Russell

As a renowned AI researcher and co-author of the seminal textbook “Artificial Intelligence: A Modern Approach,” Russell has shaped foundational thinking on agent-based systems and rational decision-making. He has also been at the forefront of advocating for the alignment of agent objectives with human values, providing strategic frameworks for deploying autonomous agents safely and effectively across industries.

Quote: Ilya Sutskever – Safe Superintelligence

“AI will do all the things that we can do. Not just some of them, but all of them. The big question is what happens then: Those are dramatic questions… the rate of progress will become really extremely fast for some time at least, resulting in unimaginable things. And in some sense, whether you like it or not, your life is going to be affected by AI to a great extent.” – Ilya Sutskever – Safe Superintelligence

Ilya Sutskever stands among the most influential figures shaping the modern landscape of artificial intelligence. Born in Russia and raised in Israel and Canada, Sutskever’s early fascination with mathematics and computer programming led him to the University of Toronto, where he studied under the legendary Geoffrey Hinton. His doctoral work broke new ground in deep learning, particularly in developing recurrent neural networks and sequence modeling—technologies that underpin much of today’s AI-driven language and translation systems.

Sutskever’s career is marked by a series of transformative achievements. He co-invented AlexNet, a neural network that revolutionized computer vision and triggered the deep learning renaissance. At Google Brain, he advanced sequence-to-sequence models, laying the foundation for breakthroughs in machine translation. As a co-founder and chief scientist at OpenAI, Sutskever played a pivotal role in developing the GPT series of language models, which have redefined what machines can achieve in natural language understanding and generation.

Beyond his technical contributions, Sutskever is recognized for his thought leadership on the societal implications of AI. He has consistently emphasized the unpredictable nature of advanced AI systems, particularly as they acquire reasoning capabilities that may outstrip human understanding. His recent work focuses on AI safety and alignment, co-founding Safe Superintelligence Inc. to ensure that future superintelligent systems act in ways beneficial to humanity.

The quote featured today encapsulates Sutskever’s vision: a world where AI’s capabilities will extend to all domains of human endeavor, bringing about rapid and profound change. For business leaders and strategists, his words are both a warning and a call to action—highlighting the necessity of anticipating technological disruption and embracing innovation at a pace that matches AI’s accelerating trajectory.

Term: Artificial General Intelligence (AGI)

Artificial General Intelligence (AGI) is defined as a form of artificial intelligence that can understand, learn, and apply knowledge across the full spectrum of human cognitive tasks—matching or even exceeding human capabilities in any intellectual endeavor. Unlike current artificial intelligence systems, which are typically specialized (known as narrow AI) and excel only in specific domains such as language translation or image recognition, AGI would possess the versatility and adaptability of the human mind.

AGI would enable machines to perform essentially all human cognitive tasks at or above top expert level, to acquire new skills, and to transfer those capabilities to entirely new domains. Rather than matching any single human, it would embody the combined expertise of top minds across all fields.

Alternative Name – Superintelligence:
The term superintelligence or Artificial Superintelligence (ASI) refers to an intelligence that not only matches but vastly surpasses human abilities in virtually every aspect. While AGI is about equaling human-level intelligence, superintelligence describes systems that can independently solve problems, create knowledge, and innovate far beyond even the best collective human intellect.

 
Levels of machine intelligence:

  • Narrow AI: Specialized systems that perform limited tasks (e.g., playing chess, image recognition)
  • AGI: Systems with human-level cognitive abilities across all domains, adaptable and versatile
  • Superintelligence: Intelligence that exceeds human capabilities in all domains, potentially by wide margins

Key contrasts between AGI and (narrow) AI:

  • Scope: AGI can generalize across different tasks and domains; narrow AI is limited to narrowly defined problems.
  • Learning and Adaptation: AGI learns and adapts to new situations much as humans do, while narrow AI cannot easily transfer skills to new, unfamiliar domains.
  • Cognitive Sophistication: AGI mimics the full range of human intelligence; narrow AI does not.
 

Strategy Theorist — Ilya Sutskever:
Ilya Sutskever is a leading figure in the pursuit of AGI, known for his foundational contributions to deep learning and as a co-founder of OpenAI. Sutskever’s work focuses on developing models that move beyond narrow applications toward truly general intelligence, shaping both the technical roadmap and ethical debate around AGI’s future.

Ilya Sutskever’s views on the impact of superintelligence are characterized by a blend of optimism for its transformative potential and deep caution regarding its unpredictability and risks. Sutskever believes superintelligence could revolutionize industries, particularly healthcare, and deliver unprecedented economic, social, and scientific breakthroughs within the next decade. He foresees AI as a force that can solve complex problems and dramatically extend human capabilities. For business, this implies radical shifts: automating sophisticated tasks, generating new industries, and redefining competitive advantages as organizations adapt to a new intelligence landscape.

However, Sutskever consistently stresses that the rise of superintelligent AI is “extremely unpredictable and unimaginable,” warning that its self-improving nature could quickly move beyond human comprehension and control. He argues that while the rewards are immense, the risks—including loss of human oversight and the potential for misuse or harm—demand proactive, ethical, and strategic guidance. Sutskever champions the need for holistic thinking and interdisciplinary engagement, urging leaders and society to prepare for AI’s integration not with fear, but with ethical foresight, adaptation, and resilience.

He has prioritized AI safety and “superalignment” as central to his strategies, both at OpenAI and through his new Safe Superintelligence venture, actively seeking mechanisms to ensure that the economic and societal gains from superintelligence do not come at unacceptable risks. Sutskever’s message for corporate leaders and policymakers is to engage deeply with AI’s trajectory, innovate responsibly, and remain vigilant about both its promise and its perils.

In summary, AGI is the milestone where machines achieve general, human-equivalent intelligence, while superintelligence describes a level of machine intelligence that greatly surpasses human performance. The pursuit of AGI, championed by theorists like Ilya Sutskever, represents a profound shift in both the potential and challenges of AI in society.

Quote: Tom Davenport — Academic, consultant, author

“AI doesn’t replace strategic thinking—it accelerates it.” — Tom Davenport — Academic, consultant, author

Tom Davenport’s quote captures the essence of the relationship between human judgment and advances in artificial intelligence. Davenport, a leading authority on analytics and business process innovation, has spent decades studying how organizations make decisions and adopt new technologies.

As AI systems have rapidly evolved—from early rule-based approaches to today’s powerful generative models—their promise is often misunderstood. Some fear AI might make human thinking obsolete, especially in complex arenas like strategy. Davenport has consistently challenged this notion. He argues that AI’s true value lies in amplifying, not eliminating, the need for rigorous, creative, and forward-looking thought. AI is a tool that enables strategists to test more ideas, analyze larger datasets, and see farther into future possibilities—but it is strategic thinking, shaped by human experience and ambition, that guides AI toward meaningful goals.

Davenport’s perspective is grounded in his extensive work with businesses and his scholarship at leading universities. In his conversations and writings, he notes that while AI democratizes access to information and automates routine analysis, a competitive edge still hinges on asking the right questions and crafting distinctive strategies. The leaders who thrive in the AI era are those who learn to harness its speed and breadth, using it to accelerate the cycles of planning, validation, and innovation rather than replace the uniquely human qualities of insight and judgment.

About Tom Davenport

Tom Davenport, born in 1954, is an influential American academic, business consultant, and author. He specializes in analytics, business process innovation, and knowledge management. Davenport is well known for his pioneering books such as Competing on Analytics and his widely cited research on how organizations create value from data. A longtime professor at Babson College and a fellow of the MIT Initiative on the Digital Economy, he has helped shape how leaders think about information, technology, and business transformation.

Davenport’s views on AI are informed by years of advising Fortune 500 companies, conducting academic research, and contributing to thought leadership at the intersection of technology and management. His insights have been instrumental in helping organizations adapt to the changing landscape of digital innovation, emphasizing that technology serves best when paired with human creativity, analytical rigor, and strategic vision.

Quote: Ginni Rometty, Former IBM CEO

“Artificial intelligence is not a strategy, but a means to rethink your strategy.” — Ginni Rometty, Former IBM CEO

Ginni Rometty’s statement, “Artificial intelligence is not a strategy, but a means to rethink your strategy,” emerged from her front-row vantage point in one of the era’s most significant technological transformations. As the first woman to serve as chairman, president, and CEO of IBM, Rometty’s nearly four-decade career at the company offers a compelling backdrop to her insight.

Her leadership at IBM began in 2012, at a time when the company confronted industry-wide disruption driven by the rise of cloud computing, big data, and artificial intelligence. Rometty recognized early on that AI—while transformative—was not a plug-and-play solution, but a set of tools that could empower organizations to fundamentally reshape their approaches to competition, operations, and growth. This realization guided IBM’s pivot toward cognitive computing, analytics, and cloud-based solutions during her tenure.

A defining episode during Rometty’s leadership was IBM’s acquisition of the open-source powerhouse Red Hat for $34 billion—a strategic move to anchor IBM’s transition into the cloud era and enable clients to rethink how they deliver value in increasingly digital markets. Throughout these changes, Rometty was adamant: adopting technologies like AI is not an end in itself but a catalyst for critically reexamining and reinventing business strategies.

The quote distills her conviction that simply acquiring cutting-edge technology is not sufficient. Instead, success depends on leaders’ willingness to challenge old assumptions and design new strategies that fully leverage the potential of AI. Rometty’s perspective, forged by navigating IBM through turbulent shifts, underscores the necessity of using innovation to reimagine, not merely digitize, the future of enterprise.

About Ginni Rometty

Ginni Rometty, born in 1957, joined IBM as a systems engineer in 1981 and steadily advanced through key leadership roles—culminating in her appointment as CEO from 2012 to 2020. During her tenure, she spearheaded bold decisions: negotiating the purchase of PricewaterhouseCoopers’ IT consulting business in 2002, prioritizing investments in cloud, analytics, and cognitive computing, and repositioning IBM for the demands and opportunities of the modern digital landscape.

Her leadership style and vision earned her recognition among Bloomberg’s 50 Most Influential People in the World, Fortune’s “50 Most Powerful Women in Business,” and Forbes’ Top 50 Women in Tech. While her tenure included periods of financial challenge and criticism over IBM’s performance, Rometty’s overarching legacy is her focus on transformation—seeing technology as a lever for reinventing strategy, not merely executing it.

This context enriches the meaning of her quote, highlighting its origins in both lived experience and hard-won leadership insight.

Quote: Andrew Ng, AI guru

“In the age of AI, strategy is no longer just about where to play; it’s about how to adapt.” — Andrew Ng, AI guru

This quote from Andrew Ng captures a profound shift in how organizations and leaders must approach strategy in the era of artificial intelligence. Traditionally, strategic planning has focused on identifying the right markets, customers, or products—the “where to play” aspect. However, as AI rapidly transforms industries, Ng argues that the ability to adapt to ongoing technological changes has become just as crucial, if not more so.

The background for this perspective stems from Ng’s deep involvement in the practical deployment of AI at scale. With advances in machine learning and automation, the competitive landscape is continuously evolving. It is no longer enough to set a single strategic direction; leaders need to develop organizational agility to embrace new technologies and iterate their models, processes, and offerings in response to rapid change. Ng’s message emphasizes that AI is not a static tool, but a disruptive force that requires companies to rethink how they respond to uncertainty and opportunity. This shift from fixed planning to adaptive learning mirrors the very nature of AI systems themselves, which are designed to learn, update, and improve over time.

Ng’s insight also reflects his broader view that AI should be used to automate routine tasks, freeing up human talent to focus on creative, strategic, and adaptive functions. As such, the modern strategic imperative is about continually repositioning and reinventing—not just staking out a position and defending it.

About Andrew Ng

Andrew Ng is one of the world’s most influential figures in artificial intelligence and machine learning. Born in 1976, he is a British-American computer scientist and technology entrepreneur. Ng co-founded Google Brain, where he played a pivotal role in advancing deep learning research, and later served as Chief Scientist at Baidu, leading a large AI group. He is also a prominent educator, co-founding Coursera and creating widely popular online courses that have democratized access to AI knowledge for millions worldwide.

Ng has consistently advocated for practical, human-centered adoption of AI. He introduced the widely referenced idea that “AI is the new electricity,” underscoring its foundational and transformative impact across industries. He has influenced both startups and established enterprises through initiatives such as Landing AI and the AI Fund, which focus on applying AI to real-world problems and fostering AI entrepreneurship.

Andrew Ng is known for his clear communication and balanced perspective on the opportunities and challenges of AI. Recognized globally for his contributions, he has been named among Time magazine’s 100 Most Influential People and continues to shape the trajectory of AI through his research, teaching, and thought leadership. His work encourages businesses and individuals alike to not only adopt AI technologies, but to cultivate the adaptability and critical thinking needed to thrive in an age of constant change.

Quote: Daniel Kahneman, Nobel Laureate

“AI is great at multitasking: it can misunderstand five tasks at once.” — Daniel Kahneman, Nobel Laureate

This wry observation from Daniel Kahneman highlights the persistent gap between expectation and reality in the deployment of artificial intelligence. As AI systems increasingly promise to perform multiple complex tasks—ranging from analyzing data and interpreting language to making recommendations—there remains a tendency to overestimate their capacity for genuine understanding. Kahneman’s quote playfully underscores how, far from being infallible, AI can compound misunderstandings when juggling several challenges simultaneously.

The context for this insight is rooted in Kahneman’s lifelong exploration of the limits of decision-making—first in humans, and, by extension, in the systems designed to emulate or augment human judgment. AI’s appeal often stems from its speed and apparent ability to handle many tasks at once. However, as with human cognition, multitasking can amplify errors if the underlying comprehension is lacking or the input data is ambiguous. Kahneman’s expertise in uncovering the predictable errors and cognitive biases that affect human reasoning makes his skepticism toward AI’s supposed multitasking prowess particularly telling. The remark serves as a reminder to remain critical and measured in evaluating AI’s true capabilities, especially in contexts where precision and nuance are essential.

About Daniel Kahneman

Daniel Kahneman (1934–2024) was an Israeli-American psychologist whose groundbreaking work revolutionized the understanding of human judgment, decision-making, and the psychology of risk. Awarded the 2002 Nobel Memorial Prize in Economic Sciences, he was recognized “for having integrated insights from psychological research into economic science, especially concerning human judgment and decision-making under uncertainty”.

Together with collaborator Amos Tversky, Kahneman identified a series of cognitive heuristics and biases—systematic errors in thinking that affect the way people judge probabilities and make decisions. Their work led to the development of prospect theory, which challenged the traditional economic view that humans are rational actors, and established the foundation of behavioral economics.

Kahneman’s research illuminated how individuals routinely overgeneralize from small samples, fall prey to stereotyping, and exhibit overconfidence—even when handling simple probabilities. His influential book, Thinking, Fast and Slow, distilled decades of research into a compelling narrative about how the mind works, the pitfalls of intuition, and the enduring role of error in human reasoning.

In his later years, Kahneman continued to comment on the limitations of decision-making processes, increasingly turning his attention to how these limits inform the development and evaluation of artificial intelligence. His characteristic blend of humor and rigor, as exemplified in the quoted observation about AI multitasking, continues to inspire thoughtful scrutiny of technology and its role in society.

Quote: Andrew Ng, AI guru

“AI is like teenage sex—everyone talks about it, nobody really knows how to do it.” — Andrew Ng, AI guru

This quote from Andrew Ng captures the sense of hype, confusion, and uncertainty that has often surrounded artificial intelligence (AI) in recent years. Delivered with humor, it reflects the atmosphere in which AI has become a buzzword: widely discussed in boardrooms, newsrooms, and tech circles, yet rarely understood in its real-world applications or complexities.

The backdrop to this quote is the rapid growth in public and corporate interest in AI. From the early days of AI research in the mid-20th century, the field has experienced cycles of intense excitement (“AI springs”) and subsequent setbacks (“AI winters”), often fueled by unrealistic expectations and misunderstanding of the technology’s actual capabilities. In the last decade, as machine learning and deep learning began to make headlines with breakthroughs in image recognition, natural language processing, and game-playing, many organizations felt pressure to claim they were leveraging AI—regardless of whether they truly understood how to implement it or what it could achieve.

Ng’s remark wittily punctures the inflated discourse by suggesting that, like teenage sex, the reality of AI is far less straightforward than the bravado implies. It serves as both a caution and an invitation: to move beyond surface-level conversations and focus instead on genuine understanding and effective implementation.

About Andrew Ng

Andrew Ng is one of the most influential figures in artificial intelligence and machine learning. He is known for his clear-eyed optimism and his ability to communicate complex technical ideas in accessible language. Ng co-founded Google Brain, led Baidu’s AI Group, and launched the pioneering online machine learning course on Coursera, which has introduced AI to millions worldwide.

Ng frequently emphasizes AI’s transformative potential, famously stating that “AI is the new electricity”—suggesting that, much like electricity revolutionized industries in the past, AI will fundamentally change every sector in the coming decades. Beyond technical achievement, he advocates for practical and responsible adoption of AI, striving to bridge the gap between hype and meaningful progress.

His humorous comparison of AI discourse to teenage sex has become a memorable and oft-cited line at technology conferences and in articles. It encapsulates not only the social dynamics at play in emerging technological fields, but also Ng’s approachable style and his drive to demystify artificial intelligence for a broader audience.

Quote: Satya Nadella, Chairman and CEO of Microsoft

“Somebody said to me once, … ‘You don’t get fit by watching others go to the gym. You have to go to the gym.’” – Satya Nadella, the Chairman and CEO of Microsoft

The quote—“Somebody said to me once, … ‘You don’t get fit by watching others go to the gym. You have to go to the gym.’” — comes from an interview conducted immediately after Microsoft Build 2025, a flagship event that showcased the company’s vision for the agentic web and the next era of AI-powered productivity. Nadella used this metaphor to underscore a central pillar of his leadership philosophy: the necessity of hands-on engagement and personal transformation, rather than passive observation or reliance on case studies.

In the interview, Nadella reflected on how, during times of rapid technological change, the only way for organizations—and individuals—to adapt is through direct, committed participation. He emphasized that no amount of studying the successes of others can substitute for real-world experimentation, learning, and iteration. For Nadella, this approach is critical not only for businesses grappling with disruptive technologies, but also for professionals seeking to remain resilient and relevant.

Satya Nadella, Chairman and CEO of Microsoft, has long been recognized as the architect of Microsoft’s modern resurgence. Born in Hyderabad, India, in 1967, Nadella’s formative years combined a love for cricket with an early fascination for technology. He pursued electrical engineering in India before moving to the United States for graduate studies, laying the technical and managerial foundation that would define his career.

Joining Microsoft in 1992, Nadella rapidly advanced through various engineering and leadership roles. Early in his tenure, he played a key role in the development of Windows NT, setting the stage for his future focus on enterprise solutions. By the early 2010s, he had taken the helm of Microsoft’s cloud and enterprise initiatives, leading the creation and growth of Microsoft Azure—a service that would become a cornerstone of the company and one of the largest cloud platforms globally.

When he was appointed CEO in 2014, Microsoft faced a period of stagnation, with mounting internal competition, disappointing product launches, and declining morale. Nadella initiated a deliberate shift, championing a “mobile-first, cloud-first” strategy and transforming the company’s culture to prioritize collaboration, empathy, and a growth mindset. This new approach reinvigorated Microsoft, producing a decade of unprecedented innovation and market success, and making the company once again one of the world’s most valuable enterprises.

Announcements at Microsoft Build 2025

The Microsoft Build 2025 event marked a pivotal moment in the company’s AI strategy. Key announcements included:

  • The introduction of an “agentic web,” powered by collaborative AI agents embedded throughout the Microsoft ecosystem.
  • Deeper integration of AI into products like Microsoft 365 Copilot, Teams, and GitHub—enabling knowledge workers and developers to orchestrate complex workflows and automate repetitive tasks through AI-powered agents.
  • The rollout of Copilot fine-tuning, empowering enterprises to customize AI models with their proprietary data for a true competitive edge.
  • Demonstrations of “proactive agents” capable of autonomously interpreting intent and executing tasks across applications, further reducing the friction between user goals and technological execution.

These announcements illustrate the forward-leaning trajectory Nadella has set for Microsoft, blending technical prowess with an ethos of adaptability and continuous reinvention. His quote, situated in this context, is a rallying call: the future belongs to those willing to step into the arena, learn by doing, and transform alongside the technology they seek to harness.

Quote: Sholto Douglas, Anthropic researcher

“We believe coding is extremely important because coding is that first step in which you will see AI research itself being accelerated… We think it is the most important leading indicator of model capabilities.”

Sholto Douglas, Anthropic researcher

Sholto Douglas is regarded as one of the most promising new minds in artificial intelligence research. Having graduated from the University of Sydney with a degree in Mechatronic (Space) Engineering under the guidance of Ian Manchester and Stefan Williams, Douglas entered the field of AI less than two years ago, quickly earning respect for his innovative contributions. At Anthropic, one of the leading AI research labs, he specializes in scaling reinforcement learning (RL) techniques within advanced language models, focusing on pushing the boundaries of what large language models can learn and execute autonomously.

Context of the Quote

The quote, delivered by Douglas in an interview with Redpoint—a venture capital firm known for its focus on disruptive startups and technology—underscores the central thesis driving Anthropic’s recent research efforts:

“We believe coding is extremely important because coding is that first step in which you will see AI research itself being accelerated… We think [coding is] the most important leading indicator of model capabilities.”

This statement reflects both the technical philosophy and the strategic direction of Anthropic’s latest research. Douglas views coding not only as a pragmatic benchmark but as a foundational skill that unlocks model self-improvement and, by extension, accelerates progress toward artificial general intelligence (AGI).

Claude 4 Launch: Announcements and Impact

Douglas’ remarks came just ahead of the public unveiling of Anthropic’s Claude 4, the company’s most sophisticated model to date. The event highlighted several technical milestones:

  • Reinforcement Learning Breakthroughs: Douglas described how, over the past year, RL techniques in language models had evolved from experimental to demonstrably successful, especially in complex domains like competitive programming and advanced mathematics. For the first time, they achieved “proof of an algorithm that can give us expert human reliability and performance, given the right feedback loop”.
  • Long-Term Vision: The launch positioned coding proficiency as the “leading indicator” for broader model capabilities, setting the stage for future models that can meaningfully contribute to their own research and improvement.
  • Societal Implications: Alongside the technical announcements, the event and subsequent interviews addressed how rapid advances in AI—exemplified by Claude 4—will impact industries, labor markets, and global policy, urging stakeholders to prepare for a world where AI agents are not just tools but collaborative problem-solvers.
 

Why This Moment Matters

Douglas’ focus on coding as a metric is rooted in the idea that tasks requiring deep logic and creative problem-solving, such as programming, provide a “canary in the coal mine” for model sophistication. Success in these domains demonstrates a leap not only in computational power or data processing, but in the ability of AI models to autonomously reason, plan, and build tools that further accelerate their own learning cycles.

The Claude 4 launch, and Douglas’ role within it, marks a critical inflection point in AI research. The ability of language models to code at—or beyond—expert human levels signals the arrival of AI systems capable of iteratively improving themselves, raising both hopes for extraordinary breakthroughs and urgent questions around safety, alignment, and governance.

Sholto Douglas’ Influence

Though relatively new to the field, Douglas has emerged as a thought leader shaping Anthropic’s approach to scalable, interpretable, and safe AI. His insights bridge technical expertise and strategic foresight, providing a clear-eyed perspective on the trajectory of rapidly advancing language models and their potential to fundamentally reshape the future of research and innovation.

Quote: Jensen Huang, Nvidia CEO

“AI inference token generation has surged tenfold in just one year, and as AI agents become mainstream, the demand for AI computing will accelerate. Countries around the world are recognizing AI as essential infrastructure – just like electricity and the internet.”

Jensen Huang, Nvidia CEO

Context: The Nvidia 2026 Q1 results

On May 28, 2025, NVIDIA announced its financial results for the first quarter of fiscal year 2026, reporting record-breaking revenue of $44.1 billion, a 69% increase from the previous year. This surge was primarily driven by robust demand for AI chips, with the data center segment contributing significantly, achieving a 73% year-over-year revenue increase to $39.1 billion.

Despite these impressive figures, NVIDIA faced challenges due to U.S. export restrictions on its H20 chips to China, resulting in a $4.5 billion charge for excess inventory and an anticipated $8 billion revenue loss in the second quarter. During the earnings call, Huang criticized these restrictions, stating they have inadvertently spurred innovation in China rather than curbing it.

In the context of these developments, Huang remarked, “AI inference token generation has surged tenfold in just one year, and as AI agents become mainstream, the demand for AI computing will accelerate. Countries around the world are recognizing AI as essential infrastructure—just like electricity and the internet.” This statement underscores the transformative impact of AI across various sectors and highlights the critical role of AI infrastructure in modern economies.

Under Huang’s leadership, NVIDIA has not only achieved remarkable financial success but has also been at the forefront of AI and computing innovations. His strategic vision continues to shape the company’s trajectory, navigating complex international dynamics while driving technological progress.

Jensen Huang: Visionary Leader Behind Nvidia

Early Life and Education

Jensen Huang, born in Tainan, Taiwan, in 1963, immigrated to the United States at a young age. He pursued his undergraduate studies in electrical engineering at Oregon State University, earning a Bachelor of Science degree, and later completed a Master of Science in Electrical Engineering at Stanford University. Before founding Nvidia, Huang gained industry experience at LSI Logic and Advanced Micro Devices (AMD), building a foundation in semiconductor technology and business leadership.

Founding Nvidia and Early Struggles

In 1993, at the age of 30, Huang co-founded Nvidia with Chris Malachowsky and Curtis Priem. The company’s inception was humble—its first meetings took place in a local Denny’s restaurant. The early years were marked by intense challenges and uncertainty. Nvidia’s initial focus on graphics accelerator chips nearly led to its demise, with the company surviving on a critical $5 million investment from Sega. By 1997, Nvidia was just a month away from running out of payroll funds before the release of the RIVA 128 chip turned its fortunes around.

Huang’s leadership style was forged in these difficult times. He often reminded his team, “Our company is thirty days from going out of business,” a mantra that underscored the urgency and resilience required to survive in Silicon Valley’s fast-paced environment. Huang has credited these hardships as essential to his growth as a leader and to Nvidia’s eventual success.

Transforming the Tech Landscape

Under Huang’s stewardship, Nvidia introduced the Graphics Processing Unit (GPU) in 1999, revolutionizing computer graphics and catalyzing the growth of the PC gaming industry. More recently, Nvidia has become a central player in the rise of artificial intelligence (AI) and accelerated computing, with its hardware and software platforms powering breakthroughs in data centers, autonomous vehicles, and generative AI.

Huang’s vision and execution have earned him widespread recognition, including election to the National Academy of Engineering, the Semiconductor Industry Association’s Robert N. Noyce Award, the IEEE Founder’s Medal, and inclusion in TIME magazine’s list of the 100 most influential people.

Quote: Jensen Huang, Nvidia CEO

“The question is not whether China will have AI, it already does.”

Jensen Huang, Nvidia CEO

Context: The Nvidia 2026 Q1 results

On May 28, 2025, NVIDIA announced its financial results for the first quarter of fiscal year 2026, reporting record-breaking revenue of $44.1 billion, a 69% increase from the previous year. This surge was primarily driven by robust demand for AI chips, with the data center segment contributing significantly, achieving a 73% year-over-year revenue increase to $39.1 billion.

Despite these impressive figures, NVIDIA faced challenges due to U.S. export restrictions on its H20 chips to China, resulting in a $4.5 billion charge for excess inventory and an anticipated $8 billion revenue loss in the second quarter. During the earnings call, Huang criticized these restrictions, stating they have inadvertently spurred innovation in China rather than curbing it.

Huang’s statement, “The question is not whether China will have AI, it already does,” underscores his perspective on the global AI landscape. He emphasized that export controls may not prevent technological advancements in China but could instead accelerate domestic innovation. This viewpoint reflects Huang’s broader understanding of the interconnectedness of global technology development and the challenges posed by geopolitical tensions. He followed by stating, “The question is whether one of the world’s largest AI markets will run on American platforms. Shielding Chinese chipmakers from U.S. competition only strengthens them abroad and weakens America’s position.”

Under Huang’s leadership, NVIDIA has not only achieved remarkable financial success but has also been at the forefront of AI and computing innovations. His strategic vision continues to shape the company’s trajectory, navigating complex international dynamics while driving technological progress.

Jensen Huang: Visionary Leader Behind Nvidia

Early Life and Education

Jensen Huang, born in Tainan, Taiwan, in 1963, immigrated to the United States at a young age. He pursued his undergraduate studies in electrical engineering at Oregon State University, earning a Bachelor of Science degree, and later completed a Master of Science in Electrical Engineering at Stanford University. Before founding Nvidia, Huang gained industry experience at LSI Logic and Advanced Micro Devices (AMD), building a foundation in semiconductor technology and business leadership.

Founding Nvidia and Early Struggles

In 1993, at the age of 30, Huang co-founded Nvidia with Chris Malachowsky and Curtis Priem. The company’s inception was humble—its first meetings took place in a local Denny’s restaurant. The early years were marked by intense challenges and uncertainty. Nvidia’s initial focus on graphics accelerator chips nearly led to its demise, with the company surviving on a critical $5 million investment from Sega. By 1997, Nvidia was just a month away from running out of payroll funds before the release of the RIVA 128 chip turned its fortunes around.

Huang’s leadership style was forged in these difficult times. He often reminded his team, “Our company is thirty days from going out of business,” a mantra that underscored the urgency and resilience required to survive in Silicon Valley’s fast-paced environment. Huang has credited these hardships as essential to his growth as a leader and to Nvidia’s eventual success.

Transforming the Tech Landscape

Under Huang’s stewardship, Nvidia introduced the Graphics Processing Unit (GPU) in 1999, revolutionizing computer graphics and catalyzing the growth of the PC gaming industry. More recently, Nvidia has become a central player in the rise of artificial intelligence (AI) and accelerated computing, with its hardware and software platforms powering breakthroughs in data centers, autonomous vehicles, and generative AI.

Huang’s vision and execution have earned him widespread recognition, including election to the National Academy of Engineering, the Semiconductor Industry Association’s Robert N. Noyce Award, the IEEE Founder’s Medal, and inclusion in TIME magazine’s list of the 100 most influential people.
