“People shouldn’t put their head in the sand. [AI] is going to affect jobs. Think of every application, every service you do; you’ll be using … AI – some to enhance it. Some of it will be you doing the same job; you’re doing a better job at it. There will be jobs that are eliminated, but you’re better off being way ahead of the curve.” – Jamie Dimon, CEO of JPMorgan Chase
Jamie Dimon delivered these observations on artificial intelligence during an interview with Bloomberg’s Tom Mackenzie in London on 7 October 2025, where he discussed JPMorgan Chase’s more than decade-long engagement with AI technology and its implications for the financial services sector. His comments reflect both the pragmatic assessment of a chief executive who has committed substantial resources to technological transformation and the broader perspective of someone who has navigated multiple economic cycles throughout his career.
The Context of Dimon’s Statement
JPMorgan Chase has been investing in AI since 2012, well before the recent generative AI explosion captured public attention. The bank now employs 2,000 people dedicated to AI initiatives and spends $2 billion annually on these efforts. This investment has already generated approximately $2 billion in quantifiable benefits, with Dimon characterising this as merely “the tip of the iceberg.” The technology permeates every aspect of the bank’s operations—from risk management and fraud detection to marketing, idea generation and customer service.
What makes Dimon’s warning particularly salient is his acknowledgement that approximately 150,000 JPMorgan employees use the bank’s suite of AI tools weekly. This isn’t theoretical speculation about future disruption; it’s an ongoing transformation within one of the world’s largest financial institutions, with assets of $4.0 trillion. The bank’s approach combines deployment across business functions with what Dimon describes as a cultural shift—managers and leaders are now expected to ask continuously: “What are you doing that we could do to serve your people? Why can’t you do better? What is somebody else doing?”
Dimon’s perspective on job displacement is notably unsentimental whilst remaining constructive. He rejects the notion of ignoring AI’s impact, arguing that every application and service will incorporate the technology. Some roles will be enhanced, allowing employees to perform better; others will be eliminated entirely. His solution centres on anticipatory adaptation rather than reactive crisis management—JPMorgan has established programmes for retraining and redeploying staff. For the bank itself, Dimon envisions more jobs overall if the institution succeeds, though certain functions will inevitably contract.
His historical framing of technological disruption provides important context. Drawing parallels to the internet bubble, Dimon noted that whilst hundreds of companies worth billions collapsed, the period ultimately produced Facebook, YouTube and Google. He applies similar logic to current AI infrastructure spending, which is approaching $1 trillion annually across the sector. There will be “a lot of losers, a lot of winners,” but the aggregate effect will prove productive for the economy.
Jamie Dimon: A Biography
Jamie Dimon has served as Chairman and Chief Executive Officer of JPMorgan Chase since 2006, presiding over its emergence as the leading US bank by domestic assets under management, market capitalisation and publicly traded stock value. Born on 13 March 1956, Dimon’s ascent through American finance has been marked by both remarkable achievements and notable setbacks, culminating in a position where he is widely regarded as the dominant banking executive of his generation.
Dimon earned his bachelor’s degree from Tufts University in 1978 before completing an MBA at Harvard Business School in 1982. His career began with a brief stint as a management consultant at Boston Consulting Group, followed by his entry into American Express, where he worked under the mentorship of Sandy Weill—a relationship that would prove formative. At the age of 30, Dimon was appointed chief financial officer of Commercial Credit, later becoming the firm’s president. This role placed him at the centre of an aggressive acquisition strategy that included purchasing Primerica Corporation in 1988 and The Travelers Corporation in 1993.
From 1990 to 1998, Dimon served as Chief Operating Officer of both Travelers and Smith Barney, eventually becoming Co-Chairman and Co-CEO of the combined brokerage following the 1997 merger of Smith Barney and Salomon Brothers. When Travelers Group merged with Citicorp in 1998 to form Citigroup, Dimon was named president of the newly created financial services giant. However, his tenure proved short-lived; he departed later that year following a conflict with Weill over leadership succession.
This professional setback led to what would become one of the defining chapters of Dimon’s career. In 2000, he was appointed CEO of Bank One, a struggling institution that required substantial turnaround efforts. When JPMorgan Chase merged with Bank One in July 2004, Dimon became president and chief operating officer of the combined entity. He assumed the role of CEO on 1 January 2006, and one year later was named Chairman of the Board.
Under Dimon’s leadership, JPMorgan Chase navigated the 2008 financial crisis with relative success, earning him recognition as one of the few banking chiefs to emerge from the period with an enhanced reputation. As Duff McDonald wrote in his 2009 book “Last Man Standing: The Ascent of Jamie Dimon and JPMorgan Chase,” whilst much of the crisis stemmed from “plain old avarice and bad judgment,” Dimon and JPMorgan Chase “stood apart,” embodying “the values of clarity, consistency, integrity, and courage”.
Not all has been smooth sailing. In May 2012, JPMorgan Chase reported losses of at least $2 billion from trades that Dimon characterised as “flawed, complex, poorly reviewed, poorly executed and poorly monitored”—an episode that became known as the “London Whale” incident and attracted investigations from the Federal Reserve, SEC and FBI. In May 2023, Dimon testified under oath in lawsuits over the bank’s decision to retain Jeffrey Epstein, the late sex offender, as a client between 1998 and 2013.
Dimon’s political evolution reflects a pragmatic centrism. Having donated more than $500,000 to Democratic candidates between 1989 and 2009 and maintained close ties to the Obama administration, he later distanced himself from strict partisan identification. “My heart is Democratic,” he told CNBC in 2019, “but my brain is kind of Republican.” He primarily identifies as a “capitalist” and a “patriot,” and served on President Donald Trump’s short-lived business advisory council before Trump disbanded it in 2017. Though he confirmed in 2016 that he would “love to be president,” he deemed a campaign “too hard and too late” and ultimately decided against serious consideration of a 2020 run. In 2024, he endorsed Nikki Haley in the Republican primary before speaking more positively about Trump following Haley’s defeat.
As of May 2025, Forbes estimated Dimon’s net worth at $2.5 billion. He serves on the boards of numerous organisations, including the Business Roundtable, Bank Policy Institute and Harvard Business School, whilst also sitting on the executive committee of the Business Council and the Partnership for New York City.
Leading Theorists on AI and Labour Displacement
The question of how artificial intelligence will reshape employment has occupied economists, technologists and social theorists for decades, producing a rich body of work that frames Dimon’s observations within broader academic and policy debates.
John Maynard Keynes introduced the concept of “technological unemployment” in his 1930 essay “Economic Possibilities for our Grandchildren,” arguing that society was “being afflicted with a new disease” caused by “our discovery of means of economising the use of labour outrunning the pace at which we can find new uses for labour.” Keynes predicted this would be a temporary phase, ultimately leading to widespread prosperity and reduced working hours. His framing established the foundation for understanding technological displacement as a transitional phenomenon requiring societal adaptation rather than permanent catastrophe.
Joseph Schumpeter developed the theory of “creative destruction” in his 1942 work “Capitalism, Socialism and Democracy,” arguing that innovation inherently involves the destruction of old economic structures alongside the creation of new ones. Schumpeter viewed this process as the essential fact about capitalism—not merely a side effect but the fundamental engine of economic progress. His work provides the theoretical justification for Dimon’s observation about the internet bubble: widespread failure and waste can coexist with transformative innovation and aggregate productivity gains.
Wassily Leontief, winner of the 1973 Nobel Prize in Economics, warned in 1983 that workers might follow the path of horses, which were displaced en masse by the automobile and the tractor in the early twentieth century. His input-output economic models attempted to trace how automation would ripple through interconnected sectors, suggesting that technological displacement might be more comprehensive than previous episodes. Leontief’s scepticism about labour’s ability to maintain bargaining power against capital in an automated economy presaged contemporary concerns about inequality and the distribution of AI’s benefits.
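To make the logic of Leontief’s framework concrete, here is a minimal sketch in Python with entirely invented coefficients: an input-output matrix links three hypothetical sectors, the Leontief inverse converts final demand into total output, and a notional AI-driven cut to one sector’s labour requirement shows how employment effects propagate beyond the sector directly automated.

```python
import numpy as np

# Minimal input-output sketch with invented numbers: three sectors
# (manufacturing, services, finance). A[i, j] is the value of sector i's
# output needed to produce one unit of sector j's output.
A = np.array([
    [0.20, 0.10, 0.05],   # manufacturing inputs per unit of output
    [0.15, 0.25, 0.20],   # services inputs per unit of output
    [0.05, 0.10, 0.15],   # finance inputs per unit of output
])

d = np.array([100.0, 200.0, 80.0])  # final demand for each sector

# Leontief's result: total output x must satisfy x = A @ x + d,
# so x = (I - A)^(-1) d. The inverse captures every indirect ripple.
L = np.linalg.inv(np.eye(3) - A)
x = L @ d
print("Total output per sector:", x.round(1))

# Employment in each sector is its labour coefficient times its output.
# Suppose AI halves the labour needed per unit of output in finance:
labour = np.array([0.30, 0.50, 0.40])          # jobs per unit of output
labour_ai = labour * np.array([1.0, 1.0, 0.5])
print("Jobs before:", (labour @ x).round(1))
print("Jobs after :", (labour_ai @ x).round(1))
```

The point of the exercise is Leontief’s own: because sectors buy from one another, the employment consequences of automating one activity cannot be read off that activity alone.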
Erik Brynjolfsson and Andrew McAfee at MIT have produced influential work on digital transformation and employment. Their 2014 book “The Second Machine Age” argued that we are in the early stages of a transformation as profound as the Industrial Revolution, with digital technologies now able to perform cognitive tasks previously reserved for humans. They drew on the economics of “skill-biased technological change” to describe how modern technologies favour workers with higher levels of education and adaptability, potentially exacerbating income inequality. Their subsequent work on machine learning and the “modern productivity paradox” has explored why measured productivity gains have lagged behind apparent technological advances—a puzzle relevant to Dimon’s observation that some AI benefits are difficult to quantify precisely.
Daron Acemoglu at MIT has challenged technological determinism, arguing that the impact of AI on employment depends crucially on how the technology is designed and deployed. In his 2019 paper “Automation and New Tasks: How Technology Displaces and Reinstates Labor” (co-authored with Pascual Restrepo), Acemoglu distinguished between automation that merely replaces human labour and technologies that create new tasks and roles. He has advocated for “human-centric AI” that augments rather than replaces workers, and has warned that current tax structures and institutional frameworks may be biasing technological development towards excessive automation. His work directly addresses Dimon’s categorisation of AI applications: some will enhance existing jobs, others will eliminate them, and the balance between these outcomes is not predetermined.
Carl Benedikt Frey and Michael Osborne at Oxford produced a widely cited 2013 study estimating that 47 per cent of US jobs were at “high risk” of automation within two decades. Their methodology assessed the susceptibility of 702 occupations to computerisation using nine variables capturing three engineering bottlenecks: perception and manipulation, creative intelligence and social intelligence. Whilst their headline figure attracted criticism for potentially overstating the threat—since many jobs contain a mix of automatable and non-automatable tasks—their framework remains influential in assessing which roles face displacement pressure.
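Frey and Osborne’s actual model was a Gaussian process classifier trained on O*NET occupation data; the toy sketch below, with hypothetical weights and scores, only illustrates the underlying intuition. Each occupation is rated on the three bottlenecks, and a logistic function converts those ratings into an illustrative automation-risk figure: the weaker the bottlenecks, the higher the risk.

```python
import math

# Toy illustration (not Frey and Osborne's actual model): score each
# occupation on the three engineering bottlenecks,
# 0 = no barrier to automation, 1 = severe barrier.
occupations = {
    #                     perception/   creative      social
    #                     manipulation  intelligence  intelligence
    "telemarketer":           (0.1,         0.1,         0.3),
    "loan officer":           (0.1,         0.2,         0.4),
    "data scientist":         (0.2,         0.8,         0.5),
    "recreational therapist": (0.6,         0.7,         0.9),
}

weights = (2.0, 3.0, 3.0)  # hypothetical importance of each bottleneck
bias = 3.0                 # hypothetical baseline pull towards automatability

def automation_risk(scores):
    """Logistic score: higher bottleneck ratings -> lower estimated risk."""
    barrier = sum(w * s for w, s in zip(weights, scores))
    return 1.0 / (1.0 + math.exp(barrier - bias))

for job, scores in occupations.items():
    print(f"{job:>24}: {automation_risk(scores):.2f}")
```

In the original study, telemarketers received a computerisation probability of 0.99 and recreational therapists roughly 0.003; the invented weights above reproduce that ordering, nothing more.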
Richard Freeman at Harvard has explored the institutional and policy responses required to manage technological transitions, arguing that the distribution of AI’s benefits depends heavily on labour market institutions, educational systems and social policy choices. His work emphasises that historical episodes of technological transformation involved substantial political conflict and institutional adaptation, suggesting that managing AI’s impact will require deliberate policy interventions rather than passive acceptance of market outcomes.
Shoshana Zuboff at Harvard Business School has examined how digital technologies reshape not merely what work is done but how it is monitored, measured and controlled. Her concept of “surveillance capitalism” highlights how data extraction and algorithmic management may fundamentally alter the employment relationship, potentially creating new forms of workplace monitoring and performance pressure even for workers whose jobs are augmented rather than eliminated by AI.
Klaus Schwab, founder of the World Economic Forum, has framed current technological change as the “Fourth Industrial Revolution,” characterised by the fusion of technologies blurring lines between physical, digital and biological spheres. His 2016 book of the same name argues that the speed, scope and systems impact of this transformation distinguish it from previous industrial revolutions, requiring unprecedented coordination between governments, businesses and civil society.
The academic consensus, insofar as one exists, suggests that AI will indeed transform employment substantially, but that the nature and distributional consequences of this transformation remain contested and dependent on institutional choices. Dimon’s advice to avoid “putting your head in the sand” and to stay “way ahead of the curve” aligns with this literature’s emphasis on anticipatory adaptation. His commitment to retraining and redeployment echoes the policy prescriptions of economists who argue that managing technological transitions requires active human capital investment rather than passive acceptance of labour market disruption.
What distinguishes Dimon’s perspective is his position as a practitioner implementing these technologies at scale within a major institution. Whilst theorists debate aggregate employment effects and optimal policy responses, Dimon confronts the granular realities of deployment: which specific functions can be augmented versus automated, how managers adapt their decision-making processes, what training programmes prove effective, and how to balance efficiency gains against workforce morale and capability retention. His assertion that JPMorgan has achieved approximately $2 billion in quantifiable benefits from $2 billion in annual AI spending—whilst acknowledging additional unquantifiable improvements—provides an empirical data point for theories about AI’s productivity impact.
The length of JPMorgan’s AI journey also matters. Dimon’s observation that “people think it’s a new thing” but that the bank has been pursuing AI since 2012 challenges narratives of sudden disruption, instead suggesting a more gradual but accelerating transformation. This accords with the “productivity J-curve” identified by Brynjolfsson and his co-authors—the finding that the full economic benefits of transformative technologies often arrive with substantial lag as organisations learn to reconfigure processes and business models around new capabilities.
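The J-curve mechanism can be sketched numerically. In the hypothetical simulation below, a firm diverts a fifth of its workforce for five years to building intangible capital (retraining, process redesign) that standard accounting treats as pure cost; measured labour productivity first dips below its baseline of 1.0 and only later climbs well above it, tracing the J shape.

```python
# Toy sketch of the productivity J-curve (all numbers invented): a firm
# diverts part of its workforce to building intangible AI capital that
# conventional accounts record as expense rather than investment.
years = range(10)
workforce = 100.0
invest_share = 0.20          # share of labour building intangibles, years 0-4
intangible, results = 0.0, []

for t in years:
    building = workforce * (invest_share if t < 5 else 0.0)
    producing = workforce - building
    intangible += building * 0.5          # intangible capital accumulates
    output = producing * (1.0 + 0.01 * intangible)
    results.append(output / workforce)    # measured labour productivity

for t, p in zip(years, results):
    print(f"year {t}: measured productivity {p:.2f}")
```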
Ultimately, Dimon’s warning about job displacement, combined with his emphasis on staying ahead of the curve through retraining and redeployment, reflects a synthesis of Schumpeterian creative destruction, human capital theory, and practical experience managing technological change within a complex organisation. His perspective acknowledges both the inevitability of disruption and the possibility of managing transitions to benefit both institutions and workers—provided leadership acts proactively rather than reactively. For financial services professionals and business leaders more broadly, Dimon’s message is clear: AI’s impact on employment is neither hypothetical nor distant, but rather an ongoing transformation requiring immediate and sustained attention.