
Global Advisors | Quantified Strategy Consulting

Quote: Fei-Fei Li – Godmother of AI

“In the AI age, trust cannot be outsourced to machines. Trust is fundamentally human. It’s at the individual level, community level, and societal level.” – Fei-Fei Li – Godmother of AI

The Quote and Its Significance

This statement encapsulates a profound philosophical stance on artificial intelligence that challenges the prevailing techno-optimism of our era. Rather than viewing AI as a solution to human problems, including the problem of trust itself, Fei-Fei Li argues for the irreducible human dimension of trust. In an age where algorithms increasingly mediate our decisions, relationships, and institutions, her words serve as a clarion call: trust remains fundamentally a human endeavour, one that cannot be delegated to machines, regardless of their sophistication.

Who Is Fei-Fei Li?

Fei-Fei Li stands as one of the most influential voices in artificial intelligence research and ethics today. As co-director of Stanford’s Institute for Human-Centered Artificial Intelligence (HAI), founded in 2019, she has dedicated her career to ensuring that AI development serves humanity rather than diminishes it. Her influence extends far beyond academia: she was appointed to the United Nations Scientific Advisory Board, named one of TIME’s 100 Most Influential People in AI, and has held leadership roles at Google Cloud and Twitter.

Li’s most celebrated contribution to AI research is the creation of ImageNet, a monumental dataset that catalysed the deep learning revolution. This achievement alone would secure her place in technological history, yet her impact extends into the ethical and philosophical dimensions of AI development. In 2024, she co-founded World Labs, an AI startup focused on spatial intelligence systems designed to augment human capability – a venture that raised $230 million and exemplifies her commitment to innovation grounded in ethical principles.

Beyond her technical credentials, Li co-founded AI4ALL, a non-profit organisation dedicated to promoting diversity and inclusion in the AI sector, reflecting her conviction that AI’s future must be shaped by diverse voices and perspectives.

The Core Philosophy: Human-Centred AI

Li’s assertion about trust emerges from a broader philosophical framework that she terms human-centred artificial intelligence. This approach fundamentally rejects the notion that machines should replace human judgment, particularly in domains where human dignity, autonomy, and values are at stake.

In her public statements, Li has articulated a concern that resonates throughout her work: the language we use about AI shapes how we develop and deploy it. She has expressed deep discomfort with the word “replace” when discussing AI’s relationship to human labour and capability. Instead, she advocates for framing AI as augmenting or enhancing human abilities rather than supplanting them. This linguistic shift reflects a philosophical commitment: AI should amplify human creativity and ingenuity, not reduce humans to mere task-performers.

Her reasoning is both biological and existential. As she has explained, humans are slower runners, weaker lifters, and less capable calculators than machines, yet “we are so much more than those narrow tasks.” To allow AI to define human value solely through metrics of speed, strength, or computational power is to fundamentally misunderstand what makes us human. Dignity, creativity, moral judgment, and relational capacity cannot be outsourced to algorithms.

The Trust Question in Context

Li’s statement about trust addresses a critical vulnerability in contemporary society. As AI systems increasingly mediate consequential decisions – from healthcare diagnoses to criminal sentencing, from hiring decisions to financial lending – society faces a temptation to treat these systems as neutral arbiters. The appeal is understandable: machines do not harbour conscious bias, do not tire, and can process vast datasets instantaneously.

Yet Li’s insight cuts to the heart of a fundamental misconception. Trust, in her formulation, is not merely a technical problem to be solved through better algorithms or more transparent systems. Trust is a social and moral phenomenon that exists at three irreducible levels:

  • Individual level: The personal relationships and judgments we make about whether to rely on another person or institution
  • Community level: The shared norms and reciprocal commitments that bind groups together
  • Societal level: The institutional frameworks and collective agreements that enable large-scale cooperation

Each of these levels involves human agency, accountability, and the capacity to be wronged. A machine cannot be held morally responsible; a human can. A machine cannot understand the context of a community’s values; a human can. A machine cannot participate in the democratic deliberation necessary to shape societal institutions; a human must.

Leading Theorists and Related Intellectual Traditions

Li’s thinking draws upon and contributes to several important intellectual traditions in philosophy, ethics, and social theory:

Human Dignity and Kantian Ethics

At the philosophical foundation of Li’s work lies a commitment to human dignity – the idea that humans possess intrinsic worth that cannot be reduced to instrumental value. This echoes Immanuel Kant’s categorical imperative: humans must never be treated merely as means to an end, but always also as ends in themselves. When AI systems reduce human workers to optimisable tasks, or when algorithmic systems treat individuals as data points rather than moral agents, they violate this fundamental principle. Li’s insistence that “if AI applications take away that sense of dignity, there’s something wrong” is fundamentally Kantian in its ethical architecture.

Feminist Technology Studies and Care Ethics

Li’s emphasis on relationships, context, and the irreducibility of human judgment aligns with feminist critiques of technology that emphasise care, interdependence, and situated knowledge. Scholars in this tradition, including Donna Haraway, Lucy Suchman, and Safiya Noble, have long argued that technology is never neutral and that the pretence of objectivity often masks particular power relations. Li’s work similarly insists that AI development must be grounded in explicit values and ethical commitments rather than presented as value-neutral problem-solving.

Social Epistemology and Trust

The philosophical study of trust has been enriched in recent decades by work in social epistemology-the study of how knowledge is produced and validated collectively. Philosophers such as Miranda Fricker have examined how trust is distributed unequally across society, and how epistemic injustice occurs when certain voices are systematically discredited. Li’s emphasis on trust at the community and societal levels reflects this sophisticated understanding: trust is not a technical property but a social achievement that depends on fair representation, accountability, and recognition of diverse forms of knowledge.

The Ethics of Artificial Intelligence

Li contributes to and helps shape the emerging field of AI ethics, which includes thinkers such as Stuart Russell, Timnit Gebru, and Kate Crawford. These scholars have collectively argued that AI development cannot be separated from questions of power, justice, and human flourishing. Russell’s work on value alignment – ensuring that AI systems pursue goals aligned with human values – provides a technical framework for the philosophical commitments Li articulates. Gebru and Crawford’s work on data justice and algorithmic bias demonstrates how AI systems can perpetuate and amplify existing inequalities, reinforcing Li’s conviction that human oversight and ethical deliberation remain essential.

The Philosophy of Technology

Li’s thinking also engages with classical philosophy of technology, particularly the work of thinkers like Don Ihde and Peter-Paul Verbeek, who have argued that technologies are never mere tools but rather reshape human practices, relationships, and possibilities. The question is not whether AI will change society (it will) but whether that change will be guided by human values or will instead impose its own logic upon us. Li’s advocacy for light-handed, informed regulation rather than heavy-handed top-down control reflects a nuanced understanding that technology development requires active human governance, not passive acceptance.

The Broader Context: AI’s Transformative Power

Li’s emphasis on trust must be understood against the backdrop of AI’s extraordinary transformative potential. She has stated that she believes “our civilisation stands on the cusp of a technological revolution with the power to reshape life as we know it.” Some experts, including AI researcher Kai-Fu Lee, have argued that AI will change the world more profoundly than electricity itself.

This is not hyperbole. AI systems are already reshaping healthcare, scientific research, education, employment, and governance. Deep neural networks have demonstrated capabilities that surprise even their creators, as exemplified by AlphaGo’s unexpected moves in the ancient game of Go, which violated centuries of human strategic wisdom yet proved devastatingly effective. These systems excel at recognising patterns that humans cannot perceive, at scales and speeds beyond human comprehension.

Yet this very power makes Li’s insistence on human trust more urgent, not less. Precisely because AI is so powerful, precisely because it operates according to logics we cannot fully understand, we cannot afford to outsource trust to it. Instead, we must maintain human oversight, human accountability, and human judgment at every level where AI affects human lives and communities.

The Challenge Ahead

Li frames the challenge before us as fundamentally moral rather than merely technical. Engineers can build more transparent algorithms; ethicists can articulate principles; regulators can establish guardrails. But none of these measures can substitute for the hard work of building trust: at the individual level through honest communication and demonstrated reliability, at the community level through inclusive deliberation and shared commitment to common values, and at the societal level through democratic institutions that remain responsive to human needs and aspirations.

Her vision is neither techno-pessimistic nor naïvely optimistic. She does not counsel fear or rejection of AI. Rather, she advocates for what she calls “very light-handed and informed regulation”: guardrails rather than prohibition, guidance rather than paralysis. But these guardrails must be erected by humans, for humans, in service of human flourishing.

In an era when trust in institutions has eroded – when confidence in higher education, government, and media has declined precipitously – Li’s message carries particular weight. She acknowledges the legitimate concerns about institutional trustworthiness, yet argues that the solution is not to replace human institutions with algorithmic ones, but rather to rebuild human institutions on foundations of genuine accountability, transparency, and commitment to human dignity.

Conclusion: Trust as a Human Responsibility

Fei-Fei Li’s statement that “trust cannot be outsourced to machines” is ultimately a statement about human responsibility. In the age of artificial intelligence, we face a choice: we can attempt to engineer our way out of the messy, difficult work of building and maintaining trust, or we can recognise that trust is precisely the work that remains irreducibly human. Li’s life’s work – from ImageNet to the Stanford HAI Institute to World Labs – represents a sustained commitment to the latter path. She insists that we can harness AI’s extraordinary power whilst preserving what makes us human: our capacity for judgment, our commitment to dignity, and our ability to trust one another.



Quote: Henry Joseph-Grant – Just-Eat founder

“Ultimately an investment is an instrument of trust as much as it is of belief. Every single part of your strategy is showing you’re accountable and understand your responsibility with that. Take ownership.” – Henry Joseph-Grant – Just-Eat founder

Henry Joseph-Grant is widely recognised as a leading figure in the tech entrepreneurship and investment space. His career exemplifies the journey from humble beginnings to major influence across international markets. Raised in Northern Ireland, Joseph-Grant studied Arabic at the University of Westminster, which equipped him for the global business landscape, notably in his advisory work in Dubai. He began working early, starting as a paperboy at 11 and moving through various sales roles before a pivotal tenure with Virgin.

His operational calibre was cemented by his contribution to scaling JUST EAT from its UK startup phase to its landmark IPO, which resulted in a £5.25bn market capitalisation. He subsequently founded The Entertainer in partnership with Abraaj Capital, and has held senior leadership roles (Director, VP, C-level) at disruptive technology firms.

Henry’s perspective is shaped by deep, hands-on engagement: navigating companies through crises, managing dramatic operational turnarounds, and leading restructuring efforts during economic shocks such as the pandemic. His experience includes acting as an angel investor, mentoring CEOs (at Seedcamp, Pitch@Palace, PiLabs) and judging major entrepreneur competitions including Richard Branson’s VOOM Pitch to Rich. Recognised among the top 25 UK entrepreneurs by Smith & Williamson, Henry is committed to fostering new generations of innovators and business leaders.

Context of the Quote

The quote captures Joseph-Grant’s core philosophy: in both entrepreneurship and investment, trust is as fundamental as belief or analytical conviction. Strategy is not simply a matter of tactics; it is a public demonstration of accountability and stewardship for others’ capital—be that from shareholders, employees, or the wider community. Trust is built through transparent, consistent ownership of outcomes, both positive and negative. This philosophy became especially salient in his leadership during industry crises, where he led teams through abrupt, challenging change, instilling a culture of responsibility and resilience.

Relevant Theorists and Thought Leaders

Joseph-Grant’s worldview aligns with and extends a body of thinking on trust, accountability, and stewardship within investment and leadership circles:

  • Peter L. Bernstein (1919-2009), author of “Against the Gods: The Remarkable Story of Risk”, argued that all investment is a decision under uncertainty, underpinned by belief and the trustworthiness of those managing risk and capital. Bernstein traced the intellectual roots of taking and managing risk back to early insurance and probability theory, highlighting the psychological dimensions of trust inherent in capital allocation.

  • Warren Buffett, considered the most successful investor of the modern era, has consistently emphasised the interplay between trust, character, and performance in capital deployment. His letters to Berkshire Hathaway shareholders stress that he seeks partners and managers who will act as if all company actions are subject to public scrutiny—a direct echo of Joseph-Grant’s call for ownership and accountability.

  • Michael C. Jensen (emeritus professor, Harvard Business School) and William H. Meckling pioneered the concept of agency theory, which analyses the relationship between principals (investors) and agents (managers). Their analysis showed how trust and proper alignment of incentives are essential to guarding against opportunism and ensuring responsible stewardship.

  • Charles Handy, the UK management thinker, championed the “trust economy”, where intangible trust stocks often surpass formal contracts in their influence over business outcomes. Handy’s reflections on responsibility-through-action parallel Joseph-Grant’s insistence that strategy is not just a plan, but an ongoing display of stewardship.

  • Annette Mikes and Robert S. Kaplan (Harvard Business School) have explored risk leadership, demonstrating that trust is central to effective risk management; without authentic ownership from the top, frameworks fail.


Each of these theorists recognised that trust is not a soft attribute but a measurable, actionable asset, and that its absence carries material risk. Joseph-Grant’s phrasing highlights the imperative for every leader, founder, and investor: “take ownership” is not a cliché, but a competitive advantage and an ethical responsibility.

Summary of Influence

The philosophy embedded in the quote is founded on Joseph-Grant’s lived experience, informed by crisis-tested leadership across markets and sectors. It reflects a broader intellectual tradition where trust, strategic clarity, and personal accountability are the cornerstones of sustainable investment and entrepreneurship. The challenge—and opportunity—posed is clear: in today’s interconnected, high-stakes environment, belief and trust are inseparable from value creation. Success follows when leaders are visibly accountable for the trust placed in them, at every level of the strategy.

Quote: Ralph Waldo Emerson

“The glory of friendship is not the outstretched hand, not the kindly smile, nor the joy of companionship; it is the spiritual inspiration that comes to one when you discover that someone else believes in you and is willing to trust you with a friendship.”

Ralph Waldo Emerson
