
“Let your agent handle the predictions, but you, as the human, must stay unpredictable. You have to live out loud at your highest vibration.” – will.i.am – Artist and CEO, FYI.AI

In an era when artificial intelligence increasingly handles data analysis, pattern recognition, and predictive modelling, will.i.am’s assertion that humans must remain unpredictable strikes at the heart of a fundamental question: what uniquely human capacities will matter most as AI systems become more capable?

will.i.am, the Grammy-winning artist, producer, and entrepreneur who founded FYI.AI, articulated this philosophy during the “When Code and Creativity Collide” session at the World Economic Forum’s 2026 annual meeting in Davos. His statement reflects a growing recognition among technology leaders and creative professionals that the future of work will not be defined by humans competing with machines on tasks of prediction and calculation, but rather by humans excelling at what machines cannot easily replicate: originality, emotional resonance, and the capacity to surprise.

The Context: AI Autonomy and Human Agency

The timing of will.i.am’s remarks is significant. At Davos 2026, the central preoccupation among technologists, policymakers, and business leaders was the question of human control as AI systems gain greater autonomy. Yuval Noah Harari, the historian and Distinguished Research Fellow at the Centre for the Study of Existential Risk, posed the essential question: “Can humans stay meaningfully in control as AI autonomy increases?” His answer was characteristically sobering: “maybe.”[1]

This uncertainty reflects a genuine inflection point. Current AI systems excel at processing vast datasets, identifying patterns, and making predictions based on historical information. They are, in essence, sophisticated extrapolation machines. Yet this very capability, the ability to predict outcomes with increasing accuracy, creates a paradox for human purpose. If machines can predict what will happen next, what role remains for human intuition, creativity, and agency?

will.i.am’s answer is deceptively simple: humans must become the variable that cannot be predicted. Rather than attempting to outthink AI at its own game, humans should lean into the one domain where unpredictability is not a flaw but a feature: the realm of creative expression, cultural innovation, and what he terms “living out loud at your highest vibration.”

The Philosophical Underpinning: Creativity as Irreducible Human Value

This perspective aligns with emerging consensus among leading AI researchers and theorists about the nature of intelligence itself. Eric Xing, President of the Mohamed Bin Zayed University of Artificial Intelligence, challenged the assumption that current AI systems represent genuine intelligence at all. “What I’m delivering is a limited form of intelligence,” he stated at Davos, emphasising that today’s large language models and neural networks deliver “a narrow, language-based capability.”[1] True progress, Xing argued, would require fundamentally new architectures, and eventually forms of physical and social intelligence: domains where human embodied experience and emotional understanding remain irreplaceable.

Yoshua Bengio, the Full Professor at the University of Montreal and one of the pioneers of deep learning, raised a complementary concern: current AI systems are trained to imitate humans too closely, including humanity’s worst tendencies. “It’s a misnomer,” he argued, “to want AI to be like us.”[1] This observation suggests that the path forward is not to make machines more human, but to allow humans to be more fully human: to embrace the qualities that distinguish human consciousness and creativity from machine learning.

Harari crystallised this insight with characteristic wit: “Human intelligence is a ridiculous analogy. AI will never be like humans, just as aeroplanes are not birds.”[1] The implication is profound. Just as aeroplanes succeeded not by mimicking bird flight but by discovering entirely different principles of aerodynamics, human value in an age of AI will not come from competing with machines on their terms, but from operating in domains where human uniqueness is the competitive advantage.

The Challenge: Disruption and Displacement

Yet will.i.am’s optimistic framing must be situated within a broader context of genuine concern about AI’s disruptive potential. Bill Gates, in his assessment of the year ahead, identified two major challenges: “use of AI by bad actors and disruption to the job market.”[2] Both are real risks that require deliberate governance and preparation.

The job market disruption is particularly acute. At Davos, the “Workers in the Driver’s Seat” session highlighted a critical tension: whilst 83 per cent of workers want to take control of their skills development and remain relevant for jobs of the future, many companies underestimate this appetite and fail to include workers meaningfully in the design of AI systems that will reshape their roles.[1] Denis Machuel, speaking at the forum, emphasised that “if we want peaceful societies, we have to ensure social cohesion” and that AI “does not happen to people”; rather, people must be involved in shaping how these systems are deployed.[1]

This is where will.i.am’s philosophy becomes not merely aspirational but practically necessary. If AI will inevitably automate many forms of predictable, routine work, then the human workforce must be equipped and encouraged to develop precisely those capacities that machines cannot easily replicate: creative problem-solving, emotional intelligence, cultural production, and the kind of originality that emerges from living authentically and at “your highest vibration.”

The Theorists: Reimagining Human Capital

The intellectual foundations for this perspective extend beyond the immediate AI debate. The concept of human capital-the idea that human skills, knowledge, and creativity are economic assets-has been central to economic theory since the work of Gary Becker in the 1960s. However, the nature of what constitutes valuable human capital is being fundamentally reconceived.

In the context of AI advancement, theorists are increasingly distinguishing between two categories of human capability: those that are automatable (routine cognitive tasks, data processing, pattern matching) and those that are not (creative synthesis, ethical judgment, emotional resonance, cultural meaning-making). The economist and policy theorist Daron Acemoglu has argued that technological progress is not inevitable or neutral; societies must make deliberate choices about which technologies to develop and deploy. The choice to develop AI systems that augment human creativity rather than simply replace human labour is a choice, not a foregone conclusion.

Similarly, the computer scientist Yejin Choi, a Professor and Senior Fellow at Stanford University who participated in the Davos AI autonomy debate, has emphasised the importance of human values and social intelligence in shaping how AI systems are designed and deployed.[1] Her work suggests that the future of human-AI collaboration depends not on humans becoming more like machines, but on machines being designed with greater sensitivity to human values, social context, and the irreducible complexity of human flourishing.

Living Out Loud: The Practical Imperative

will.i.am’s injunction to “live out loud at your highest vibration” is thus not merely motivational rhetoric. It is a strategic imperative in an economy increasingly shaped by AI. The specific, the idiosyncratic, the culturally rooted, the emotionally authentic: these become sources of competitive advantage precisely because they are difficult to systematise, predict, or automate.

This has profound implications for education, organisational culture, and economic policy. If unpredictability and authentic self-expression are valuable, then educational systems must shift from emphasising conformity and standardised performance toward cultivating individuality, creative risk-taking, and the courage to deviate from established patterns. Organisations must create space for the kind of experimentation and failure that generates genuine novelty. And policymakers must ensure that the transition to an AI-augmented economy does not simply displace workers into precarity, but actively invests in developing the creative and social capacities that will define human value.

The irony is elegant: in an age of unprecedented computational power and predictive capability, human success increasingly depends on becoming less predictable, not more. The machine learns to anticipate; the human learns to surprise. The algorithm optimises for consistency; the creative professional thrives on variation. The AI agent handles the predictions; the human handles the possibilities.

This reframing does not eliminate the genuine risks that Gates, Harari, and others have identified. But it suggests a path forward that is neither Luddite rejection of AI nor passive acceptance of technological determinism. Instead, it is an active choice to define human value not in opposition to machines, but in complementarity with them, with humans deliberately cultivating the capacities that machines cannot replicate, and machines handling the domains where they excel. In this division of labour, unpredictability is not a liability. It is the essence of what makes us human.


References

1. World Economic Forum, “Live from Davos 2026: What to know on Day 2.” https://www.weforum.org/stories/2026/01/live-from-davos-2026-what-to-know-on-day-2/

2. Bill Gates, “The Year Ahead 2026,” GatesNotes. https://www.gatesnotes.com/work/accelerate-energy-innovation/reader/the-year-ahead-2026

3. https://www.youtube.com/watch?v=QIxXp7f8Eag

4. World Economic Forum, “Davos 2026: How middle powers are reading the global moment.” https://www.weforum.org/stories/2026/01/davos-2026-how-middle-powers-are-reading-the-global-moment/

5. The Big Issue, “Mark Carney’s Davos speech.” https://www.bigissue.com/opinion/mark-carney-big-issue-davos-speech/
