
23 Jan 2026

The 2026 World Economic Forum at Davos marks a definitive shift in the global macro-environment. The speculative “noise” surrounding Artificial General Intelligence (AGI) and bubble anxieties has been replaced by a focused strategic “signal”: the industrialisation of intelligence. Leaders have realised that technology leadership is no longer a corporate elective, but the primary determinant of national power and economic security.

1. The Shift from Speculation to Pragmatism

The era of speculative fervour, characterised by abstract debates over Artificial General Intelligence and the “existential” threats of the technology, has been largely superseded by a grounded, operational focus on pragmatic implementation. While the “flashy pavilions” of major tech firms continue to capture media headlines with visions of a frictionless future, the critical work is happening in the secondary forums and technical sessions where the focus is on the “how” of deployment rather than the “what” of potentiality. This transition marks the end of the experimentation phase and the beginning of a rigorous “AI reckoning”, where return on investment, operational resilience, and the systemic integration of agentic systems are the primary metrics of success.

Central to this new paradigm is the realisation that AI success is not a binary outcome of software adoption, but a complex, multi-layered achievement. This architecture is best understood through the five-layer infrastructure stack articulated by Jensen Huang, which identifies energy as the foundational bedrock, followed by silicon, cloud infrastructure, foundational models, and the application layer. However, a critical sixth layer has emerged in the strategies of the world’s most agile economies: user readiness. While the West often treats AI adoption as a top-down technological infusion, nations such as China and the United Arab Emirates are pursuing a “diffusion strategy” that prioritises the readiness of the human component through massive educational reform and the embedding of AI into daily life.

Success is no longer measured by software pilot programmes, but by the ability to master a complex, multi-layered industrial challenge. Ground-truth examples of this pragmatic transition include:

  • The UAE’s K-12 AI Mandate: The first public school system to move beyond digital literacy into a “living curriculum” for students from Kindergarten through Grade 12.
  • China’s Diffusion Strategy: A high-velocity approach prioritising the application of AI across its manufacturing base and achieving dominance in open-source velocity.
  • The Agentic Pivot: Figures like will.i.am and Stephen Bartlett are demonstrating how AI agents serve as foundational partners in creative scaling and business operations.

Despite accelerated pragmatic use, the Global South — and indeed broader society — faces a potentially widening digital chasm. To understand why the Global South faces a potential “divide” and why certain private actors are moving with such velocity, one must first dissect the physical and logical layers that enable AI to function at scale. Jensen Huang’s conceptualisation of the AI stack provides a rigorous framework for identifying the bottlenecks that currently define global competition.

To navigate this era in a business context, executives must transition from seeing AI as a standalone tool to mastering the five-layer architectural stack on which the new economy is built — and accelerate AI diffusion across their businesses, customers, and supply chains.

In 2026, the bottom of the stack lit up red. Energy is the constraint. The demand curve for compute is colliding with ageing electrical infrastructure and slow-moving permitting systems. The emerging solution is “bring your own power”: co-locate data centres with energy sources (nuclear, hydro, fuel cells), bypass the grid bottleneck, and avoid multi-year queues. That changes where competitive advantage can sit—not just between companies, but between regions and nations. It also intensifies the financial risk. The capital being deployed is enormous—concrete, GPUs, data centres, power. But the cash flow from scaled agentic workflows is still catching up. That gap is where bubble anxieties thrive: investment is front-loaded, while monetisation lags.

Software is no longer a tool you use; it becomes something like a co-worker you manage. This drives a board-level question: what parts of the business are we comfortable delegating to automated decision-making, and under what controls?

Bottom-up adoption has hit its limit. Two years of “let a thousand flowers bloom” delivered incremental gains—often 5% to 10% efficiency improvements—but not the step-change outcomes required to justify escalating investment levels.

The argument at Davos was that real returns come from top-down workflow redesign. The example used was a loan approval process: giving a loan officer AI might save minutes; redesigning the full workflow around agents could compress approval cycles from days to minutes, with humans only stepping in for oversight and final accountability. That is not a marginal improvement. It is a different operating model.
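To make the operating-model shift concrete, here is a minimal, purely illustrative Python sketch of the pattern described above: an agent assesses each case end-to-end, and a human steps in only for oversight on high-risk exceptions. All names, the scoring formula, and the review threshold are invented for illustration, not drawn from any real lending system.

```python
from dataclasses import dataclass

@dataclass
class Application:
    applicant_id: str
    amount: float
    credit_score: int

def agent_assess(app: Application) -> dict:
    """Toy 'agent' step: score the application. A real system would call a model."""
    risk = max(0.0, min(1.0, (700 - app.credit_score) / 200 + app.amount / 1_000_000))
    return {"risk": risk}

def route(app: Application, assessment: dict, review_threshold: float = 0.6) -> str:
    """Auto-approve low-risk cases; escalate the rest to a human for accountability."""
    if assessment["risk"] < review_threshold:
        return "auto-approved"
    return "human-review"

# Low-risk case flows straight through; only exceptions reach a person.
app = Application("A-1", 25_000.0, 760)
decision = route(app, agent_assess(app))  # → "auto-approved"
```

The design point is the threshold, not the scoring: the board-level control question from above ("what are we comfortable delegating, and under what controls?") becomes an explicit, auditable parameter rather than an implicit habit.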

This is the point where AI stops being “an IT project” and becomes a restructuring programme. The promise is significant productivity; the cost is organisational disruption.

The conversation also sharpened around ownership of intelligence. The logic is simple: if you rely on generic models through an API, you are effectively renting intelligence. You gain capability, but you don’t capture the value of the firm’s tacit knowledge: the internal heuristics, “how we do things here”, and the accumulated judgment embedded in teams and culture. The risk is not abstract. If you fail to encode proprietary cognition into systems you control, you leak enterprise value to vendors and, in effect, fund the development of your future competitors.

This is why “sovereignty of the firm” emerged as a serious theme. The new moat is not software alone. It is proprietary cognition: internal knowledge and decision logic captured in models, workflows, and governance that the company owns, controls, and continuously improves.

2. The Five-Layer Stack

The 2026 summit has clarified that AI is not a monolithic event, but a “five-layer cake”, as described by NVIDIA’s Jensen Huang. For the global executive, this stack represents the new competitive theatre. Understanding where your firm or nation sits in this stack determines long-term viability.

The AI Industrial Stack

  • Layer 1 (Energy): The foundational bottleneck. China currently possesses twice the energy capacity of the US, whose ageing grids face severe “pacing” constraints.
  • Layer 2 (Chips & Systems): The hardware engine. The US remains “generations ahead”, with high-end GPUs serving as a core national security platform.
  • Layer 3 (Infrastructure): “Land, power, and shell.” This layer also includes the CUDA architecture, the proprietary software “language” that acts as a primary moat by making the hardware usable for industry-specific work.
  • Layer 4 (AI Models): The intelligence layer. A clear divide has emerged: the US maintains dominance in frontier models (GPT, Claude), while China leads in global open-source proliferation.
  • Layer 5 (Applications): The end-use layer where industry-specific transformation occurs, moving from “chat” to autonomous diagnostics, self-driving, and reasoning-based manufacturing.

While this stack provides the blueprint for innovation, the entire architecture is currently threatened by an irreconcilable conflict between digital-age demand and mechanical-age infrastructure.

3. Energy and the Infrastructure Gap

Energy has emerged as the “pacing problem” for the AI revolution. As Bloom Energy’s KR Sridhar noted, “The grid speaks English, the chips speak Japanese.” Our mechanical-age power grids are fundamentally incompatible with digital-age chips; a single AI supercomputer requires 200,000 watts, yet the throughput of the grid cannot be expanded fast enough to keep pace with demand.

Energy has transitioned from a technical utility to a primary matter of national security and the ultimate constraint on the AI revolution. The power demands of massive language model training and the subsequent “inference” required for daily operations are colliding with ageing electrical grids and a global shortage of sustainable energy sources. In the United States and Europe, the exploding demand from data centres is already impacting electricity prices and testing the limits of civil infrastructure.

Energy metrics and status (2026):

  • 93% of organisations are reducing their AI energy footprint, driving the shift towards “green AI” and more efficient hardware.
  • 65% of infrastructure sits idle while still consuming power, a significant source of operational waste and ROI drag.
  • 47% cite energy and cooling as their top inefficiency, limiting deployment before compute capacity is even reached.
  • China holds roughly 200% of the United States’ energy capacity, a foundational advantage at the lowest layer of the stack.

The disparity in energy capacity is particularly stark when comparing China to the West. Huang notes that China possesses roughly twice the energy capacity of the United States, a factor that allows for sustained growth of AI clusters without the immediate threat of grid collapse that haunts many Western municipalities. For the Global South, particularly Africa, this layer represents the most significant barrier to entry. Without a stable and affordable energy base, these regions are forced to remain consumers of AI rather than producers of the technology.

Strategic responses are shifting towards a “Nuclear Renaissance” and molecule-based energy solutions. To overcome cost overruns, the industry is adopting the “Airbus Model” of standardisation — moving away from bespoke engineering towards the industrial replication of identical units. Success here relies on a strict “No Changes” rule, where engineering teams are prohibited from altering designs once proven. Firms are increasingly bypassing the grid entirely, utilising high-efficiency fuel cells for on-site, solid-state power generation.

There is a growing infrastructure divide between the AI “haves” and “have-nots”. Without resolved access to firm energy and technology, much of the Global South — particularly Africa — faces a “Silicon Wall”. If technology leadership and energy infrastructure are not exported, the result will be a bifurcated global economy that undermines international security.

While infrastructure is the engine, people are the operators — and they are undergoing a massive Talent Asset Revaluation.

4. The Sixth Layer: User Readiness and the Diffusion Strategy

While Jensen Huang’s stack identifies the technical prerequisites for AI, pragmatic action in China and the UAE reveals a “hidden” sixth layer: user readiness. The thesis emerging from these regions is that AI success is not a function of the most powerful model, but of the most prepared population. This diffusion strategy focuses on embedding AI literacy into the fabric of society, ensuring that from the CEO to the primary school student, the logic of machine intelligence is understood and utilised.

The labour market is the most volatile component of the AI transition. To manage this, leaders must distinguish between a “task” and a “job”. In radiology, AI has automated the task of scanning, but the job — diagnosing disease and patient care — remains human-centric and enhanced.

Kristalina Georgieva of the IMF provided a stark assessment: “Is the labour market ready [for AI]? The honest answer is no. Our study shows that already in advanced economies, one in ten jobs require new skills.”

To bridge this gap, HR leaders are adopting the T-shaped model:

  1. Breadth: AI-enabled literacy across all functions.
  2. Depth: Specialised domain expertise that cannot be automated.

Strategic talent mandates now include:

  • AI Literacy as Mandatory: Following the UAE model, education must move from rote learning to critical analysis of AI bias.
  • Coding as a Durable Skill: As Andrew Ng emphasised, “everyone must code” does not mean everyone becomes an engineer; it means every professional uses AI to build their own personalised software to automate their specific workflows.
  • Meta-Cognitive Agility: Moving from perishable technical skills to the “learning to learn” imperative.
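Ng’s point about “personalised software” is worth grounding: the skill is not professional engineering but turning one’s own repetitive chore into a small script, typically drafted with AI assistance. A minimal, invented example in Python — the categories, keywords, and expense descriptions are all hypothetical:

```python
# A tiny personal automation: tagging expense descriptions by keyword.
# The rules table is the part a professional would tailor to their own workflow.
RULES = {
    "travel": ("taxi", "hotel", "flight"),
    "meals": ("lunch", "dinner", "coffee"),
}

def categorise(description: str) -> str:
    """Return the first matching category, or 'other' if nothing matches."""
    text = description.lower()
    for category, keywords in RULES.items():
        if any(keyword in text for keyword in keywords):
            return category
    return "other"

expenses = ["Taxi to airport", "Team lunch", "Stationery"]
tagged = {item: categorise(item) for item in expenses}
```

The value is not the code itself but the habit it represents: a knowledge worker who can sketch, test, and adjust a tool like this — with AI doing much of the typing — is the “AI-enabled literacy” the T-shaped model describes.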

The UAE: Care Before Cure and Primary Education as a Strategic Moat

The United Arab Emirates is demonstrating how policy can accelerate adoption.

  • Education Mandate: The UAE has become the first public school system globally to mandate an AI literacy curriculum from Kindergarten through Grade 12. This is not just technical training; it teaches students critical thinking — how to analyse AI output for bias and ethics.
  • Healthcare Infrastructure: In Abu Dhabi, the focus has shifted from “sick care” to “care before cure”. By integrating data across the entire population, they have changed screening policies (for example, breast cancer screening starting at age 30 instead of 40), achieving 85% early detection rates.

The United Arab Emirates has introduced AI literacy as a formal subject across all grades, from Kindergarten to Grade 12, starting in the 2025–2026 academic year. This is not a superficial “coding” class; it is a comprehensive curriculum designed around seven key pillars intended to produce “AI-native” citizens.

UAE AI curriculum pillars (K–12):

  • Fundamental Concepts: the basic logic of machine learning, demystifying the “black box” of AI.
  • Data and Algorithms: how information is processed, understanding the “ingredients” of intelligence.
  • Software Usage: hands-on tool integration for practical productivity in daily life.
  • Ethical Awareness: bias, privacy, and safety for responsible deployment and governance.
  • Real-World Applications: industry case studies that bridge the gap between school and the labour market.
  • Innovation and Design: project-based learning focused on creating rather than just consuming AI.
  • Policy and Engagement: community and global impact, understanding the broader societal shift.

The UAE’s pragmatism is further evidenced by its training of 1,000 teachers to deliver this curriculum, ensuring that the human infrastructure of the education system is as modern as its digital infrastructure. By making AI a core literacy, the UAE is preparing its youth for an economy where machine collaboration is as fundamental as reading or mathematics.

China: Scaling AI through National Digital Infrastructure

China has explicitly shifted its focus from AGI research to a diffusion strategy known as “AI Plus”. The objective is not just high-level compute, but the penetration of AI into the real economy.

  • The Metric: The government has set a goal for the diffusion rate of AI agents and intelligent terminals to exceed 70% by 2027 and 90% by 2030.
  • Education: China is embedding AI literacy starting in primary schools, moving beyond simple usage to integrating thousands of AI agents within universities to assist professors and students.
  • Integration: Industry leaders like Tencent are seeing AI integrated into every function, from automated coding to drug discovery, driving tangible ROI rather than speculative value.

China’s approach to user readiness is defined by its massive scale. The “Smart Education of China” platform serves over 178 million users, making it the world’s largest centre for high-quality educational resources. China has integrated AI literacy as a requirement across all majors at leading institutions like Fudan University, ensuring that even those in the humanities or social sciences are AI-ready.

This diffusion is not limited to education. China’s “East Data, West Computing” initiative treats AI compute as a public utility, similar to water or electricity. By lowering the barriers to entry for SMEs and start-ups, China is fostering a self-sustaining AI infrastructure that integrates into industrial vision, search, advertising, and customer service. This strategy aims for full-stack independence, moving from IP and EDA tools to cloud services and international software standards that rival the West’s CUDA ecosystem.

5. Corporate Pragmatism: The Self-Disruption Mandate

In the private sector, the focus has shifted from “how can AI help us” to “how will AI destroy us if we do not act”. This is the essence of pragmatic action highlighted by Uber and figures like Stephen Bartlett and will.i.am.

Uber: Rewiring the Organisation, from Veneer to Core

The lesson from Davos 2026 is that “play-acting” transformation is no longer sufficient. Uber CEO Dara Khosrowshahi noted the difference between companies adding an “AI veneer” versus those fundamentally rethinking business logic. Uber is not just summarising meetings; they are rebuilding customer service from the ground up, moving from rigid policies to AI agents capable of reasoning through customer problems to find fair solutions.

Saudi Aramco demonstrates the scale of this shift in the industrial sector. By moving from 400 pilots to 100 fully deployed scale cases, they have generated billions in “Technology Realised Value”, using AI to increase well productivity by 30–40% and combat the $3 trillion global corrosion problem.

Stephen Bartlett and the Flight Story Experiment

Entrepreneur Stephen Bartlett offered a masterclass in the experimentation mindset. He has established a “Head of Experimentation and Failure” within his company, Flight Story, with the explicit goal of using AI to kill his current business model before competitors do. This is a radical form of pragmatism that acknowledges the agentic leap — where AI agents take over the core processes of an organisation. In Bartlett’s view, if a process can be automated by an agentic system, it should be disrupted from within to capture the productivity gains and reinvest them into high-value human creativity.

  • Execution: Bartlett used AI translation to grow his Spanish-speaking audience to 28% of total listenership in just 18 months.
  • Prediction: He utilises LLMs to predict audience retention on videos before publishing, achieving 80% predictive accuracy on where viewers will drop off.

will.i.am: The Pivot to AI Music Creation

The creative sector, often the most resistant to AI, is seeing pragmatic leaders like will.i.am embrace the technology as a partner rather than a replacement. By pivoting to AI music creation, he is utilising machine intelligence to expand the boundaries of production and personalisation. This mirrors the broader trend in the knowledge sector, where AI is used to enhance human output — such as research analysts and translators — rather than simply replacing them. The goal is a hybrid human–AI team where AI agents are incorporated as formal members of the organisation with defined responsibilities and performance metrics.

  • Bio-Music: He envisions a future where AI composes music based on cellular vibration and DNA, moving beyond recorded audio to biological expression.
  • The Human Moat: His thesis is that while AI can predict algorithms, humans must remain unpredictable. “Let your agentic self be awesome with predictions. But then you as the human be unpredictable.”

6. The Hidden Costs of Acceleration

Ethical considerations are no longer decorations; they are fundamental risks to societal stability and private security.

  • The Triple Threat: Leaders must mitigate the gender gap (exacerbated by male dominance in AI roles), the mental health crisis (linked to hyper-engaging apps), and cognitive atrophy (the loss of human agency and critical friction due to outsourcing thought).
  • Cyber-security existential risk: A primary Davos signal is the emergence of AI agents as targeted malware. Integrated into mobile operating systems, these agents can gain root permission, allowing them to read screen data at a pixel level. This bypasses the encryption of private systems like Signal, posing a direct threat to the integrity of private communication.

This conflict between operating system vendors and communication privacy represents a new frontier of corporate and individual risk.

7. The Labour Market Tsunami: Georgieva’s Warning

While the pragmatic action of the UAE and China suggests a path towards success, Kristalina Georgieva of the IMF provides a sobering counter-narrative regarding the global labour market. She describes the arrival of AI as a “tsunami hitting the labour market”, with 40% of global jobs exposed to disruption.

“We assess that 40% of jobs globally are going to be impacted by AI over the next couple of years – either enhanced, eliminated, or transformed. In advanced economies, it’s 60%.” – Kristalina Georgieva, Managing Director, IMF

The labour tsunami: a shock aimed at knowledge workers

The IMF’s framing of a “labour tsunami” captured the seriousness: a large share of jobs globally are likely to be affected, with the impact concentrated in advanced economies.

The key insight was not “AI will take jobs” (a familiar argument), but how it will distort career structures. The idea introduced was the “cognitive waterline”: the minimum cognitive skill required to add economic value. For decades, technology displaced physical work and pushed humans up into cognitive tasks. Now the displacement is inside the cognitive domain itself. As AI becomes capable of junior analyst and junior coder work, the waterline rises above the entry level. This does not just shift roles; it threatens to erase the traditional first rung of the corporate ladder.

That creates a structural paradox. If AI does the grunt work, how do people develop into senior leaders? Professions built on apprenticeship models—law, accounting, consulting—depend on early-career exposure to volume work to build judgment. If hiring juniors becomes economically irrational, organisations risk creating a future leadership vacuum: partners with no pipeline.

There were two competing prescriptions. One view is that “everyone must be able to build with AI” because the productivity gap between builders and non-builders will become impossible to bridge. Another view is that the safest roles are those anchored in human presence, empathy, complex judgment, and physical reality—care, hospitality, leadership—while roles dominated by words and numbers face direct exposure. Either way, the conclusion is the same: the labour market is being repriced, and the adjustment will be painful.

The IMF’s latest analysis reveals a staggering demand for new skills: one in ten job postings in advanced economies now requires at least one new AI-related skill. This demand is most acute in professional, technical, and managerial roles.

  • Advanced economies: 1 in 10 job postings requires new AI-related skills; employment in AI-vulnerable roles is 3.6% lower in high-demand regions.
  • Emerging economies: 1 in 20 job postings; the picture is mixed, with displacement risk set against leapfrog potential.

The pragmatism here is visible in the wage market. In the UK and US, job postings that include a new skill pay 3% more, while those requiring four or more new skills pay up to 15% more. However, Georgieva warns that this creates a hollowing out of entry-level positions. Generative AI adoption is reducing entry-level hiring because the tasks traditionally used to train young employees — such as data entry, basic analysis, and administrative support — are now being handled by AI. This creates a “lost generation” risk: young people are expected to arrive AI-ready but have fewer opportunities to learn on the job.

8. The Global South Divide: Africa and the Missing Foundations

AI advantage is increasingly tied to access to energy and compute. That means opportunities expand for countries that can host and power large-scale models, while contracting for those that cannot. In effect, the ability to produce “intelligence at scale” becomes a new economic dividing line.

The Sovereignty Dilemma

Many African nations are currently demanding data sovereignty — the requirement that citizens’ data be stored and processed locally. However, without local data centre capacity (Layer 3) or a reliable energy grid (Layer 1), these regulations often stall AI adoption entirely. In the GCC, the situation is different; countries have the capital and energy to build local clouds in partnership with hyperscalers like AWS, allowing them to maintain sovereignty while accessing global-scale technology.

For most of Africa, a pragmatic path is required. Instead of trying to build the entire five-layer stack, these economies must identify specific pathways to competitiveness.

Pathways for the Global South:

  • Selective Players: industry-specific AI, applied to local challenges such as agri-food or mining.
  • Adoption Accelerators: workforce readiness, rapidly upskilling to provide AI-enabled services to the world.
  • Emerging Collaborators: sovereignty via partnerships, using public clouds with regional data residency.

Case studies from the WEF MINDS initiative show that success is possible even in resource-constrained settings. For example, Tech Mahindra has developed multilingual AI models supporting Hindi and Bahasa, which power 3.8 million monthly queries for citizen services with 92% accuracy — outperforming generic global models. Similarly, Cambridge Industries is using AI drones and smartphones to monitor construction safety and road conditions in Africa, resulting in a 50% reduction in emergency repair costs.

Digital Embassies

The proposed mitigation was the concept of a “digital embassy”: hosting a nation’s data and models in a foreign data centre while retaining legal sovereignty, similar to diplomatic premises. It is a legal solution to a physical constraint, aimed at keeping smaller or grid-constrained nations in the game.

Meanwhile, the major powers are in a different contest. The chip embargo against China was described as a primary constraint on China’s frontier capability. China’s counter-strategy was framed as “ruthless adoption”: if you cannot win on the newest hardware, win by saturating real-world applications—factories, logistics, retail, infrastructure—creating dense feedback loops that drive learning and productivity even with weaker silicon. The result is an asymmetric race: the West leads in chips, the East may lead in integration.

The broader trajectory is away from one global “God model” and toward multiple sovereign models shaped by local language, culture, law, and values.

The tremendous amount of pragmatic action is unevenly distributed, creating a deepening divide for regions like Africa. The challenge for the Global South is that it is being forced to navigate a high-tech revolution while still lacking the foundational layers of the Huang stack — specifically energy and basic digital connectivity.

Rwanda: The 4×4 Strategy

Rwanda provides a case study in using AI to solve chronic shortages. Lacking natural resources, the country has pivoted to a technology-first strategy.

Horizon 1000 is a partnership with OpenAI and the Gates Foundation aimed at integrating AI into 1,000 primary healthcare clinics. This is not about replacing doctors, but automating the administrative burden to allow the “4×4 strategy” (quadrupling the healthcare workforce in four years) to succeed.

9. Negative Externalities: The Gender Gap and Mental Health Risk

Pragmatic use of AI is not a universal good; it carries specific risks that could worsen existing societal gaps. The World Economic Forum and the IMF both highlight the potential for AI to accelerate the gender gap and exacerbate mental health issues.

The AI Gender Gap

AI-driven automation is displacing jobs in sectors where women constitute a significant portion of the workforce, such as administrative, clerical, and service roles. Furthermore, if sex differences are not accurately captured in the data used to train AI, resulting models can perpetuate bias in healthcare and hiring.

  • Healthcare: AI platforms like Data42 improve R&D for women’s health, but diagnostic tools trained on male-centric data create “blind spots”.
  • Labour Market: AI-driven hiring tools can anonymise applications to reduce bias, yet female-heavy clerical and service roles face displacement.
  • Daily Life: smart automation can reduce the unpaid labour burden, while deepfake-based harassment targeting women and girls is on the rise.

Mental Health and Cognitive Atrophy

The mental health implications of the AI revolution are a growing concern for leaders in Davos. As people develop real and meaningful relationships with AI companions, the lines between human and machine interaction are blurring. While AI-driven mental health tools can provide personalised care and healthy habit support, they also contribute to tech-paranoia and anxiety about job security.

A recent MIT study highlights a more subtle risk: the potential for diminished memory retention and less original thinking due to over-reliance on AI. For the younger generation, 41% report feeling anxious about the technology, and nearly half worry it will harm their ability to think critically. The human advantage in 2026 will belong to those who can maintain cognitive agility and critical judgement in a world where AI agents are doing the heavy lifting.

10. Corporate AI sovereignty

Corporate AI sovereignty, as Satya Nadella framed it, is the strategic discipline of ensuring a firm owns and governs its intelligence layer rather than renting it by the API call. The point is not political sovereignty in the national sense, but enterprise value protection: the tacit knowledge that makes a business distinctive – its heuristics, decision rules, customer context, risk appetite, and operating logic – must be captured and embedded into models, data systems, and workflows the firm controls. If AI becomes a core “co-worker” running processes end-to-end, then outsourcing cognition to generic third-party models turns into value leakage: you pay to use intelligence, but you fail to compound it inside your own organisation, and you steadily commoditise yourself into a thin wrapper around someone else’s brain. In that world, the defensible advantage is not the software licence; it is proprietary cognition under your control, with clear governance over what it knows, how it reasons, how it acts, and who remains accountable.

11. Conclusion: The Strategic Roadmap for 2026 and Beyond

Davos 2026 clarified the direction of travel. AI is no longer a “technology theme” running alongside the business. It is becoming the operating system of competitive advantage – and, increasingly, of national resilience. The debate has moved on from speculative questions about AGI and into the hard mechanics of industrialisation: where intelligence is produced, how it is deployed safely at scale, and who captures the economic value.

For corporate leaders, the message is pragmatic and uncomfortable. Competitive advantage is shifting away from isolated pilots and towards full-stack execution. Jensen Huang’s five-layer stack is now the right mental model: energy and silicon are no longer “someone else’s problem”; they are upstream constraints that determine downstream strategy. The next decade will reward firms that treat infrastructure, compute access, and platform design as board-level matters, not IT procurement.

At the same time, the differentiator is increasingly human readiness. The most accelerated economies are not winning purely by model quality; they are winning by diffusion – embedding AI into education, workflows, and daily operations so the population and workforce compound capability faster than competitors. For companies, this translates directly into organisational speed: your ability to redesign workflows, reskill people, and govern agentic systems will matter more than your choice of vendor.

The most immediate corporate shift is from “AI as a tool” to “AI as a co-worker”. Agentic systems will increasingly run workflows end-to-end, with humans moving from execution to oversight, exception handling, and accountability. That changes operating models, risk posture, and workforce design in one move. It also forces a hard rethink of entry-level talent: if AI absorbs the repetitive cognitive work, firms must rebuild apprenticeship paths that develop judgement, context, and decision-making much earlier than before.

Finally, the sovereignty question is now commercial, not philosophical. If AI becomes a core production input, renting generic intelligence through APIs without capturing your own tacit knowledge is a direct leak of enterprise value. The firms that win will be those that treat “corporate AI sovereignty” as a strategic asset: proprietary cognition embedded into governed platforms the firm controls, improves, and defends.

Three practical messages for CEOs and boards

1) Move from pilots to platform.
The experimentation era is over. Incremental productivity gains will not fund the next phase. The priority is an enterprise intelligence platform that can scale agentic workflows securely, repeatedly, and measurably – with clear governance, auditability, and accountability.

2) Redesign the organisation for agentic workflows.
If software is becoming an operational co-worker, your operating model must change. That means top-down workflow redesign, not bottom-up “usage”. It also means rebuilding role design, controls, and performance management around human oversight and exception handling – not routine execution.

3) Prepare for the transition pain.
The macro paradox is real: higher productivity and growth can coexist with workforce dislocation and a thinning entry-level pipeline. Firms that plan proactively – through redesigned apprenticeship models, AI-enabled learning, and deliberate redeployment – will avoid capability gaps that become existential in five years.

Strategic mandates for 2026

  • Secure the stack: Treat energy, compute access, and infrastructure constraints as strategic dependencies, not utilities. Your AI roadmap is only as credible as your ability to power and run it at scale.

  • Own the intelligence layer: Build corporate AI sovereignty by capturing tacit knowledge into governed workflows, data systems, and models you control. Do not commoditise the firm into a wrapper around external cognition.

  • Go beyond the “AI veneer”: Measure transformation by cycle-time reduction, automated throughput, and decision-quality improvement – not by tool adoption metrics.

  • Rebuild the talent ladder: Replace “grunt work as training” with apprenticeship models that teach judgement, problem-solving, and client context from day one, supported by AI.

  • Strengthen governance and resilience: Agentic systems increase speed and scale – they also increase operational risk. Controls, accountability, and cyber resilience must move upstream into design.

The central point is simple: this is a shift from experimentation to accountability. The winners will be those with the courage to standardise, the discipline to redesign workflows end-to-end, and the foresight to protect enterprise value by owning their intelligence layer. In the industrialisation of intelligence, speed matters – but governance and sovereignty decide who keeps the upside.

Global Advisors | Quantified Strategy Consulting