“Just imagine if your firm is not able to embed the tacit knowledge of the firm in a set of weights in a model that you control… you’re leaking enterprise value to some model company somewhere.” – Satya Nadella, CEO, Microsoft
Satya Nadella’s assertion about enterprise sovereignty represents a fundamental reorientation in how organisations must think about artificial intelligence strategy. Speaking at the World Economic Forum in Davos in January 2026, the Microsoft CEO articulated a principle that challenges conventional wisdom about data protection and corporate control in the AI age. His argument centres on a deceptively simple but profound distinction: the location of data centres matters far less than the ability of a firm to encode its unique organisational knowledge into AI models it owns and controls.
The Context of Nadella’s Intervention
Nadella’s remarks emerged during a high-profile conversation with Laurence Fink, CEO of BlackRock, at the 56th Annual Meeting of the World Economic Forum. The discussion occurred against a backdrop of mounting concern about whether the artificial intelligence boom represents genuine technological transformation or speculative excess. Nadella framed the stakes explicitly: “For this not to be a bubble, by definition, it requires that the benefits of this are much more evenly spread.” The conversation with Fink, one of the world’s most influential voices on capital allocation and corporate governance, provided a platform for Nadella to articulate what he termed “the topic that’s least talked about, but I feel will be most talked about in this calendar year”: the question of firm sovereignty in an AI-driven economy.
The timing of this intervention proved significant. By early 2026, the initial euphoria surrounding large language models and generative AI had begun to encounter practical constraints. Organisations worldwide were grappling with the challenge of translating AI capabilities into measurable business outcomes. Nadella’s contribution shifted the conversation from infrastructure and model capability to something more fundamental: the strategic imperative of organisational control over AI systems that encode proprietary knowledge.
Understanding Tacit Knowledge and Enterprise Value
Central to Nadella’s argument is the concept of tacit knowledge: the accumulated, often uncodified understanding that emerges from how people work together within an organisation. This includes the informal processes, institutional memory, decision-making heuristics, and domain expertise that distinguish one firm from another. Nadella explained this concept by reference to what firms fundamentally do: “it’s all about the tacit knowledge we have by working as people in various departments and moving paper and information.”
The critical insight is that this tacit knowledge represents genuine competitive advantage. When a firm fails to embed this knowledge into AI models it controls, that advantage leaks away. Instead of strengthening the organisation’s position, the firm becomes dependent on external model providers, what Nadella termed “leaking enterprise value to some model company somewhere.” This dependency creates a structural vulnerability: the organisation’s competitive differentiation becomes hostage to the capabilities and pricing decisions of third-party AI vendors.
Nadella’s framing inverts the conventional hierarchy of concerns about AI governance. Policymakers and corporate security teams have traditionally prioritised data sovereignty-ensuring that sensitive information remains within national or corporate boundaries. Nadella argues this focus misses the more consequential question. The physical location of data centres, he stated bluntly, is “the least important thing.” What matters is whether the firm possesses the capability to translate its distinctive knowledge into proprietary AI models.
The Structural Transformation of Information Flow
Nadella’s argument gains force when situated within his broader analysis of how AI fundamentally restructures organisations. He described AI as creating “a complete inversion of how information is flowing in the organisation.” Traditional corporate hierarchies operate through vertical information flows: data and insights move upward through departments and specialisations, where senior leaders synthesise information and make decisions that cascade downward.
AI disrupts this architecture. When knowledge workers gain access to what Nadella calls “infinite minds” (the ability to tap into vast computational reasoning power), information flows become horizontal and distributed. This flattening of hierarchies creates both opportunity and risk. The opportunity lies in accelerated decision-making and the democratisation of analytical capability. The risk emerges when organisations fail to adapt their structures and processes to this new reality. More critically, if firms cannot embed their distinctive knowledge into models they control, they lose the ability to shape how this new information flow operates within their own context.
This structural transformation explains why Nadella emphasises what he calls “context engineering.” The intelligence layer of any AI system, he argues, “is only as good as the context you give it.” Organisations must learn to feed their proprietary knowledge, decision frameworks, and domain expertise into AI systems in ways that amplify rather than replace human judgment. This requires not merely deploying off-the-shelf models but developing the organisational capability to customise and control AI systems around their specific knowledge base.
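Nadella names the idea of context engineering but not an implementation. As one minimal, hypothetical sketch of what it can look like in practice, the snippet below ranks a firm’s internal notes against a query by naive keyword overlap and packs the best matches into a bounded context window before they would be handed to a model; the `build_context` helper, the notes, and the ranking scheme are all illustrative assumptions, not anything from the source.

```python
import re

def _terms(text: str) -> set[str]:
    """Lowercase a text and extract its alphanumeric tokens."""
    return set(re.findall(r"[a-z0-9%]+", text.lower()))

def build_context(query: str, documents: list[str], max_chars: int = 500) -> str:
    """Rank documents by keyword overlap with the query, then pack the
    best matches into a character-bounded context window for a prompt."""
    query_terms = _terms(query)
    # Stable sort: ties keep their original order.
    ranked = sorted(documents, key=lambda d: len(query_terms & _terms(d)), reverse=True)
    context: list[str] = []
    used = 0
    for doc in ranked:
        if used + len(doc) > max_chars:
            break  # window full; drop the remaining, lower-ranked notes
        context.append(doc)
        used += len(doc)
    return "\n".join(context)

# Hypothetical internal notes standing in for a firm's tacit knowledge.
notes = [
    "Renewal discounts above 15% require regional director approval.",
    "The Q3 offsite is scheduled for Lisbon.",
    "Enterprise renewals are negotiated in the quarter before expiry.",
]
print(build_context("enterprise renewal discount approval", notes, max_chars=100))
```

Real systems replace the keyword overlap with embedding-based retrieval, but the shape is the same: the firm, not the model vendor, decides which proprietary knowledge reaches the model and in what form.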
The Sovereignty Framework: Beyond Geography
Nadella’s reconceptualisation of sovereignty represents a significant departure from how policymakers and corporate leaders have traditionally understood the term. Geopolitical sovereignty concerns have dominated discussions of AI governance: questions about where data is stored, which country’s regulations apply, and whether foreign entities can access sensitive information. These concerns remain legitimate, but Nadella argues they address a secondary question.
True sovereignty in the AI era, by his analysis, means the ability of a firm to encode its competitive knowledge into models it owns and controls. This requires three elements: first, the technical capability to train and fine-tune AI models on proprietary data; second, the organisational infrastructure to continuously update these models as the firm’s knowledge evolves; and third, the strategic discipline to resist the temptation to outsource these capabilities to external vendors.
The stakes of this sovereignty question extend beyond individual firms. Nadella frames it as a matter of enterprise value creation and preservation. When firms leak their tacit knowledge to external model providers, they simultaneously transfer the economic value that knowledge generates. Over time, this creates a structural advantage for the model companies and a corresponding disadvantage for the organisations that depend on them. The firm becomes a consumer of AI capability rather than a creator of competitive advantage through AI.
The Legitimacy Challenge and Social Permission
Nadella’s argument about enterprise sovereignty connects to a broader concern he articulated about AI’s long-term viability. He warned that “if we are not talking about health outcomes, education outcomes, public sector efficiency, private sector competitiveness, we will quickly lose the social permission to use scarce energy to generate tokens.” This framing introduces a crucial constraint: AI’s continued development and deployment depends on demonstrable benefits that extend beyond technology companies and their shareholders.
The question of firm sovereignty becomes relevant to this legitimacy challenge. If AI benefits concentrate among a small number of model providers whilst other organisations become dependent consumers, the technology risks losing public and political support. Conversely, if firms across the economy develop the capability to embed their knowledge into AI systems they control, the benefits of AI diffuse more broadly. This diffusion becomes the mechanism through which AI maintains its social licence to operate.
Nadella identified “skilling” as the limiting factor in this diffusion process. How broadly people across organisations develop capability in AI determines how quickly benefits spread. This connects directly to the sovereignty question: organisations that develop internal capability to control and customise AI systems create more opportunities for their workforce to develop AI skills. Those that outsource AI to external providers create fewer such opportunities.
Leading Theorists and Intellectual Foundations
Nadella’s argument draws on and extends several streams of organisational and economic theory. The concept of tacit knowledge itself originates in the work of Michael Polanyi, the Hungarian-British polymath who argued in his 1966 work The Tacit Dimension that “we know more than we can tell.” Polanyi distinguished between explicit knowledge (information that can be codified and transmitted) and tacit knowledge, which resides in practice, experience, and embodied understanding. This distinction proved foundational for subsequent research on organisational learning and competitive advantage.
Building on Polanyi’s framework, scholars including David Teece and Ikujiro Nonaka developed theories of how organisations create and leverage knowledge. Teece’s concept of “dynamic capabilities” (the ability of firms to integrate, build, and reconfigure internal and external competencies) directly parallels Nadella’s argument about embedding tacit knowledge into AI models. Nonaka’s research on knowledge creation in Japanese firms emphasised the importance of converting tacit knowledge into explicit forms that can be shared and leveraged across organisations. Nadella’s argument suggests that AI models represent a new mechanism for this conversion: translating tacit organisational knowledge into explicit algorithmic form.
The concept of “firm-specific assets” in strategic management theory also underpins Nadella’s reasoning. Scholars including Edith Penrose and later resource-based theorists argued that competitive advantage derives from assets and capabilities that are difficult to imitate and specific to particular organisations. Nadella extends this logic to the AI era: the ability to embed firm-specific knowledge into proprietary AI models becomes itself a firm-specific asset that generates competitive advantage.
More recently, scholars studying digital transformation and platform economics have grappled with questions of control and dependency. Researchers including Shoshana Zuboff have examined how digital platforms concentrate power and value by controlling the infrastructure through which information flows. Nadella’s argument about enterprise sovereignty can be read as a response to these concerns: organisations must develop the capability to control their own AI infrastructure rather than becoming dependent on platform providers.
The concept of “information asymmetry” from economics also illuminates Nadella’s argument. When firms outsource AI to external providers, they create information asymmetries: the model provider possesses detailed knowledge of how the firm’s data and knowledge are being processed, whilst the firm itself may lack transparency into the model’s decision-making processes. This asymmetry creates both security risks and strategic vulnerability.
Practical Implications and Organisational Change
Nadella’s argument carries significant implications for how organisations should approach AI strategy. Rather than viewing AI primarily as a technology to be purchased from external vendors, firms should conceptualise it as a capability to be developed internally. This requires investment in three areas: technical infrastructure for training and deploying models; talent acquisition and development in machine learning and data science; and organisational redesign to align workflows with how AI systems operate.
The last point proves particularly important. Nadella emphasised that “the mindset we as leaders should have is, we need to think about changing the work-the workflow-with the technology.” This represents a significant departure from how many organisations have approached technology adoption. Rather than fitting new technology into existing workflows, organisations must redesign workflows around how AI operates. This includes flattening information hierarchies, enabling distributed decision-making, and creating feedback loops through which AI systems continuously learn from organisational experience.
Nadella also introduced the concept of a “barbell adoption” strategy. Startups, he noted, adapt easily to AI because they lack legacy systems and established workflows. Large enterprises possess valuable assets and accumulated knowledge but face significant change management challenges. The barbell approach suggests that organisations should pursue both paths simultaneously: experimenting with new AI-native processes whilst carefully managing the transition of legacy systems.
The Measurement Challenge: Tokens per Dollar per Watt
Nadella introduced a novel metric for evaluating AI’s economic impact: “tokens per dollar per watt.” It captures the efficiency with which organisations can generate computational reasoning power relative to cost and energy consumption, and it reflects his argument that AI’s economic value depends not on the sophistication of models but on how efficiently organisations can deploy and utilise them.
This metric also connects to the sovereignty question. Organisations that control their own AI infrastructure can optimise this metric for their specific needs. Those dependent on external providers must accept the efficiency parameters those providers establish. Over time, this difference in optimisation capability compounds into significant competitive advantage.
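The source does not define the metric formally. One plausible reading, sketched below with invented illustrative numbers, treats it as token output normalised by both the dollars spent and the watts drawn, so that two deployments can be compared on a single efficiency figure; the function name and both deployment profiles are assumptions for illustration only.

```python
def tokens_per_dollar_per_watt(tokens: float, cost_dollars: float, power_watts: float) -> float:
    """One possible reading of the metric: tokens generated, normalised by
    both the dollars spent and the watts drawn to produce them."""
    return tokens / (cost_dollars * power_watts)

# Hypothetical deployments producing the same token output at different
# cost and power profiles.
in_house = tokens_per_dollar_per_watt(tokens=1_000_000, cost_dollars=40.0, power_watts=500.0)
vendor = tokens_per_dollar_per_watt(tokens=1_000_000, cost_dollars=60.0, power_watts=800.0)
print(in_house, vendor)  # the cheaper, lower-power deployment scores higher
```

Under this reading, an organisation that controls its own stack can raise the metric by tuning either term (cheaper serving or lower power draw) independently, which is exactly the optimisation latitude the surrounding text argues dependent firms forfeit.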
The Broader Economic Transformation
Nadella situated his argument about enterprise sovereignty within a broader analysis of how AI transforms economic structure. He drew parallels to previous technological revolutions, particularly the personal computing era. Steve Jobs famously described the personal computer as a “bicycle for the mind”, a tool that amplified human capability; Bill Gates spoke of “information at your fingertips.” Nadella argues that AI makes these ideas “10x, 100x” more powerful.
However, this amplification of capability only benefits organisations that can control how it operates within their context. When firms outsource AI to external providers, they forfeit the ability to shape how this amplification occurs. They become consumers of capability rather than creators of competitive advantage.
Nadella’s vision of AI diffusion requires what he terms “ubiquitous grids of energy and tokens”: infrastructure that makes AI capability as universally available as electricity. However, this infrastructure alone proves insufficient. Organisations must also develop the internal capability to embed their knowledge into AI systems. Without this capability, even ubiquitous infrastructure benefits only those firms that control the models running on it.
Conclusion: Knowledge as the New Frontier
Nadella’s argument represents a significant reorientation in how organisations should think about AI strategy and competitive advantage. Rather than focusing on data location or infrastructure ownership, firms should prioritise their ability to embed proprietary knowledge into AI models they control. This shift reflects a deeper truth about how AI creates value: not through raw computational power or data volume, but through the ability to translate organisational knowledge into algorithmic form that amplifies human decision-making.
The sovereignty question Nadella articulated-whether firms can embed their tacit knowledge into models they control-will likely prove central to AI strategy for years to come. Organisations that develop this capability will preserve and enhance their competitive advantage. Those that outsource this capability to external providers risk gradually transferring their distinctive knowledge and the value it generates to those providers. In an era when AI increasingly mediates how organisations operate, the ability to control the models that encode organisational knowledge becomes itself a fundamental source of competitive advantage and strategic sovereignty.
References
1. https://www.teamday.ai/ai/satya-nadella-davos-ai-diffusion-larry-fink
2. https://www.youtube.com/watch?v=zyNWbPBkq6E
3. https://www.youtube.com/watch?v=1co3zt3-r7I
4. https://www.theregister.com/2026/01/21/nadella_ai_sovereignty_wef/

