
Due Diligence

Your due diligence is probably wrong

Global Advisors: a consulting leader in defining quantified strategy, decreasing uncertainty, improving decisions, achieving measurable results.

The Global Advisors due diligence practice

Our latest perspective - What's behind under-performing listed companies?

Outperform through the downturn

Experienced hires

We are hiring experienced top-tier strategy consultants

Quantified Strategy

Decreased uncertainty, improved decisions

Global Advisors is a leader in defining quantified strategies, decreasing uncertainty, improving decisions and achieving measurable results.

We specialise in providing highly analytical, data-driven recommendations in the face of significant uncertainty.

We utilise advanced predictive analytics to build robust strategies and enable our clients to make calculated decisions.

We support implementation of adaptive capability and capacity.

Our latest

Thoughts

Global Advisors’ Thoughts: Should you be restructuring (again)?


By Marc Wilson

Photo by John Chew

You don’t take a hospital visit for surgery lightly. In fact, neither do good surgeons. Most recommend conservative treatment first because of the risks and trauma involved in surgical procedures. Restructuring is the orthopaedic surgery of corporate change. Yet it is often the go-to option for leaders seeking to address a problem or spark an improvement.

Restructuring offers quick impact

It is easy to see why restructuring can be so alluring: it promises quick impact, and it will certainly give you that. Yet in most scenarios it should be the last option you take.

Most active people have had a nagging injury at some point. Remember that debilitating foot or knee injury? How each movement brought pain, and how, just when things seemed better, a return to action brought the injury right back to the fore? When you visited your doctor, you were given two options: a programme of physiotherapy over an extended period with a good chance of success, or corrective surgery that may or may not fix the problem more quickly. Which did you choose? If you’re like me, the promise of quick pain followed by a quick solution merited serious consideration. But at the same time, undergoing surgery, with its attendant risks, for potential relief without guarantee was hugely concerning.

No amount of physiotherapy will cure a crookedly-healed bone. A good orthopaedic surgeon might perform a procedure that addresses the issue, even if it is painful and carries long-term recovery consequences.

That’s restructuring. It is the only option for a “crooked bone” equivalent. It may well be the right procedure to address dysfunction, but it has risks. Orthopaedic surgery would not be prescribed to address a muscular dysfunction. Neither should restructuring be executed to deal with a problem person. Surgery would not be undertaken to address a suboptimal athletic action. Neither should restructuring be undertaken to address broken processes. And no amount of surgery will turn an unfit average athlete into a race winner. Neither will restructuring address problems with strategic positioning and corporate fitness. All of that said, a broken structure that results in lack of appropriate focus and political roadblocks can be akin to a compound fracture – no amount of physiotherapy will heal it and poor treatment might well threaten the life of the patient.

What are you dealing with: a poorly performing person, broken processes or a structure that results in poor market focus and impedes optimum function?

Perennial restructuring

Many organisations I have worked with adopt a restructuring exercise every few years. This often coincides with a change in leadership or a poor financial result, and it typically occurs after a consulting intervention. When I consult with leadership teams, my warning is a rule of thumb: any major restructure will take one-and-a-half years to deliver results. This is equivalent to a full remuneration cycle plus some implementation time. The risk of failure is high: the surgery will be painful and the side-effects might be dramatic. Why?

Restructuring involves changes in reporting lines and the relationships between people. This is political change. New ways of working will be tried in an effort to build successful working relationships and please a new boss. Teams will be reformed and require time to form, storm, norm and perform. People will take time to agree, understand and embed their new roles and responsibilities. The effect of incentives will be felt somewhere down the line.

Restructuring is often attempted to avoid the medium-to-long-term delivery of change through process change and mobilisation. As can be seen, this under-appreciates that these and other facets of change are usually required to deliver on the promise of a new structure anyway.

Restructuring creates uncertainty in anticipation

Restructuring also impacts through anticipation. Think of the athlete waiting for surgery. Exercise might stop, mental excuses for current performance might start, dread of the impending pain and recovery might set in. Similarly, personnel waiting for a structural change typically fret over the change in their roles, their reporting relationships and begin to see excuses for poor performance in the status quo. The longer the uncertainty over potential restructuring lasts, the more debilitating the effect.

Leaders feel empowered through restructuring

The role of the leader should also be considered. Leaders often feel powerless, or lack the capacity and time to implement fundamental change in processes and team performance. They can restructure definitively and feel empowered by doing so. This is equivalent to the athlete overruling the doctor’s advice and undergoing surgery, knowing that action is taking place, rather than relying on corrective therapeutic action. Leaders should undertake a great deal of introspection: “Am I calling for a restructure because I can, knowing that change will result?” Such action can be self-satisfying rather than remedial.

Is structure the source of the problem?

Restructuring and surgery are about people. While both may be necessary, the effects can be severe and may not fix the underlying problem. Leaders should consider the true source of underperformance and practise introspection: “Am I seeking the allure of a quick fix for a problem that requires more conservative, longer-term treatment?”



Strategy Tools

Strategy Tools: Profit from the Core


Extensive research conducted by Chris Zook and James Allen has shown that many companies have failed to deliver on their growth strategies because they have strayed too far from their core business. Successful companies operate in areas where they have established the “right to win”. The core business is the set of products, capabilities, customers, channels and geographies that maximises a company’s ability to build that right to win. The pursuit of new and exciting growth often leads companies into products, customers, geographies and channels that are distant from the core. Not only do these non-core areas of the business often suffer in their own right, they also distract management from the core business.

Profit from the Core is a back-to-basics strategy which says that developing a strong, well-defined core is the foundation of sustainable, profitable growth. Any new growth should leverage and strengthen the core.

Management following the core methodology should evaluate and prioritise growth along three cyclical steps:


Focus – reach full potential in the core

  • Define the core boundaries
  • Strengthen core differentiation at the customer
  • Drive for superior cost economics
  • Mine full potential operating profit from the core
  • Discourage competitive investment in the core

For some companies the definition of the core will be obvious, while for others much debate will be required. Executives can ask directive questions to guide the discussion:

  • What are the business’ natural economic boundaries defined by customer needs and basic economics?
  • What products, customers, channels and competitors do these boundaries encompass?
  • What are the core skills and assets needed to compete effectively within that competitive arena?
  • What is the core business as defined by those customers, products, technologies and channels through which the company can earn a return today and compete effectively with current resources?
  • What is the key differentiating factor that makes the company unique to its core customers?
  • What are the adjacent areas around the core?
  • Are the definitions of the business and industry likely to shift resulting in a change of the competitive and customer landscape?

Expand – grow through adjacencies

  • Protect and extend strengths
  • Expand into related adjacencies
  • Push the core boundaries out
  • Pursue a repeatable growth formula

Companies should expand on a measured basis, pursuing growth opportunities in immediate and sensible adjacencies to the core. A useful tool for evaluating opportunities is the adjacency map, which is constructed by identifying the key descriptors of the core and mapping opportunities based on their proximity to the core along each descriptor. An example adjacency map is presented below:

Adjacency Map

Redefine – evaluate if the core definition should be changed

  • Pursue profit pools of the future
  • Redefine around new and robust differentiation
  • Strengthen the operating platform before redefining strategy
  • Fully value the power of leadership economics
  • Invest heavily in new capabilities

Executives should ask guiding questions to determine whether the core definition is still relevant:

  • Is the core business confronted with a radically improved business model for servicing its customers’ needs?
  • Are the original boundaries and structure of the core business changing in complicated ways?
  • Is there significant turbulence in the industry that may result in the current core definition becoming redundant?

These questions can help identify whether the company should redefine its core and, if so, what type of redefinition is required:

Core redefinition

The core methodology should be followed and reviewed on an on-going basis. Management must perform the difficult balancing act of ensuring they are constantly striving to grow and reach full potential within the core, looking for new adjacencies which strengthen and leverage the core and being alert and ready for the possibility of redefining the core.

Sources: 1. Zook, C. (2001). Profit from the Core. Cambridge, MA: Harvard Business School Press.
2. Van den Berg, G. & Pietersma, P. (2014). 25 Need-to-Know Strategy Tools. Harlow: FT Publishing.


Fast Facts

There is a positive relationship between long production run sizes and OEE


  • Evidence suggests that longer run sizes lead to increased overall equipment effectiveness (OEE).
  • OEE is a measure of how effectively manufacturing equipment is utilised and is defined as the product of machine availability, machine performance and product quality.
  • Increasing run sizes improves availability, as a result of less changeover time, and performance, as a result of less operator inefficiency.
  • North American facilities that previously ran at world-class OEE rates have experienced lower OEE rates due to a move towards reduced lot sizes and the shifting of large-volume production overseas1.
    • Shorter run sizes resulted in increased changeover frequency, which led to increased planned downtime and reduced asset utilisation.
    • As a result, OEE rates dropped from 85% to as low as 50%1.
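The arithmetic behind these bullets can be sketched in a few lines of Python. All figures below are illustrative assumptions chosen to show the mechanism, not data from the cited study:

```python
# OEE = availability x performance x quality.
# Illustration of how changeover frequency drags availability, and hence OEE.

def availability(planned_time_h: float, changeover_h_per_run: float,
                 runs: int) -> float:
    """Share of planned production time left after changeover downtime."""
    downtime_h = changeover_h_per_run * runs
    return (planned_time_h - downtime_h) / planned_time_h

def oee(availability_rate: float, performance_rate: float,
        quality_rate: float) -> float:
    """Overall equipment effectiveness as the product of its three factors."""
    return availability_rate * performance_rate * quality_rate

# Assumed scenario: a 160-hour planned period, 2 hours lost per changeover.
long_runs = availability(160, 2, runs=4)    # few long runs  -> 0.95
short_runs = availability(160, 2, runs=24)  # many short runs -> 0.70

print(round(oee(long_runs, 0.95, 0.99), 2))   # ~0.89, world-class territory
print(round(oee(short_runs, 0.95, 0.99), 2))  # ~0.66, OEE collapses
```

With performance and quality held constant, six times as many changeovers alone is enough to move OEE from the world-class band into the mid-60s, which is the shape of the 85%-to-50% decline described above.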

Selected News

Quote: Jonathan Ross – CEO Groq


“The countries that control compute will control AI. You cannot have compute without energy.” – Jonathan Ross – CEO Groq

Jonathan Ross stands at the intersection of geopolitics, energy economics, and technological determinism. As founder and CEO of Groq, the Silicon Valley firm challenging Nvidia’s dominance in AI infrastructure, Ross articulated a proposition of stark clarity during his September 2025 appearance on Harry Stebbings’ 20VC podcast: “The countries that control compute will control AI. You cannot have compute without energy.”

This observation transcends technical architecture. Ross is describing the emergence of a new geopolitical currency—one where computational capacity, rather than traditional measures of industrial might, determines economic sovereignty and strategic advantage in the 21st century. His thesis rests on an uncomfortable reality: artificial intelligence, regardless of algorithmic sophistication or model architecture, cannot function without the physical substrate of compute. And compute, in turn, cannot exist without abundant, reliable energy.

The Architecture of Advantage

Ross’s perspective derives from direct experience building the infrastructure that powers modern AI. At Google, he initiated what became the Tensor Processing Unit (TPU) project—custom silicon that allowed the company to train and deploy machine learning models at scale. This wasn’t academic research; it was the foundation upon which Google’s AI capabilities were built. When Amazon and Microsoft attempted to recruit him in 2016 to develop similar capabilities, Ross recognised a pattern: the concentration of advanced AI compute in too few hands represented a strategic vulnerability.

His response was to establish Groq in 2016, developing Language Processing Units optimised for inference—the phase where trained models actually perform useful work. The company has since raised over $3 billion and achieved a valuation approaching $7 billion, positioning itself as one of Nvidia’s most credible challengers in the AI hardware market. But Ross’s ambitions extend beyond corporate competition. He views Groq’s mission as democratising access to compute—creating abundant supply where artificial scarcity might otherwise concentrate power.

The quote itself emerged during a discussion about global AI competitiveness. Ross had been explaining why European nations, despite possessing strong research talent and model development capabilities (Mistral being a prominent example), risk strategic irrelevance without corresponding investment in computational infrastructure and energy capacity. A brilliant model without compute to run it, he argued, will lose to a mediocre model backed by ten times the computational resources. This isn’t theoretical—it’s the lived reality of the current AI landscape, where rate limits and inference capacity constraints determine what services can scale and which markets can be served.

The Energy Calculus

The energy dimension of Ross’s statement carries particular weight. Modern AI training and inference require extraordinary amounts of electrical power. The hyperscalers—Google, Microsoft, Amazon, Meta—are each committing tens of billions of dollars annually to AI infrastructure, with significant portions dedicated to data centre construction and energy provision. Microsoft recently announced it wouldn’t make certain GPU clusters available through Azure because the company generated higher returns using that compute internally rather than renting it to customers. This decision, more than any strategic presentation, reveals the economic value density of AI compute.

Ross draws explicit parallels to the early petroleum industry: a period of chaotic exploration where a few “gushers” delivered extraordinary returns whilst most ventures yielded nothing. In this analogy, compute is the new oil—a fundamental input that determines economic output and strategic positioning. But unlike oil, compute demand doesn’t saturate. Ross describes AI demand as “insatiable”: if OpenAI or Anthropic received twice their current inference capacity, their revenue would nearly double within a month. The bottleneck isn’t customer appetite; it’s supply.

This creates a concerning dynamic for nations without indigenous energy abundance or the political will to develop it. Ross specifically highlighted Europe’s predicament: impressive AI research capabilities undermined by insufficient energy infrastructure and regulatory hesitance around nuclear power. He contrasted this with Norway’s renewable capacity (80% wind utilisation) or Japan’s pragmatic reactivation of nuclear facilities—examples of countries aligning energy policy with computational ambition. The message is uncomfortable but clear: technical sophistication in model development cannot compensate for material disadvantage in energy and compute capacity.

Strategic Implications

The geopolitical dimension becomes more acute when considering China’s position. Ross noted that whilst Chinese models like DeepSeek may be cheaper to train (through various optimisations and potential subsidies), they remain more expensive to run at inference—approximately ten times more costly per token generated. This matters because inference, not training, determines scalability and market viability. China can subsidise AI deployment domestically, but globally—what Ross terms the “away game”—cost structure determines competitiveness. Countries cannot simply construct nuclear plants at will; energy infrastructure takes decades to build.

This asymmetry creates opportunity for nations with existing energy advantages. The United States, despite higher nominal costs, benefits from established infrastructure and diverse energy sources. However, Ross’s framework suggests this advantage is neither permanent nor guaranteed. Control over compute requires continuous investment in both silicon capability and energy generation. Nations that fail to maintain pace risk dependency—importing not just technology, but the capacity for economic and strategic autonomy.

The corporate analogy proves instructive. Ross predicts that every major AI company—OpenAI, Anthropic, Google, and others—will eventually develop proprietary chips, not necessarily to outperform Nvidia technically, but to ensure supply security and strategic control. Nvidia currently dominates not purely through superior GPU architecture, but through control of high-bandwidth memory (HBM) supply chains. Building custom silicon allows organisations to diversify supply and avoid allocation constraints that might limit their operational capacity. What applies to corporations applies equally to nations: vertical integration in compute infrastructure is increasingly a prerequisite for strategic autonomy.

The Theorists and Precedents

Ross’s thesis echoes several established frameworks in economic and technological thought, though he synthesises them into a distinctly contemporary proposition.

Harold Innis, the Canadian economic historian, developed the concept of “staples theory” in the 1930s and 1940s—the idea that economies organised around the extraction and export of key commodities (fur, fish, timber, oil) develop institutional structures, trade relationships, and power dynamics shaped by those materials. Innis later extended this thinking to communication technologies in works like Empire and Communications (1950) and The Bias of Communication (1951), arguing that the dominant medium of a society shapes its political and social organisation. Ross’s formulation applies Innisian logic to computational infrastructure: the nations that control the “staples” of the AI economy—energy and compute—will shape the institutional and economic order that emerges.

Carlota Perez, the Venezuelan-British economist, provided a framework for understanding technological revolutions in Technological Revolutions and Financial Capital (2002). Perez identified how major technological shifts (steam power, railways, electricity, mass production, information technology) follow predictable patterns: installation phases characterised by financial speculation and infrastructure building, followed by deployment phases where the technology becomes economically productive. Ross’s observation about current AI investment—massive capital expenditure by hyperscalers, uncertain returns, experimental deployment—maps cleanly onto Perez’s installation phase. The question, implicit in his quote, is which nations will control the infrastructure when the deployment phase arrives and returns become tangible.

W. Brian Arthur, economist and complexity theorist, articulated the concept of “increasing returns” in technology markets through works like Increasing Returns and Path Dependence in the Economy (1994). Arthur demonstrated how early advantages in technology sectors compound through network effects, learning curves, and complementary ecosystems—creating winner-take-most dynamics rather than the diminishing returns assumed in classical economics. Ross’s emphasis on compute abundance follows this logic: early investment in computational infrastructure creates compounding advantages in AI capability, which drives economic returns, which fund further compute investment. Nations entering this cycle late face escalating barriers to entry.

Joseph Schumpeter, the Austrian-American economist, introduced the concept of “creative destruction” in Capitalism, Socialism and Democracy (1942)—the idea that economic development proceeds through radical innovation that renders existing capital obsolete. Ross explicitly invokes Schumpeterian dynamics when discussing the risk that next-generation AI chips might render current hardware unprofitable before it amortises. This uncertainty amplifies the strategic calculus: nations must invest in compute infrastructure knowing that technological obsolescence might arrive before economic returns materialise. Yet failing to invest guarantees strategic irrelevance.

William Stanley Jevons, the 19th-century English economist, observed what became known as Jevons Paradox in The Coal Question (1865): as technology makes resource use more efficient, total consumption typically increases rather than decreases because efficiency makes the resource more economically viable for new applications. Ross applies this directly to AI compute, noting that as inference becomes cheaper (through better chips or more efficient models), demand expands faster than costs decline. This means the total addressable market for compute grows continuously—making control over production capacity increasingly valuable.

Nicholas Georgescu-Roegen, the Romanian-American economist, pioneered bioeconomics and introduced entropy concepts to economic analysis in The Entropy Law and the Economic Process (1971). Georgescu-Roegen argued that economic activity is fundamentally constrained by thermodynamic laws—specifically, that all economic processes dissipate energy and cannot be sustained without continuous energy inputs. Ross’s insistence that “you cannot have compute without energy” is pure Georgescu-Roegen: AI systems, regardless of algorithmic elegance, are bound by physical laws. Compute is thermodynamically expensive—training large models requires megawatts, inference at scale requires sustained power generation. Nations without access to abundant energy cannot sustain AI economies, regardless of their talent or capital.

Mancur Olson, the American economist and political scientist, explored collective action problems and the relationship between institutional quality and economic outcomes in works like The Rise and Decline of Nations (1982). Olson demonstrated how established interest groups can create institutional sclerosis that prevents necessary adaptation. Ross’s observations about European regulatory hesitance and infrastructure underinvestment reflect Olsonian dynamics: incumbent energy interests, environmental lobbies, and risk-averse political structures prevent the aggressive nuclear or renewable expansion required for AI competitiveness. Meanwhile, nations with different institutional arrangements (or greater perceived strategic urgency) act more decisively.

Paul Romer, the American economist and Nobel laureate, developed endogenous growth theory, arguing in works like “Endogenous Technological Change” (1990) that economic growth derives from deliberate investment in knowledge and technology rather than external factors. Romer’s framework emphasises the non-rivalry of ideas (knowledge can be used by multiple actors simultaneously) but the rivalry of physical inputs required to implement them. Ross’s thesis fits perfectly: AI algorithms can be copied and disseminated, but the computational infrastructure to deploy them at scale cannot. This creates a fundamental asymmetry that determines economic power.

The Historical Pattern

History provides sobering precedents for resource-driven geopolitical competition. Britain’s dominance in the 19th century rested substantially on coal abundance that powered industrial machinery and naval supremacy. The United States’ 20th-century ascendance correlated with petroleum access and the industrial capacity to refine and deploy it. Oil-dependent economies in the Middle East gained geopolitical leverage disproportionate to their population or industrial capacity purely through energy reserves.

Ross suggests we are witnessing the emergence of a similar dynamic, but with a critical difference: AI compute is both resource-intensive (requiring enormous energy) and productivity-amplifying (making other economic activity more efficient). This creates a multiplicative effect where compute advantages compound through both direct application (better AI services) and indirect effects (more efficient production of goods and services across the economy). A nation with abundant compute doesn’t just have better chatbots—it has more efficient logistics, agricultural systems, manufacturing processes, and financial services.

The “away game” concept Ross introduced during the podcast discussion adds a critical dimension. China, despite substantial domestic AI investment and capabilities, faces structural disadvantages in global competition because international customers cannot simply replicate China’s energy subsidies or infrastructure. This creates opportunities for nations with more favourable cost structures or energy profiles, but only if they invest in both compute capacity and energy generation.

The Future Ross Envisions

Throughout the podcast, Ross painted a vision of AI-driven abundance that challenges conventional fears of technological unemployment. He predicts labour shortages, not mass unemployment, driven by three mechanisms: deflationary pressure (AI makes goods and services cheaper), workforce opt-out (people work less as living costs decline), and new industry creation (entirely new job categories emerge, like “vibe coding”—programming through natural language rather than formal syntax).

This optimistic scenario depends entirely on computational abundance. If compute remains scarce and concentrated, AI benefits accrue primarily to those controlling the infrastructure. Ross’s mission with Groq—creating faster deployment cycles (six months versus two years for GPUs), operating globally distributed data centres, optimising for cost efficiency rather than margin maximisation—aims to prevent that concentration. But the same logic applies at the national level. Countries without indigenous compute capacity will import AI services, capturing some productivity benefits but remaining dependent on external providers for the infrastructure that increasingly underpins economic activity.

The comparison Ross offers—LLMs as “telescopes of the mind”—is deliberately chosen. Galileo’s telescope revolutionised human understanding but required specific material capabilities to construct and use. Nations without optical manufacturing capacity could not participate in astronomical discovery. Similarly, nations without computational and energy infrastructure cannot participate fully in the AI economy, regardless of their algorithmic sophistication or research talent.

Conclusion

Ross’s statement—“The countries that control compute will control AI. You cannot have compute without energy”—distils a complex geopolitical and economic reality into stark clarity. It combines Innisian materialism (infrastructure determines power), Schumpeterian dynamism (innovation renders existing capital obsolete), Jevonsian counterintuition (efficiency increases total consumption), and Georgescu-Roegen’s thermodynamic constraints (economic activity requires energy dissipation).

The implications are uncomfortable for nations unprepared to make the necessary investments. Technical prowess in model development provides no strategic moat if the computational infrastructure to deploy those models remains controlled elsewhere. Energy abundance, or the political will to develop it, becomes a prerequisite for AI sovereignty. And AI sovereignty increasingly determines economic competitiveness across sectors.

Ross occupies a unique vantage point—neither pure academic nor disinterested observer, but an operator building the infrastructure that will determine whether his prediction proves correct. Groq’s valuation and customer demand suggest the market validates his thesis. Whether nations respond with corresponding urgency remains an open question. But the framework Ross articulates will likely define strategic competition for the remainder of the decade: compute as currency, energy as prerequisite, and algorithmic sophistication as necessary but insufficient for competitive advantage.



Services

Global Advisors is different

We help clients to measurably improve strategic decision-making and the results they achieve through defining clearly prioritised choices, reducing uncertainty, winning hearts and minds and partnering to deliver.

Our difference is embodied in our team. Our values define us.

Corporate portfolio strategy

Define optimal business portfolios aligned with investor expectations

Business unit strategy

Define how to win against competitors

Reach full potential

Understand your business’ core, reach full potential and grow into optimal adjacencies

Deal advisory

M&A, due diligence, deal structuring, balance sheet optimisation

Global Advisors Digital Data Analytics

14 years of quantitative and data science experience

An enabler to delivering quantified strategy and accelerated implementation

Digital enablement, acceleration and data science

Leading-edge data science and digital skills

Experts in large data processing, analytics and data visualisation

Developers of digital proof-of-concepts

An accelerator for Global Advisors and our clients

Join Global Advisors

We hire and grow amazing people

Consultants join our firm based on a fit with our values, culture and vision. They believe in and are excited by our differentiated approach. They realise that working on our clients’ most important projects is a privilege. While the problems we solve are strategic to clients, consultants recognise that solutions primarily require hard work – rigorous and thorough analysis, partnering with client team members to overcome political and emotional obstacles, and a large investment in knowledge development and self-growth.

Get In Touch

16th Floor, The Forum, 2 Maude Street, Sandton, Johannesburg, South Africa
+27 11 461 6371

Global Advisors | Quantified Strategy Consulting