ARTIFICIAL INTELLIGENCE
An AI-native strategy firm. Global Advisors: a consulting leader in defining quantified strategy, decreasing uncertainty, improving decisions and achieving measurable results.
A Different Kind of Partner in an AI World
AI-native strategy consulting
Experienced hires
We are hiring experienced top-tier strategy consultants
Quantified Strategy
Decreased uncertainty, improved decisions
Global Advisors is a leader in defining quantified strategies, decreasing uncertainty, improving decisions and achieving measurable results.
We specialise in providing highly analytical, data-driven recommendations in the face of significant uncertainty.
We utilise advanced predictive analytics to build robust strategies and enable our clients to make calculated decisions.
We support implementation of adaptive capability and capacity.
Our latest
Thoughts
Global Advisors’ Thoughts: Should you be restructuring (again)?
By Marc Wilson
You don’t take a hospital visit for surgery lightly. In fact, neither do good surgeons. Most recommend conservative treatment first because of the risks and trauma involved in surgical procedures. Restructuring is the orthopaedic surgery of corporate change. Yet it is often the go-to option for leaders as they seek to address a problem or spark an improvement.
Restructuring offers quick impact
It is easy to see why restructuring can be so alluring. It holds the promise of a quick impact. It will certainly give you that. Yet it should be the last option you take in most scenarios.
Most active people have had a nagging injury at some point. Remember that debilitating foot or knee injury? How each movement brought pain, and how, just when things seemed better, a return to action brought the injury right back to the fore? When you visited your doctor, you were given two options: a programme of physiotherapy over an extended period with a good chance of success, or corrective surgery that may or may not fix the problem more quickly. Which did you choose? If you’re like me, the promise of quick pain with a quick solution merited serious consideration. But at the same time, undergoing surgery, with its attendant risks, for potential relief without guarantee was hugely concerning.
No amount of physiotherapy will cure a crookedly healed bone. A good orthopaedic surgeon might perform a procedure that addresses the issue, even though it is painful and carries long-term recovery consequences.
That’s restructuring. It is the only option for a “crooked bone” equivalent. It may well be the right procedure to address dysfunction, but it has risks. Orthopaedic surgery would not be prescribed to address a muscular dysfunction. Neither should restructuring be executed to deal with a problem person. Surgery would not be undertaken to address a suboptimal athletic action. Neither should restructuring be undertaken to address broken processes. And no amount of surgery will turn an unfit average athlete into a race winner. Neither will restructuring address problems with strategic positioning and corporate fitness. All of that said, a broken structure that results in lack of appropriate focus and political roadblocks can be akin to a compound fracture – no amount of physiotherapy will heal it and poor treatment might well threaten the life of the patient.
What are you dealing with: a poorly performing person, broken processes or a structure that results in poor market focus and impedes optimum function?
Perennial restructuring
Many organisations I have worked with adopt a restructuring exercise every few years. This often coincides with a change in leadership or a poor financial result. It typically occurs after a consulting intervention. When I consult with leadership teams, my warning is a rule of thumb – any major restructure will take one-and-a-half years to deliver results. This is equivalent to a full remuneration cycle plus some implementation time. The risk of failure is high: the surgery will be painful and the side-effects might be dramatic. Why?
Restructuring involves changes in reporting lines and the relationships between people. This is political change. New ways of working will be tried in an effort to build successful working relationships and please a new boss. Teams will be reformed and require time to form, storm, norm and perform. People will take time to agree, understand and embed their new roles and responsibilities. The effect of incentives will be felt somewhere down the line.
Restructuring is often attempted to avoid the medium-to-long-term delivery of change through process change and mobilisation. As can be seen, this under-appreciates the fact that these and other facets of change are usually required to deliver on the promise of a new structure anyway.
Restructuring creates uncertainty in anticipation
Restructuring also impacts through anticipation. Think of the athlete waiting for surgery. Exercise might stop, mental excuses for current performance might start, and dread of the impending pain and recovery might set in. Similarly, personnel waiting for a structural change typically fret over changes to their roles and reporting relationships, and begin to see excuses for poor performance in the status quo. The longer the uncertainty over a potential restructuring lasts, the more debilitating the effect.
Leaders feel empowered through restructuring
The role of the leader should also be considered. Leaders often feel powerless or lack the capacity and time to implement fundamental change in processes and team performance. They can restructure definitively and feel empowered by doing so. This is equivalent to the athlete overruling the doctor’s advice and undergoing surgery, knowing that action is taking place – rather than relying on corrective therapeutic action. A great deal of introspection should be undertaken by the leader: “Am I calling for a restructure because I can, knowing that change will result?” Such action can be self-satisfying rather than remedial.
Is structure the source of the problem?
Restructuring and surgery are about people. While both may be necessary, the effects can be severe and may not fix the underlying problem. Leaders should consider the true source of underperformance and practise introspection – “Am I seeking the allure of a quick fix for a problem that requires more conservative, longer-term treatment?”
Photo by John Chew
Strategy Tools
Strategy Tools: Profit from the Core
Extensive research conducted by Chris Zook and James Allen has shown that many companies have failed to deliver on their growth strategies because they have strayed too far from their core business. Successful companies operate in areas where they have established the “right to win”. The core business is that set of products, capabilities, customers, channels and geographies that maximises their ability to build a right to win. The pursuit of growth in new and exciting areas often leads companies into products, customers, geographies and channels that are distant from the core. Not only do the non-core areas of the business often suffer in their own right, but they also distract management from the core business.
Profit from the Core is a back-to-basics strategy which says that developing a strong, well-defined core is the foundation of sustainable, profitable growth. Any new growth should leverage and strengthen the core.
Management following the core methodology should evaluate and prioritise growth along three cyclical steps:
Focus – reach full potential in the core
- Define the core boundaries
- Strengthen core differentiation at the customer
- Drive for superior cost economics
- Mine full potential operating profit from the core
- Discourage competitive investment in the core
For some companies the definition of the core will be obvious, while for others much debate will be required. Executives can ask directive questions to guide the discussion:
- What are the business’ natural economic boundaries defined by customer needs and basic economics?
- What products, customers, channels and competitors do these boundaries encompass?
- What are the core skills and assets needed to compete effectively within that competitive arena?
- What is the core business as defined by those customers, products, technologies and channels through which the company can earn a return today and compete effectively with current resources?
- What is the key differentiating factor that makes the company unique to its core customers?
- What are the adjacent areas around the core?
- Are the definitions of the business and industry likely to shift, resulting in a change in the competitive and customer landscape?
Expand – grow through adjacencies
- Protect and extend strengths
- Expand into related adjacencies
- Push the core boundaries out
- Pursue a repeatable growth formula
Companies should expand on a measured basis, pursuing growth opportunities in immediate and sensible adjacencies to the core. A useful tool for evaluating opportunities is the adjacency map, which is constructed by identifying the key core descriptors and mapping opportunities based on their proximity to the core along each descriptor, as the illustrative sketch below suggests.
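For illustration, the toy sketch below treats an adjacency map as a simple scoring exercise: each opportunity is placed a number of steps away from the core along each descriptor, and opportunities with a lower total distance are closer to the core. The descriptors, opportunities and distances are hypothetical and purely illustrative; they are not drawn from Zook and Allen’s work.

```python
# Hypothetical, illustrative adjacency scoring (not Zook and Allen's data).
# Distance 0 = within the core; 1 = one step out; 2 = two steps out.

DESCRIPTORS = ["customers", "products", "channels", "geographies"]

opportunities = {
    "Premium tier for existing customers":  {"customers": 0, "products": 1, "channels": 0, "geographies": 0},
    "Existing products in a new region":    {"customers": 1, "products": 0, "channels": 1, "geographies": 2},
    "New product, new channel, new market": {"customers": 2, "products": 2, "channels": 2, "geographies": 2},
}

def adjacency_score(distances):
    # Lower total distance across all descriptors = closer to the core.
    return sum(distances[d] for d in DESCRIPTORS)

for name, distances in sorted(opportunities.items(), key=lambda kv: adjacency_score(kv[1])):
    print(f"{name}: total distance from core = {adjacency_score(distances)}")
```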
Redefine – evaluate whether the core definition should be changed
- Pursue profit pools of the future
- Redefine around new and robust differentiation
- Strengthen the operating platform before redefining strategy
- Fully value the power of leadership economics
- Invest heavily in new capabilities
Executives should ask guiding questions to determine whether the core definition is still relevant.
- Is the core business confronted with a radically improved business model for servicing its customers’ needs?
- Are the original boundaries and structure of the core business changing in complicated ways?
- Is there significant turbulence in the industry that may result in the current core definition becoming redundant?
These questions can help identify whether the company should redefine its core and, if so, what type of redefinition is required.
The core methodology should be followed and reviewed on an ongoing basis. Management must perform the difficult balancing act of constantly striving to grow and reach full potential within the core, looking for new adjacencies that strengthen and leverage the core, and remaining alert and ready for the possibility of redefining the core.
Sources:
1. Zook, C. (2001). Profit from the Core. Cambridge, MA: Harvard Business School Press.
2. Van den Berg, G. & Pietersma, P. (2014). 25 Need-to-Know Strategy Tools. Harlow: FT Publishing.
Fast Facts
There is a positive relationship between long production run sizes and OEE
- Evidence suggests that longer run sizes lead to increased overall equipment effectiveness (OEE).
- OEE is a measure of how effectively manufacturing equipment is utilised and is defined as the product of machine availability, machine performance and product quality (see the worked sketch after this list).
- Increasing run sizes improves availability as a result of less changeover time, and performance as a result of less operator inefficiency.
- North American facilities that previously ran at world-class OEE rates have experienced lower OEE rates due to a move towards reduced lot sizes and the shifting of large-volume production overseas1.
- Shorter run sizes resulted in increased changeover frequency, which led to increased planned downtime and reduced asset utilisation.
- As a result, OEE rates dropped from 85% to as low as 50%1.
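As a rough illustration of the arithmetic, the sketch below multiplies out availability, performance and quality rates. The input percentages are assumptions chosen to reproduce the 85% and 50% orders of magnitude quoted above; they are not data from the cited facilities.

```python
# Illustrative OEE arithmetic with hypothetical inputs (not measured data).
# OEE = availability x performance x quality.

def oee(availability, performance, quality):
    return availability * performance * quality

# Long runs: few changeovers, so little planned downtime and steady operation.
long_runs = oee(availability=0.92, performance=0.95, quality=0.97)   # ~0.85

# Short runs: frequent changeovers cut availability and disrupt performance.
short_runs = oee(availability=0.62, performance=0.85, quality=0.95)  # ~0.50

print(f"Long-run OEE:  {long_runs:.0%}")
print(f"Short-run OEE: {short_runs:.0%}")
```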
Selected News
Quote: Trevor McCourt – Extropic CTO
“We need something like 10 terawatts in the next 20 years to make LLM systems truly useful to everyone… Nvidia would need to 100× output… You basically need to fill Nevada with solar panels to provide 10 terawatts of power, at a cost around the world’s GDP. Totally crazy.” – Trevor McCourt – Extropic CTO
Trevor McCourt, Chief Technology Officer and co-founder of Extropic, has emerged as a leading voice articulating a paradox at the heart of artificial intelligence advancement: the technology that promises to democratise intelligence across the planet may, in fact, be fundamentally unscalable using conventional infrastructure. His observation about the terawatt imperative captures this tension with stark clarity—a reality increasingly difficult to dismiss as speculative.
Who Trevor McCourt Is
McCourt brings a rare convergence of disciplinary expertise to his role. Trained in mechanical engineering at the University of Waterloo (graduating 2015) and holding advanced credentials from the Massachusetts Institute of Technology (2020), he combines rigorous physical intuition with deep software systems architecture. Prior to co-founding Extropic, McCourt worked as a Principal Software Engineer, establishing a track record of delivering infrastructure at scale: he designed microservices-based cloud platforms that improved deployment speed by 40% whilst reducing operational costs by 30%, co-invented a patented dynamic caching algorithm for distributed systems, and led open-source initiatives that garnered over 500 GitHub contributors.
This background—spanning mechanical systems, quantum computation, backend infrastructure, and data engineering—positions McCourt uniquely to diagnose what others in the AI space have overlooked: that energy is not merely a cost line item but a binding physical constraint on AI’s future deployment model.
Extropic, which McCourt co-founded alongside Guillaume Verdon (formerly a quantum technology lead at Alphabet’s X division), closed a $14.1 million Series Seed funding round in 2023, led by Kindred Ventures and backed by institutional investors including Buckley Ventures, HOF Capital, and OSS Capital. The company now stands at approximately 15 people distributed across integrated circuit design, statistical physics research, and machine learning—a lean team assembled to pursue what McCourt characterises as a paradigm shift in compute architecture.
The Quote in Strategic Context
McCourt’s assertion that “10 terawatts in the next 20 years” is required for universal LLM deployment, coupled with his observation that this would demand filling Nevada with solar panels at a cost approaching global GDP, represents far more than rhetorical flourish. It is the product of methodical back-of-the-envelope engineering calculation.
His reasoning unfolds as follows:
From Today’s Baseline to Mass Deployment:
A text-based assistant operating at today’s reasoning capability (approximating GPT-5-Pro performance) deployed to every person globally would consume roughly 20% of the current US electrical grid—approximately 100 gigawatts. This is not theoretical; McCourt derives this from first principles: transformer models consume roughly 2 × (parameters × tokens) floating-point operations; modern accelerators like Nvidia’s H100 operate at approximately 0.7 picojoules per FLOP; population-scale deployment implies continuous, always-on inference at scale.
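A rough reconstruction of that estimate is sketched below. The parameter count, tokens per person and efficiency overhead are our own assumptions, chosen only to show that plausible inputs land in the roughly 100-gigawatt range described; they are not McCourt’s exact figures.

```python
# Back-of-the-envelope sketch of the ~100 GW baseline (assumed inputs, not
# McCourt's exact numbers): a ~2-trillion-parameter text assistant serving
# every person on Earth, always on.

PARAMS          = 2e12       # model parameters (assumed)
TOKENS_PER_DAY  = 100_000    # tokens generated per person per day (assumed)
POPULATION      = 8e9        # people served
JOULES_PER_FLOP = 0.7e-12    # ~0.7 pJ/FLOP on an H100-class accelerator
OVERHEAD        = 3          # assumed gap between peak and delivered efficiency
SECONDS_PER_DAY = 86_400

flops_per_token  = 2 * PARAMS                      # ~2 x parameters per token
daily_flops      = flops_per_token * TOKENS_PER_DAY * POPULATION
daily_joules     = daily_flops * JOULES_PER_FLOP * OVERHEAD
average_power_gw = daily_joules / SECONDS_PER_DAY / 1e9

print(f"Average power: ~{average_power_gw:.0f} GW")  # ~78 GW with these inputs
```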
Adding Modalities and Reasoning:
Upgrade that assistant to include video capability at just 1 frame per second (envisioning Meta-style augmented-reality glasses worn by billions), and the grid requirement multiplies by approximately 10×. Enhance the reasoning capability to match models working on the ARC AGI benchmark—problems of human-level reasoning difficulty—and the text assistant alone requires a 10× expansion: 5 terawatts. Push further to expert-level systems capable of solving International Mathematical Olympiad problems, and the requirement reaches 100× the current grid.
Economic Impossibility:
A single gigawatt data centre costs approximately $10 billion to construct. The infrastructure required for mass-market AI deployment rapidly enters the hundreds of trillions of dollars—approaching or exceeding global GDP. Nvidia’s current manufacturing capacity would itself require a 100-fold increase to support even McCourt’s more modest scenarios.
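The cost arithmetic follows directly, as the sketch below shows. The $10 billion-per-gigawatt figure comes from the passage above; the roughly $100 trillion world-GDP figure is an order-of-magnitude assumption on our part.

```python
# Illustrative cost scaling: ~$10bn per gigawatt of data-centre capacity
# (as cited above), compared with world GDP of roughly $100 trillion (assumed).

COST_PER_GW_USD = 10e9
WORLD_GDP_USD   = 100e12

for label, gigawatts in [("Baseline assistant (~100 GW)",       100),
                         ("Reasoning-capable assistant (5 TW)", 5_000),
                         ("10 TW build-out",                    10_000)]:
    cost = gigawatts * COST_PER_GW_USD
    print(f"{label}: ~${cost / 1e12:.0f} trillion "
          f"({cost / WORLD_GDP_USD:.0%} of world GDP)")
```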
Physical Reality Check:
Over the past 75 years, US grid capacity has grown remarkably consistently—a nearly linear expansion. Sam Altman’s public commitment to building one gigawatt of data centre capacity per week alone would require 3–5× the historical rate of grid growth. Credible plans for mass-market AI acceleration push this requirement into the terawatt range over two decades—a rate of infrastructure expansion that is not merely economically daunting but potentially physically impossible given resource constraints, construction timelines, and raw materials availability.
McCourt’s conclusion: the energy path is not simply expensive; it is economically and physically untenable. The paradigm must change.
Intellectual Foundations: Leading Theorists in Energy-Efficient Computing and Probabilistic AI
Understanding McCourt’s position requires engagement with the broader intellectual landscape that has shaped thinking about computing’s physical limits and probabilistic approaches to machine learning.
Geoffrey Hinton—Pioneering Energy-Based Models and Probabilistic Foundations:
Few figures loom larger in the theoretical background to Extropic’s work than Geoffrey Hinton. Decades before the deep learning boom, Hinton developed foundational theory around Boltzmann machines and energy-based models (EBMs)—the conceptual framework that treats learning as the discovery and inference of complex probability distributions. His work posits that machine learning, at its essence, is about fitting a probability distribution to observed data and then sampling from it to generate new instances consistent with that distribution. Hinton’s recognition with the 2024 Nobel Prize in Physics for “foundational discoveries and inventions that enable machine learning with artificial neural networks” reflects the deep prescience of this probabilistic worldview. More than theoretical elegance, this framework points toward an alternative computational paradigm: rather than spending vast resources on deterministic matrix operations (the GPU model), a system optimised for efficient sampling from complex distributions would align computation with the statistical nature of intelligence itself.
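Hinton’s framing can be made concrete with a toy example. The sketch below (purely illustrative; the couplings are arbitrary and the code has nothing to do with Extropic’s hardware) Gibbs-samples from a three-unit Boltzmann machine. The computation consists almost entirely of drawing random samples conditioned on neighbouring units, rather than of large deterministic matrix multiplications.

```python
import math, random

# Toy Boltzmann machine: three binary units with arbitrary, illustrative
# couplings W and biases b. Gibbs sampling visits joint states with
# frequency proportional to exp(-energy).
W = [[0.0, 1.2, -0.6],
     [1.2, 0.0,  0.8],
     [-0.6, 0.8, 0.0]]
b = [0.1, -0.2, 0.3]

def gibbs_sample(steps=10_000, seed=0):
    rng = random.Random(seed)
    s = [rng.choice([0, 1]) for _ in range(3)]
    counts = {}
    for _ in range(steps):
        for i in range(3):
            # Probability that unit i switches on, given the other units.
            activation = b[i] + sum(W[i][j] * s[j] for j in range(3) if j != i)
            p_on = 1.0 / (1.0 + math.exp(-activation))
            s[i] = 1 if rng.random() < p_on else 0
        counts[tuple(s)] = counts.get(tuple(s), 0) + 1
    return counts  # empirical distribution over the eight joint states

if __name__ == "__main__":
    for state, n in sorted(gibbs_sample().items(), key=lambda kv: -kv[1]):
        print(state, n)
```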
Michael Frank—Physics of Reversible and Adiabatic Computing:
Michael Frank, a senior scientist now at Vaire (a near-zero-energy chip company), has spent decades at the intersection of physics and computing. His research programme, initiated at MIT in the 1990s and continued at the University of Florida, Florida State, and Sandia National Laboratories, focuses on reversible computing and adiabatic CMOS—techniques aimed at reducing the fundamental energy cost of information processing. Frank’s work addresses a deep truth: in conventional digital logic, information erasure is thermodynamically irreversible and expensive, dissipating energy as heat. By contrast, reversible computing minimises such erasure, thereby approaching theoretical energy limits set by physics rather than by engineering convention. Whilst Frank’s trajectory and Extropic’s diverge in architectural detail, both share the conviction that energy efficiency must be rooted in physical first principles, not merely in engineering optimisation of existing paradigms.
Yoshua Bengio and Chris Bishop—Probabilistic Learning Theory:
Leading researchers in deep generative modelling—including Bengio, Bishop, and others—have consistently advocated for probabilistic frameworks as foundational to machine learning. Their work on diffusion models, variational inference, and sampling-based approaches has legitimised the view that efficient inference is not about raw compute speed but about statistical appropriateness. This theoretical lineage underpins the algorithmic choices at Extropic: energy-based models and denoising thermodynamic models are not novel inventions but rather a return to first principles, informed by decades of probabilistic ML research.
Richard Feynman—Foundational Physics of Computing:
Though less directly cited in contemporary AI discourse, Feynman’s 1982 lectures on the physics of computation remain conceptually foundational. Feynman observed that computation’s energy cost is ultimately governed by physical law, not engineering ingenuity alone. His observations on reversibility and the thermodynamic cost of irreversible operations informed the entire reversible-computing movement and, by extension, contemporary efforts to align computation with physics rather than against it.
Contemporary Systems Thinkers (Sam Altman, Jensen Huang):
Counterintuitively, McCourt’s critique is sharpened by engagement with the visionary statements of industry leaders who have perhaps underestimated energy constraints. Altman’s commitment to building one gigawatt of data centre capacity per week, and Huang’s roadmaps for continued GPU scaling, have inadvertently validated McCourt’s concern: even the most optimistic industrial plans require infrastructure expansion at rates that collide with physical reality. McCourt uses their own projections as evidence for the necessity of paradigm change.
The Broader Strategic Narrative
McCourt’s remarks must be understood within a convergence of intellectual and practical pressures:
The Efficiency Plateau:
Digital logic efficiency, measured as energy per operation, has stalled. Transistor capacitance plateaued around the 10-nanometre node; operating voltage is thermodynamically bounded near 300 millivolts. Architectural optimisations (quantisation, sparsity, tensor cores) improve throughput but do not overcome these physical barriers. The era of “free lunch” efficiency gains from Moore’s Law miniaturisation has ended.
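The claim rests on the standard expression for dynamic switching energy in CMOS logic: with the switched capacitance C and the supply voltage V_dd both effectively floored, the energy per operation stops falling regardless of architectural cleverness.

E_{\text{switch}} \approx \tfrac{1}{2}\, C\, V_{dd}^{2}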
Model Complexity Trajectory:
Whilst small models have improved at fixed benchmarks, frontier AI systems—those solving novel, difficult problems—continue to demand exponentially more compute. AlphaGo required ~1 exaFLOP per game; AlphaCode required ~100 exaFLOPs per coding problem; the system solving International Mathematical Olympiad problems required ~100,000 exaFLOPs. Model miniaturisation is not offsetting capability ambitions.
Market Economics:
The AI market has attracted trillions in capital precisely because the economic potential is genuine and vast. Yet this same vastness creates the energy paradox: truly universal AI deployment would consume resources incompatible with global infrastructure and economics. The contradiction is not marginal; it is structural.
Extropic’s Alternative:
Extropic proposes to escape this local minimum through radical architectural redesign. Thermodynamic Sampling Units (TSUs)—circuits architected as arrays of probabilistic sampling cells rather than multiply-accumulate units—would natively perform the statistical operations that diffusion and generative AI models require. Early simulations suggest energy efficiency improvements of 10,000× on simple benchmarks compared to GPU-based approaches. Hybrid algorithms combining TSUs with compact neural networks on conventional hardware could deliver intermediate gains whilst establishing a pathway toward a fundamentally different compute paradigm.
Why This Matters Now
The quote’s urgency reflects a dawning recognition across technical and policy circles that energy is not a peripheral constraint but the central bottleneck determining AI’s future trajectory. The choice, as McCourt frames it, is stark: either invest in a radically new architecture, or accept that mass-market AI remains perpetually out of reach—a luxury good confined to the wealthy and powerful rather than a technology accessible to humanity.
This is not mere speculation or provocation. It is engineering analysis grounded in physics, economics, and historical precedent, articulated by someone with the technical depth to understand both the problem and the extraordinary difficulty of solving it.

Services
Global Advisors is different
We help clients to measurably improve strategic decision-making and the results they achieve through defining clearly prioritised choices, reducing uncertainty, winning hearts and minds and partnering to deliver.
Our difference is embodied in our team. Our values define us.
Corporate portfolio strategy
Define optimal business portfolios aligned with investor expectations
Business unit strategy
Define how to win against competitors
Reach full potential
Understand your business’ core, reach full potential and grow into optimal adjacencies
Deal advisory
M&A, due diligence, deal structuring, balance sheet optimisation
Global Advisors Digital Data Analytics
14 years of quantitative and data science experience
An enabler to delivering quantified strategy and accelerated implementation
Digital enablement, acceleration and data science
Leading-edge data science and digital skills
Experts in large data processing, analytics and data visualisation
Developers of digital proof-of-concepts
An accelerator for Global Advisors and our clients
Join Global Advisors
We hire and grow amazing people
Consultants join our firm based on a fit with our values, culture and vision. They believe in and are excited by our differentiated approach. They realise that working on our clients’ most important projects is a privilege. While the problems we solve are strategic to clients, consultants recognise that solutions primarily require hard work – rigorous and thorough analysis, partnering with client team members to overcome political and emotional obstacles, and a large investment in knowledge development and self-growth.
Get In Touch
16th Floor, The Forum, 2 Maude Street, Sandton, Johannesburg, South Africa
+27 11 461 6371