ARTIFICIAL INTELLIGENCE
An AI-native strategy firm
Global Advisors: a consulting leader in defining quantified strategy, decreasing uncertainty, improving decisions, achieving measurable results.
A Different Kind of Partner in an AI World
AI-native strategy
consulting
Experienced hires
We are hiring experienced top-tier strategy consultants
Quantified Strategy
Decreased uncertainty, improved decisions
Global Advisors is a leader in defining quantified strategies, decreasing uncertainty, improving decisions and achieving measurable results.
We specialise in providing highly analytical, data-driven recommendations in the face of significant uncertainty.
We utilise advanced predictive analytics to build robust strategies and enable our clients to make calculated decisions.
We support implementation of adaptive capability and capacity.
Our latest
Thoughts
Global Advisors’ Thoughts: Outperforming through the downturn AND the cost of ignoring full potential
Press drew attention last year to a slew of JSE-listed companies whose share prices had collapsed over the past few years. Some were previous investor darlings. Analysis pointed to a toxic combination of decreasing earnings growth and increased leverage. While this might be a warning to investors of a company in trouble, what fundamentals drive this combination?
In our analysis, expansion driven by the need to compensate for poor performance in the core business is a typical driver of exactly this outcome.
This article was written in January 2020 but publication was delayed due to the outbreak of Covid-19. Five months after South Africa’s first case, we update our analysis and show that core-based companies outperformed diverse peers by 29% over the period.
Management should always seek to reach full potential in their core business. Expansion should target a clearly logical set of adjacencies to which the company can apply its capabilities through a repeatable business model.
In the article “Steinhoff, Tongaat, Omnia… Here’s the dead giveaway that you should have avoided these companies, says an asset manager,” (Business Insider SA, Jun 11, 2019) Helena Wasserman lists a number of Johannesburg Stock Exchange (JSE) listed shares that have plummeted in recent years.
In many cases these companies’ corresponding sectors have been declining. However, in most of the sectors there is at least one company that has outperformed the rest. What is it about these outperformers that distinguishes them from the rest?
The outperformers have typically shown strong financial performance – be that growth, ROE, ROA, RONA or asset turnover – and varying degrees of leverage. However, performance against these metrics is by no means consistent – see our analysis.
What is consistent is that the outperformers all show clearly delineated core businesses and ongoing growth towards full potential in these businesses alongside growth into clear adjacencies that protect, enhance and leverage the core. In some cases, the core may have been or is currently being redefined, typically through gradual, step-wise extension along logical adjacencies. Redefinition is particularly important in light of the digital transformation seen in many industries. The outperformers are very seldom diversified across unrelated business segments – although isolated examples such as Bidvest clearly exist in other sectors.
Analysis of the over- and underperformers in the sectors highlighted in the article shows that those following a clear core-based strategy have typically outperformed peers through the initial months of the downturn caused by the Covid-19 outbreak.
Strategy Tools
PODCAST: Effective Transfer Pricing
Our Spotify podcast discusses how to get transfer pricing right.
We discuss effective transfer pricing within organisations, highlighting the prevalent challenges and proposing solutions. The core issue is that poorly implemented internal pricing leads to suboptimal economic decisions, resource allocation problems, and interdepartmental conflict. The hosts advocate for market-based pricing over cost recovery, emphasising the importance of clear price signals for efficient resource allocation and accurate decision-making. They stress the need for service level agreements, fair cost allocation, and a comprehensive process to manage the political and emotional aspects of internal pricing, ultimately aiming for improved organisational performance and profitability. The podcast includes case studies illustrating successful implementations and the authors’ expertise in this field.
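By way of illustration (our own hypothetical numbers, not drawn from the podcast), a minimal sketch of how the transfer price drives a buying division’s sourcing decision:

```python
# Hypothetical illustration of the price-signal argument. The numbers are
# invented for this sketch and do not come from the podcast.

MARKET_PRICE = 100.0    # external vendors charge 100 per unit of the service
INTERNAL_COST = 120.0   # the internal provider's fully loaded cost per unit

def sourcing_decision(transfer_price: float, external_price: float = MARKET_PRICE) -> str:
    """The buying division simply compares the internal charge with the market."""
    return "buy internally" if transfer_price <= external_price else "buy externally"

# Cost-recovery pricing passes the provider's inefficiency on to the buyer,
# pushing work out of the organisation:
print(sourcing_decision(transfer_price=INTERNAL_COST))   # buy externally

# Market-based pricing keeps the buyer's decision economically sound and
# exposes the 20-per-unit gap as the provider's efficiency problem to fix:
print(sourcing_decision(transfer_price=MARKET_PRICE))    # buy internally
```

The point of the sketch is the signal, not the numbers: a market-based transfer price lets each division make the same decision it would make facing the open market, while surfacing any internal cost gap where it can actually be managed.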
Read more from the original article.

Fast Facts
Fast Fact: Great returns aren’t enough
Key insights
It’s not enough to just have great returns – top-line growth is just as critical.
In fact, S&P 500 investors rewarded high-growth companies more than high-ROIC companies over the past decade.
While the distinction was less pronounced on the JSE, it is clear that getting the balance of growth and returns right is critical.
Strong and consistent ROIC or RONA performers provide investors with a steady flow of discounted cash flows – without growth, effectively a fixed-income instrument.
Improvements in ROIC through margin improvements, efficiencies and working-capital optimisation provide point-in-time uplifts to share price.
Top-line growth provides a compounding mechanism – ROIC (and improvements to it) compounds each year, leading to ongoing increases in share price.
However, without acceptable levels of ROIC the benefits of compounding will be subdued and share price appreciation will be depressed – and when ROIC is below WACC, value is destroyed.
Maintaining high levels of growth is not as sustainable as maintaining high levels of ROIC – while both typically decline as industries mature, growth is usually more affected.
Getting the right balance between ROIC and growth is critical to optimising shareholder value.
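One standard way to make this balance explicit (the widely used key value driver formula, added here for illustration rather than taken from the Fast Fact) is:

```latex
% Key value driver formula: firm value V as a function of next-year NOPAT,
% the long-run growth rate g and ROIC, discounted at WACC (requires g < WACC).
V = \frac{\mathrm{NOPAT}_{1}\,\bigl(1 - g/\mathrm{ROIC}\bigr)}{\mathrm{WACC} - g}
```

When ROIC equals WACC, growth drops out entirely and V reduces to NOPAT divided by WACC; when ROIC exceeds WACC, higher growth raises V; and when ROIC is below WACC, higher growth destroys value – exactly the balance described above.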
Selected News
Quote: Ilya Sutskever – Safe Superintelligence
“Is the belief really, ‘Oh, it’s so big, but if you had 100x more, everything would be so different?’ It would be different, for sure. But is the belief that if you just 100x the scale, everything would be transformed? I don’t think that’s true. So it’s back to the age of research again, just with big computers.” – Ilya Sutskever – Safe Superintelligence
Ilya Sutskever stands as one of the most influential figures in modern artificial intelligence—a scientist whose work has fundamentally shaped the trajectory of deep learning over the past decade. As co-author of the seminal 2012 AlexNet paper, he helped catalyse the deep learning revolution that transformed machine vision and launched the contemporary AI era. His influence extends through his role as Chief Scientist at OpenAI, where he played a pivotal part in developing GPT-2 and GPT-3, the models that established large-scale language model pre-training as the dominant paradigm in AI research.
In mid-2024, Sutskever departed OpenAI and co-founded Safe Superintelligence Inc. (SSI) alongside Daniel Gross and Daniel Levy, positioning the company as the world’s “first straight-shot SSI lab”—an organisation with a single focus: developing safe superintelligence without distraction from product development or revenue generation. The company has since raised $3 billion and reached a $32 billion valuation, reflecting investor confidence in Sutskever’s strategic vision and reputation.
The Context: The Exhaustion of Scaling
Sutskever’s quoted observation emerges from a moment of genuine inflection in AI development. For roughly five years—from 2020 to 2025—the AI industry operated under what he terms the “age of scaling.” This era was defined by a simple, powerful insight: that scaling pre-training data, computational resources, and model parameters yielded predictable improvements in model performance. Organisations could invest capital with low perceived risk, knowing that more compute plus more data plus larger models would reliably produce measurable gains.
This scaling paradigm was extraordinarily productive. It yielded GPT-3, GPT-4, and an entire generation of frontier models that demonstrated capabilities that astonished both researchers and the public. The logic was elegant: if you wanted better AI, you simply scaled the recipe. Sutskever himself was instrumental in validating this approach. The word “scaling” became conceptually magnetic, drawing resources, attention, and organisational focus toward a single axis of improvement.
Yet by 2024–2025, that era began showing clear signs of exhaustion. Data is finite—the amount of high-quality training material available on the internet is not infinite, and organisations are rapidly approaching meaningful constraints on pre-training data supply. Computational resources, whilst vast, are not unlimited, and the marginal economic returns on compute investment have become less obvious. Most critically, the empirical question has shifted: if current frontier labs have access to extraordinary computational resources, would 100 times more compute actually produce a qualitative transformation in capabilities, or merely incremental improvement?
Sutskever’s answer is direct: incremental, not transformative. This reframing is consequential because it redefines where the bottleneck actually lies. The constraint is no longer the ability to purchase more GPUs or accumulate more data. The constraint is ideas—novel technical approaches, new training methodologies, fundamentally different recipes for building AI systems.
The Jaggedness Problem: Theory Meeting Reality
One critical observation animates Sutskever’s thinking: a profound disconnect between benchmark performance and real-world robustness. Current models achieve superhuman performance on carefully constructed evaluation tasks—yet in deployment, they exhibit what Sutskever calls “jagged” behaviour. They repeat errors, introduce new bugs whilst fixing old ones, and cycle between mistakes even when given clear corrective feedback.
This apparent paradox suggests something deeper than mere data or compute insufficiency. It points to inadequate generalisation—the inability to transfer learning from narrow, benchmark-optimised domains into the messy complexity of real-world application. Sutskever frames this through an analogy: a competitive programmer who practises 10,000 hours on competition problems will be highly skilled within that narrow domain but often fails to transfer that knowledge flexibly to broader engineering challenges. Current models, in his assessment, resemble that hyper-specialised competitor rather than the flexible, adaptive learner.
The Core Insight: Generalisation Over Scale
The central thesis animating Sutskever’s work at SSI—and implicit in his quote—is that human-like generalisation and learning efficiency represent a fundamentally different machine-learning principle from scaling, one that has not yet been discovered or operationalised within contemporary AI systems.
Humans learn with orders of magnitude less data than large models yet generalise far more robustly to novel contexts. A teenager learns to drive in roughly ten hours of practice; current AI systems struggle to acquire equivalent robustness with vastly more training data. This is not because humans possess specialised evolutionary priors for driving (a recent activity that evolution could not have optimised for); rather, it suggests humans employ a more general-purpose learning principle that contemporary AI has not yet captured.
Sutskever hypothesises that this principle is connected to what he terms “value functions”—internal mechanisms akin to emotions that provide continuous, intermediate feedback on actions and states, enabling more efficient learning than end-of-trajectory reward signals alone. Evolution appears to have hard-coded robust value functions—emotional and evaluative systems—that make humans viable, adaptive agents across radically different environments. Whether an equivalent principle can be extracted purely from pre-training data, rather than built into learning architecture, remains uncertain.
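A toy sketch can make the distinction concrete (our own illustration of temporal-difference-style feedback, not a description of SSI’s methods): with only an end-of-trajectory reward, every step of an episode shares one scalar outcome, whereas a learned value function converts that outcome into per-step feedback.

```python
# Toy contrast between a sparse end-of-trajectory reward and per-step
# value-function feedback. Entirely illustrative; not a description of SSI's work.

STEPS, GAMMA, LR = 5, 0.9, 0.5
V = [0.0] * (STEPS + 1)   # crude tabular value function over states 0..STEPS

# With only the terminal reward, all five steps share one scalar outcome.
# Learning a value function turns that outcome into per-step feedback:
for _ in range(200):                              # repeated experience
    for s in range(STEPS):                        # walk the trajectory
        r = 1.0 if s == STEPS - 1 else 0.0        # reward arrives only at the end
        td_error = r + GAMMA * V[s + 1] - V[s]    # intermediate evaluation signal
        V[s] += LR * td_error                     # each state gets its own feedback

print([round(v, 2) for v in V[:STEPS]])           # [0.66, 0.73, 0.81, 0.9, 1.0]
# Values rise toward the goal: every step now carries a continuous
# "how is it going?" signal instead of waiting for the episode's end.
```

The learned values play the role Sutskever assigns to emotions: a running evaluation of states that makes credit assignment far more sample-efficient than a single end-of-episode outcome.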
The Leading Theorists and Related Work
Yann LeCun and Data Efficiency
Yann LeCun, Meta’s Chief AI Scientist and a pioneer of deep learning, has long emphasised the importance of learning efficiency and the role of what he terms “world models” in understanding how agents learn causal structure from limited data. His work highlights that human vision achieves remarkable robustness despite the scarcity of developmental data—children recognise cars after seeing far fewer exemplars than AI systems require—suggesting that the brain employs inductive biases or learning principles that current architectures lack.
Geoffrey Hinton and Neuroscience-Inspired AI
Geoffrey Hinton, winner of the 2024 Nobel Prize in Physics for his work on deep learning, has articulated concerns about AI safety and expressed support for Sutskever’s emphasis on fundamentally rethinking how AI systems learn and align. Hinton’s career-long emphasis on biologically plausible learning mechanisms—from Boltzmann machines to capsule networks—reflects a conviction that important principles for efficient learning remain undiscovered and that neuroscience offers crucial guidance.
Stuart Russell and Alignment Through Uncertainty
Stuart Russell, UC Berkeley’s leading AI safety researcher, has emphasised that robust AI alignment requires systems that remain genuinely uncertain about human values and continue learning from interaction, rather than attempting to encode fixed objectives. This aligns with Sutskever’s thesis that safe superintelligence requires continual learning in deployment rather than monolithic pre-training followed by fixed RL optimisation.
Demis Hassabis and Continual Learning
Demis Hassabis, CEO of DeepMind and a co-developer of AlphaGo, has invested significant research effort into systems that learn continually rather than through discrete training phases. This work recognises that biological intelligence fundamentally involves interaction with environments over time, generating diverse signals that guide learning—a principle SSI appears to be operationalising.
The Paradigm Shift: From Offline to Online Learning
Sutskever’s thinking reflects a broader intellectual shift visible across multiple frontiers of AI research. The dominant pre-training + RL framework assumes a clean separation: a model is trained offline on fixed data, then post-trained with reinforcement learning, then deployed. Increasingly, frontier researchers are questioning whether this separation reflects how learning should actually work.
His articulation of the “age of research” signals a return to intellectual plurality and heterodox experimentation—the opposite of the monoculture that the scaling paradigm created. When everyone is racing to scale the same recipe, innovation becomes incremental. When new recipes are required, diversity of approach becomes an asset rather than a liability.
The Stakes and Implications
This reframing carries significant strategic implications. If the bottleneck is truly ideas rather than compute, then smaller, more cognitively coherent organisations with clear intellectual direction may outpace larger organisations constrained by product commitments, legacy systems, and organisational inertia. If the key innovation is a new training methodology—one that achieves human-like generalisation through different mechanisms—then the first organisation to discover and validate it may enjoy substantial competitive advantage, not through superior resources but through superior understanding.
Equally, this framing challenges the common assumption that AI capability is primarily a function of computational spend. If methodological innovation matters more than scale, the future of AI leadership becomes less a question of capital concentration and more a question of research insight—less about who can purchase the most GPUs, more about who can understand how learning actually works.
Sutskever’s quote thus represents not merely a rhetorical flourish but a fundamental reorientation of strategic thinking about AI development. The age of confident scaling is ending. The age of rigorous research into the principles of generalisation, sample efficiency, and robust learning has begun.

Polls
What determines your success?
We need your help!
We’re testing how people think about their successes and failures. It would be great if you would take 2 minutes to give a simple multiple choice answer and share the poll with your friends.
Take the poll – “What determines your success?”
Then read So You Think You’re Self Aware?
Services
Global Advisors is different
We help clients to measurably improve strategic decision-making and the results they achieve through defining clearly prioritised choices, reducing uncertainty, winning hearts and minds and partnering to deliver.
Our difference is embodied in our team. Our values define us.
Corporate portfolio strategy
Define optimal business portfolios aligned with investor expectations
BUSINESS UNIT STRATEGY
Define how to win against competitors
Reach full potential
Understand your business’ core, reach full potential and grow into optimal adjacencies
Deal advisory
M&A, due diligence, deal structuring, balance sheet optimisation
Global Advisors Digital Data Analytics
14 years of quantitative and data science experience
An enabler to delivering quantified strategy and accelerated implementation
Digital enablement, acceleration and data science
Leading-edge data science and digital skills
Experts in large data processing, analytics and data visualisation
Developers of digital proof-of-concepts
An accelerator for Global Advisors and our clients
Join Global Advisors
We hire and grow amazing people
Consultants join our firm based on a fit with our values, culture and vision. They believe in and are excited by our differentiated approach. They realise that working on our clients’ most important projects is a privilege. While the problems we solve are strategic to clients, consultants recognise that solutions primarily require hard work – rigorous and thorough analysis, partnering with client team members to overcome political and emotional obstacles, and a large investment in knowledge development and self-growth.
Get In Touch
16th Floor, The Forum, 2 Maude Street, Sandton, Johannesburg, South Africa
+27 11 461 6371
