ARTIFICIAL INTELLIGENCE
An AI-native strategy firm
Global Advisors: a consulting leader in defining quantified strategy, decreasing uncertainty, improving decisions and achieving measurable results.
A Different Kind of Partner in an AI World
AI-native strategy
consulting
Experienced hires
We are hiring experienced top-tier strategy consultants
Quantified Strategy
Decreased uncertainty, improved decisions
Global Advisors is a leader in defining quantified strategies, decreasing uncertainty, improving decisions and achieving measurable results.
We specialise in providing highly analytical, data-driven recommendations in the face of significant uncertainty.
We utilise advanced predictive analytics to build robust strategies and enable our clients to make calculated decisions.
We support implementation of adaptive capability and capacity.
Our latest
Thoughts
Global Advisors’ Thoughts: Is insecurity behind that dysfunction?
By Marc Wilson
Marc is a partner at Global Advisors and based in Johannesburg, South Africa
Download this article at http://www.globaladvisors.biz/inc-feed/20170907/thoughts-is-insecurity-behind-that-dysfunction
We tend to characterise insecurity as what we see in overtly fragile, shy and awkward people. We think that their insecurity presents as lack of confidence. And often we associate it with under-achievement.
Sometimes we might be aware that insecurities can lie behind the -ias, -isms and the phobias. Body dysmorphia? Insecurity about attractiveness. Racism? Often the need to find security by claiming superiority – belonging to a group with power, a group you understand and whose acceptance you want. Homophobia? Often insecurity about one’s own sexuality, masculinity or femininity.
So it is often counter-intuitive when we discover that behind incredible success lies – insecurity! In fact, an article I once read described the successful elite of strategy consulting firms as typically “insecure over-achievers.”
Insecurity must be one of the most misunderstood drivers of dysfunction. Instead we see its related symptoms and react to those. “That woman is so overbearing. That guy is so aggressive! That girl is so self-absorbed. That guy is so competitive.” Even, “That guy is so arrogant.”
How is it that someone we might perceive as competitive, arrogant or overconfident might be insecure? Sometimes people overcompensate to hide a weakness or insecurity. Sometimes, in an effort to avoid feeling defensive about a perceived shortcoming, they go on the offensive – telling people they are the opposite, or even faking security.
Do we even know what insecurity is? The very need to…
Read the rest of “Is insecurity behind that dysfunction?” at http://www.globaladvisors.biz/inc-feed/20170907/thoughts-is-insecurity-behind-that-dysfunction
Strategy Tools
Strategy tools: Effective transfer pricing
So much has been written about transfer pricing. Yet it remains a bone of contention in almost every organisation. Transfer pricing is not merely a rational challenge – it often raises the emotions of internal service users and providers who argue regarding scope, quality, price and value.
We have found that effective transfer pricing relies on some fairly simple best practices and critical success factors.
Many organisations recover costs as a regular ‘below-the-line’ deduction from operating division income statements. In our experience, charge-out is almost always preferable: it moves internal value judgements and negotiation about delivery closer to the time of use.
In our experience, the role that internal pricing plays in these judgements – and the consequences of poor implementation – is typically not well understood.
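To make the contrast concrete, the sketch below compares an even ‘below-the-line’ allocation with usage-based charge-out for a shared service consumed by two divisions. All figures and division names are hypothetical, chosen only to illustrate the mechanics.

```python
# Hypothetical comparison of 'below-the-line' allocation vs usage-based
# charge-out for a shared internal service. All figures are illustrative.

SERVICE_COST = 1_000_000                              # annual cost of the service
usage = {"Division A": 8_000, "Division B": 2_000}    # units consumed per year

# (a) Below-the-line: cost split evenly, regardless of consumption
even_share = SERVICE_COST / len(usage)
allocation = {division: even_share for division in usage}

# (b) Charge-out: each division pays a per-unit transfer price
unit_price = SERVICE_COST / sum(usage.values())       # full-cost recovery price
charge_out = {division: units * unit_price for division, units in usage.items()}

for division in usage:
    print(f"{division}: allocation {allocation[division]:,.0f} "
          f"vs charge-out {charge_out[division]:,.0f}")
# Division A: allocation 500,000 vs charge-out 800,000
# Division B: allocation 500,000 vs charge-out 200,000
```

Under the even split, Division B subsidises Division A’s consumption and neither faces the true cost of a unit of service; charge-out surfaces that cost at the point of use, which is exactly where the value judgement and negotiation should happen.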
Results of poor transfer pricing implementation
Sub-optimal economic use decisions
Where costs / prices are higher than they should be, buyers pass this on as an inflated cost to their customers, experience margin squeeze, or utilise less of the service than they might have.
Strategically this can lead to incorrect decisions regarding the provision of services to the market and loss of market share.
Where costs / prices are lower than they should be, this can lead to overuse of a product or service and poor cost recovery from external customers.
Strategically this can result in the over-promotion and sale of products and services that are achieving lower margins than thought, or that might even be making losses.
Sub-optimal investment and resourcing decisions
Incorrect pricing can lead to over- or under-investment in capacity and product or service quality. Further, resourcing decisions will be flawed if the price signal to the supplier is wrong.
Political and emotional argument
Where buyers are unable to obtain assurance that an internal price is correct, there is typically resentment regarding the cost of the internal product or service and the sheltered position that employees of the internal service provider occupy – in the buyer’s eyes, free from commercial pressures.
Buyers and suppliers typically also argue regarding the quality of the service or product relative to the price paid.
Suppliers may react to criticism by claiming their product or service is strategic in nature and disputing that it is available in external markets.
Poor product / service quality
Poor price signals result in a lack of comparable product and service quality benchmarks. This can lead to ‘gold-plating’ or poor-quality product and service provision.
Read more at https://globaladvisors.biz/2021/01/06/strategy-tools-effective-transfer-pricing/
Fast Facts
Fast Fact: The rate of technology adoption exploded in the 1990s
The 1990s were an inflection point in the adoption of new technologies. While radio showed fast adoption in the 1920s, new technologies introduced post-2010 reached penetration of more than 30% of the United States population within three years of launch. PCs...
Selected News
Quote: Ilya Sutskever – Safe Superintelligence
“Is the belief really, ‘Oh, it’s so big, but if you had 100x more, everything would be so different?’ It would be different, for sure. But is the belief that if you just 100x the scale, everything would be transformed? I don’t think that’s true. So it’s back to the age of research again, just with big computers.” – Ilya Sutskever – Safe Superintelligence
Ilya Sutskever stands as one of the most influential figures in modern artificial intelligence—a scientist whose work has fundamentally shaped the trajectory of deep learning over the past decade. As co-author of the seminal 2012 AlexNet paper, he helped catalyse the deep learning revolution that transformed machine vision and launched the contemporary AI era. His influence extends through his role as Chief Scientist at OpenAI, where he played a pivotal part in developing GPT-2 and GPT-3, the models that established large-scale language model pre-training as the dominant paradigm in AI research.
In mid-2024, Sutskever departed OpenAI and co-founded Safe Superintelligence Inc. (SSI) alongside Daniel Gross and Daniel Levy, positioning the company as the world’s “first straight-shot SSI lab” – an organisation with a single focus: developing safe superintelligence without distraction from product development or revenue generation. The company has since raised $3 billion and reached a $32 billion valuation, reflecting investor confidence in Sutskever’s strategic vision and reputation.
The Context: The Exhaustion of Scaling
Sutskever’s quoted observation emerges from a moment of genuine inflection in AI development. For roughly five years—from 2020 to 2025—the AI industry operated under what he terms the “age of scaling.” This era was defined by a simple, powerful insight: that scaling pre-training data, computational resources, and model parameters yielded predictable improvements in model performance. Organisations could invest capital with low perceived risk, knowing that more compute plus more data plus larger models would reliably produce measurable gains.
This scaling paradigm was extraordinarily productive. It yielded GPT-3, GPT-4, and an entire generation of frontier models that demonstrated capabilities that astonished both researchers and the public. The logic was elegant: if you wanted better AI, you simply scaled the recipe. Sutskever himself was instrumental in validating this approach. The word “scaling” became conceptually magnetic, drawing resources, attention, and organisational focus toward a single axis of improvement.
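The “predictable improvements” of that era were often summarised by empirical scaling laws. One well-known form from the literature – the Chinchilla fit of Hoffmann et al. (2022), shown here as an illustration rather than as Sutskever’s own formulation – expresses pre-training loss as a power law in parameter count N and training tokens D:

$$L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}$$

Here E is an irreducible loss floor, and the fitted exponents (roughly α ≈ 0.34 and β ≈ 0.28) imply smooth, diminishing but dependable returns to scale – exactly the low-risk investment logic described above.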
Yet by 2024–2025, that era began showing clear signs of exhaustion. Data is finite: the supply of high-quality training material on the internet is bounded, and organisations are rapidly approaching meaningful constraints on pre-training data. Computational resources, whilst vast, are not unlimited, and the marginal economic returns on compute investment have become less certain. Most critically, the empirical question has shifted: if current frontier labs already have access to extraordinary computational resources, would 100 times more compute actually produce a qualitative transformation in capabilities, or merely incremental improvement?
Sutskever’s answer is direct: incremental, not transformative. This reframing is consequential because it redefines where the bottleneck actually lies. The constraint is no longer the ability to purchase more GPUs or accumulate more data. The constraint is ideas—novel technical approaches, new training methodologies, fundamentally different recipes for building AI systems.
The Jaggedness Problem: Theory Meeting Reality
One critical observation animates Sutskever’s thinking: a profound disconnect between benchmark performance and real-world robustness. Current models achieve superhuman performance on carefully constructed evaluation tasks—yet in deployment, they exhibit what Sutskever calls “jagged” behaviour. They repeat errors, introduce new bugs whilst fixing old ones, and cycle between mistakes even when given clear corrective feedback.
This apparent paradox suggests something deeper than mere data or compute insufficiency. It points to inadequate generalisation—the inability to transfer learning from narrow, benchmark-optimised domains into the messy complexity of real-world application. Sutskever frames this through an analogy: a competitive programmer who practises 10,000 hours on competition problems will be highly skilled within that narrow domain but often fails to transfer that knowledge flexibly to broader engineering challenges. Current models, in his assessment, resemble that hyper-specialised competitor rather than the flexible, adaptive learner.
The Core Insight: Generalisation Over Scale
The central thesis animating Sutskever’s work at SSI—and implicit in his quote—is that human-like generalisation and learning efficiency represent a fundamentally different ML principle than scaling, one that has not yet been discovered or operationalised within contemporary AI systems.
Humans learn with orders of magnitude less data than large models yet generalise far more robustly to novel contexts. A teenager learns to drive in roughly ten hours of practice; current AI systems struggle to acquire equivalent robustness with vastly more training data. This is not because humans possess specialised evolutionary priors for driving (a recent activity that evolution could not have optimised for); rather, it suggests humans employ a more general-purpose learning principle that contemporary AI has not yet captured.
Sutskever hypothesises that this principle is connected to what he terms “value functions”—internal mechanisms akin to emotions that provide continuous, intermediate feedback on actions and states, enabling more efficient learning than end-of-trajectory reward signals alone. Evolution appears to have hard-coded robust value functions—emotional and evaluative systems—that make humans viable, adaptive agents across radically different environments. Whether an equivalent principle can be extracted purely from pre-training data, rather than built into learning architecture, remains uncertain.
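To see why intermediate feedback matters, consider the textbook contrast between learning from a single end-of-trajectory reward (Monte Carlo) and learning from a value function that supplies a target at every step (temporal-difference learning). The sketch below is standard reinforcement-learning pedagogy, not a description of SSI’s methods, and all values in it are illustrative.

```python
# Textbook sketch: end-of-trajectory (Monte Carlo) updates vs per-step
# temporal-difference updates driven by a learned value function.
# Standard RL pedagogy; not a description of SSI's actual methods.

N_STATES, GAMMA, ALPHA = 5, 0.9, 0.1
TRAJ = list(range(N_STATES))        # a simple chain: 0 -> 1 -> ... -> 4
TERMINAL_REWARD = 1.0               # the only external reward, at the end

V_mc = [0.0] * N_STATES             # learned from the terminal reward alone
V_td = [0.0] * N_STATES             # learned from per-step bootstrapped targets

for _ in range(500):
    # Monte Carlo: every state waits for the single end-of-episode signal.
    for t, s in enumerate(TRAJ):
        ret = GAMMA ** (len(TRAJ) - 1 - t) * TERMINAL_REWARD
        V_mc[s] += ALPHA * (ret - V_mc[s])

    # TD(0): the value function supplies an intermediate target at every
    # step -- the role Sutskever's "value functions as emotions" analogy
    # points to.
    for t in range(len(TRAJ) - 1):
        s, s_next = TRAJ[t], TRAJ[t + 1]
        V_td[s] += ALPHA * (GAMMA * V_td[s_next] - V_td[s])
    V_td[TRAJ[-1]] += ALPHA * (TERMINAL_REWARD - V_td[TRAJ[-1]])

print([round(v, 2) for v in V_mc])  # ~[0.66, 0.73, 0.81, 0.9, 1.0]
print([round(v, 2) for v in V_td])  # converges to the same values
```

Both estimators converge on this toy chain; the structural difference is that the TD learner receives a learning signal at every transition rather than only at the end – the property that makes value functions a candidate mechanism for more sample-efficient learning.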
The Leading Theorists and Related Work
Yann LeCun and Data Efficiency
Yann LeCun, Meta’s Chief AI Scientist and a pioneer of deep learning, has long emphasised the importance of learning efficiency and the role of what he terms “world models” in understanding how agents learn causal structure from limited data. His work highlights that human vision achieves remarkable robustness despite scarce developmental data – children recognise cars after seeing far fewer exemplars than AI systems require – suggesting that the brain employs inductive biases or learning principles that current architectures lack.
Geoffrey Hinton and Neuroscience-Inspired AI
Geoffrey Hinton, winner of the 2024 Nobel Prize in Physics for his work on deep learning, has articulated concerns about AI safety and expressed support for Sutskever’s emphasis on fundamentally rethinking how AI systems learn and align. Hinton’s career-long emphasis on biologically plausible learning mechanisms—from Boltzmann machines to capsule networks—reflects a conviction that important principles for efficient learning remain undiscovered and that neuroscience offers crucial guidance.
Stuart Russell and Alignment Through Uncertainty
Stuart Russell, UC Berkeley’s leading AI safety researcher, has emphasised that robust AI alignment requires systems that remain genuinely uncertain about human values and continue learning from interaction, rather than attempting to encode fixed objectives. This aligns with Sutskever’s thesis that safe superintelligence requires continual learning in deployment rather than monolithic pre-training followed by fixed RL optimisation.
Demis Hassabis and Continual Learning
Demis Hassabis, CEO of Google DeepMind, whose team developed AlphaGo, has invested significant research effort into systems that learn continually rather than through discrete training phases. This work recognises that biological intelligence fundamentally involves interaction with environments over time, generating diverse signals that guide learning – a principle SSI appears to be operationalising.
The Paradigm Shift: From Offline to Online Learning
Sutskever’s thinking reflects a broader intellectual shift visible across multiple frontiers of AI research. The dominant pre-training + RL framework assumes a clean separation: a model is trained offline on fixed data, then post-trained with reinforcement learning, then deployed. Increasingly, frontier researchers are questioning whether this separation reflects how learning should actually work.
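A toy numerical contrast makes the distinction vivid. In the sketch below – purely illustrative, with a deliberately trivial running-mean ‘model’ and no resemblance to any lab’s pipeline – an agent that freezes after offline training accumulates large errors once the deployment distribution shifts, while a continual learner adapts:

```python
# Toy contrast: "train offline, then freeze" vs continual online learning.
# The 'model' is a trivial running-mean predictor; everything here is an
# illustrative placeholder, not any lab's actual training pipeline.

def make_stream():
    """A deployment environment whose target drifts after training ends."""
    return [1.0] * 50 + [3.0] * 50            # distribution shift mid-stream

def frozen_after_training(stream, train_frac=0.5):
    cut = int(len(stream) * train_frac)
    estimate = sum(stream[:cut]) / cut         # offline training phase
    return [abs(x - estimate) for x in stream[cut:]]   # frozen in deployment

def continual(stream, lr=0.2):
    estimate, errors = 0.0, []
    for x in stream:                           # learning never stops
        errors.append(abs(x - estimate))
        estimate += lr * (x - estimate)        # online update from feedback
    return errors[len(stream) // 2:]           # errors over the deployment half

stream = make_stream()
print(sum(frozen_after_training(stream)))      # ~100: frozen model never adapts
print(sum(continual(stream)))                  # ~10: online learner tracks the shift
```

The asymmetry is the point: once deployment generates signals the offline pipeline cannot use, the frozen model’s error compounds, while the online learner converts each interaction into improvement.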
His articulation of an “age of research” signals a return to intellectual plurality and heterodox experimentation – the opposite of the monoculture that the scaling paradigm created. When everyone is racing to scale the same recipe, innovation becomes incremental. When new recipes are required, diversity of approach becomes an asset rather than a liability.
The Stakes and Implications
This reframing carries significant strategic implications. If the bottleneck is truly ideas rather than compute, then smaller, more cognitively coherent organisations with clear intellectual direction may outpace larger organisations constrained by product commitments, legacy systems, and organisational inertia. If the key innovation is a new training methodology—one that achieves human-like generalisation through different mechanisms—then the first organisation to discover and validate it may enjoy substantial competitive advantage, not through superior resources but through superior understanding.
Equally, this framing challenges the common assumption that AI capability is primarily a function of computational spend. If methodological innovation matters more than scale, the future of AI leadership becomes less a question of capital concentration and more a question of research insight—less about who can purchase the most GPUs, more about who can understand how learning actually works.
Sutskever’s quote thus represents not merely a rhetorical flourish but a fundamental reorientation of strategic thinking about AI development. The age of confident scaling is ending. The age of rigorous research into the principles of generalisation, sample efficiency, and robust learning has begun.

Services
Global Advisors is different
We help clients to measurably improve strategic decision-making and the results they achieve through defining clearly prioritised choices, reducing uncertainty, winning hearts and minds and partnering to deliver.
Our difference is embodied in our team. Our values define us.
Corporate portfolio strategy
Define optimal business portfolios aligned with investor expectations
BUSINESS UNIT STRATEGY
Define how to win against competitors
Reach full potential
Understand your business’ core, reach full potential and grow into optimal adjacencies
Deal advisory
M&A, due diligence, deal structuring, balance sheet optimisation
Global Advisors Digital Data Analytics
14 years of quantitative and data science experience
An enabler to delivering quantified strategy and accelerated implementation
Digital enablement, acceleration and data science
Leading-edge data science and digital skills
Experts in large data processing, analytics and data visualisation
Developers of digital proof-of-concepts
An accelerator for Global Advisors and our clients
Join Global Advisors
We hire and grow amazing people
Consultants join our firm based on a fit with our values, culture and vision. They believe in and are excited by our differentiated approach. They realise that working on our clients’ most important projects is a privilege. While the problems we solve are strategic to clients, consultants recognise that solutions primarily require hard work – rigorous and thorough analysis, partnering with client team members to overcome political and emotional obstacles, and a large investment in knowledge development and self-growth.
Get In Touch
16th Floor, The Forum, 2 Maude Street, Sandton, Johannesburg, South Africa
+27 11 461 6371
