
ARTIFICIAL INTELLIGENCE

An AI-native strategy firm

Global Advisors: a consulting leader in defining quantified strategy, decreasing uncertainty, improving decisions and achieving measurable results.


A Different Kind of Partner in an AI World

AI-native strategy consulting

Experienced hires

We are hiring experienced top-tier strategy consultants

Quantified Strategy

Decreased uncertainty, improved decisions

Global Advisors is a leader in defining quantified strategies, decreasing uncertainty, improving decisions and achieving measurable results.

We specialise in providing highly analytical, data-driven recommendations in the face of significant uncertainty.

We utilise advanced predictive analytics to build robust strategies and enable our clients to make calculated decisions.

We support the implementation of adaptive capability and capacity.

Our latest

Thoughts

Podcast – The Real AI Signal from Davos 2026


While the headlines from Davos were dominated by geopolitical conflict and debates over AGI timelines and asset bubbles, a different signal emerged from the noise. It wasn’t about whether AI works, but about how it is being ruthlessly integrated into the real economy.

In our latest podcast, we break down the “Diffusion Strategy” defining 2026.

3 Key Takeaways:

  1. China and the “Global South” are trying to leapfrog: While the West debates regulation, emerging economies are treating AI as essential infrastructure.
    • China has set a goal for 70% AI diffusion by 2027.
    • The UAE has mandated AI literacy in public schools from K-12.
    • Rwanda is using AI to quadruple its healthcare workforce.
  2. The Rise of the “Agentic Self”: We aren’t just using chatbots anymore; we are employing agents. Entrepreneur Steven Bartlett revealed he has appointed a “Head of Experimentation and Failure” tasked with using AI to disrupt his own business before competitors do. Musician will.i.am argued that in an age of predictive machines, humans must cultivate their “agentic self” to handle the predictable, while remaining unpredictable themselves.
  3. Rewiring the Core: Uber’s CEO Dara Khosrowshahi noted the difference between an “AI veneer” and a fundamental rewire. It’s no longer about summarising meetings; it’s about autonomous agents resolving customer issues without scripts.

The Global Advisors Perspective: Don’t wait for AGI. The current generation of models is sufficient to drive massive value today. The winners will be those who control their “sovereign capabilities” – embedding their tacit knowledge into models they own.

Read our original perspective here – https://with.ga/w1bd5

Listen to the full breakdown here – https://with.ga/2vg0z


Strategy Tools

PODCAST: Effective Transfer Pricing


Our Spotify podcast discusses how to get transfer pricing right.

We discuss effective transfer pricing within organisations, highlighting the prevalent challenges and proposing solutions. The core issue is that poorly implemented internal pricing leads to suboptimal economic decisions, resource-allocation problems and interdepartmental conflict. The hosts advocate market-based pricing over cost recovery, emphasising the importance of clear price signals for efficient resource allocation and accurate decision-making. They stress the need for service level agreements, fair cost allocation and a comprehensive process to manage the political and emotional aspects of internal pricing, ultimately aiming for improved organisational performance and profitability. The podcast includes case studies illustrating successful implementations and the authors’ expertise in this field.
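
To make the price-signal argument concrete, here is a minimal sketch in Python. All figures are hypothetical assumptions for illustration, not numbers from the podcast: an internal service with a marginal cost of 60 per unit carries allocated overhead that pushes its full-cost price to 110, while an equivalent external service trades at 100.

```python
# Toy comparison of transfer-pricing rules. All figures are
# hypothetical assumptions, not taken from the podcast.

MARGINAL_COST = 60.0   # internal supplier's variable cost per unit
FULL_COST = 110.0      # variable cost plus allocated fixed overhead
MARKET_PRICE = 100.0   # price of the equivalent external service
REVENUE = 150.0        # downstream revenue per unit of input used

def group_profit(transfer_price: float) -> float:
    """Group profit per unit, given the price the buying division sees.

    The buyer sources internally only while the internal price beats
    the external market. Fixed overheads are sunk for the group either
    way, so only marginal cost should drive the sourcing decision.
    """
    buys_internally = transfer_price <= MARKET_PRICE
    cost_to_group = MARGINAL_COST if buys_internally else MARKET_PRICE
    return REVENUE - cost_to_group

print(group_profit(FULL_COST))     # 50.0: cost-recovery pricing pushes the buyer outside
print(group_profit(MARKET_PRICE))  # 90.0: market-based pricing keeps the work in-house
```

The cost-recovery price sends the wrong signal: the buying division rationally outsources at 100, and the group forgoes 40 of profit per unit that internal supply at a marginal cost of 60 would have captured.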

Read more from the original article.


Fast Facts

Fast Fact: Great returns aren’t enough


Key insights

It’s not enough simply to have great returns – top-line growth is just as critical.

In fact, S&P 500 investors rewarded high-growth companies more than high-ROIC companies over the past decade.

While the distinction was less pronounced on the JSE, it is clear that striking the right balance between growth and returns is critical.

Strong and consistent ROIC or RONA performers provide investors with a steady flow of discounted cash flows – without growth, effectively a fixed-income instrument.

Improvements in ROIC through margin improvements, efficiencies and working-capital optimisation provide point-in-time uplifts to share price.

Top-line growth provides a compounding mechanism – ROIC (and improvements to it) compounds each year, leading to ongoing increases in share price.

However, without acceptable levels of ROIC, the benefits of compounding will be subdued and share-price appreciation will be depressed – and when ROIC is below WACC, value is destroyed.

Maintaining high levels of growth is not as sustainable as maintaining high levels of ROIC – while both typically decline as industries mature, growth is usually more affected.

Getting the right balance between ROIC and growth is critical to optimising shareholder value.
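
These dynamics can be made concrete with the standard value-driver formula, value = NOPAT × (1 − g/ROIC) / (WACC − g), which values a firm that must reinvest a share g/ROIC of each year’s profit to fund growth. A minimal sketch, with purely illustrative figures:

```python
# A minimal sketch of the standard value-driver formula:
#   value = NOPAT * (1 - g / ROIC) / (WACC - g)
# All figures below are illustrative assumptions.

def enterprise_value(nopat: float, roic: float, growth: float, wacc: float) -> float:
    """Value of a growing perpetuity where growth is funded by
    reinvesting a fraction (growth / roic) of each year's NOPAT."""
    assert growth < wacc, "the formula only holds for g < WACC"
    reinvestment_rate = growth / roic
    return nopat * (1 - reinvestment_rate) / (wacc - growth)

NOPAT, WACC = 100.0, 0.10
print(enterprise_value(NOPAT, roic=0.25, growth=0.00, wacc=WACC))  # 1000: no growth, a fixed-income-like stream
print(enterprise_value(NOPAT, roic=0.25, growth=0.06, wacc=WACC))  # 1900: growth compounds a high ROIC
print(enterprise_value(NOPAT, roic=0.08, growth=0.06, wacc=WACC))  # 625: growing while ROIC < WACC destroys value
```

With ROIC above WACC, adding growth nearly doubles value; with ROIC below WACC, the same growth leaves the firm worth less than the no-growth perpetuity.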


Selected News

Quote: Professor Hannah Fry – University of Cambridge


“Humans are not very good at exponentials. And right now, at this moment, we are standing right on the bend of the curve. AGI is not a distant thought experiment anymore.” – Professor Hannah Fry, University of Cambridge

The quote comes at the end of a wide-ranging conversation between applied mathematician and broadcaster Professor Hannah Fry and DeepMind co-founder Shane Legg, recorded for the “Google DeepMind: The Podcast” series in late 2025. Fry is reflecting on Legg’s decades-long insistence that artificial general intelligence would arrive much sooner than most experts expected, and on his argument that its impact will be structurally comparable to the Industrial Revolution: a technology that reshapes work, wealth, and the basic organisation of society rather than just adding another digital tool. Her remark that “humans are not very good at exponentials” is a pointed reminder of how easily people misread compounding processes, from pandemics to technological progress, and therefore underestimate how quickly “next decade” scenarios can become “this quarter” realities.

Context of the quote

Fry’s line follows a discussion in which Legg lays out a stepwise picture of AI progress: from today’s uneven but impressive systems, through “minimal AGI” that can reliably perform the full range of ordinary human cognitive tasks, to “full AGI” capable of the most exceptional creative and scientific feats, and then on to artificial superintelligence that eclipses human capability in most domains. Throughout, Legg stresses that current models already exceed humans in language coverage, encyclopaedic knowledge and some kinds of problem solving, while still failing at basic visual reasoning, continual learning, and robust common sense. The trajectory he sketches is not a gentle slope but a sharpening curve, driven by scaling laws, data, architectures and hardware; Fry’s “bend of the curve” image captures the moment when such a curve stops looking linear to human intuition and starts to feel suddenly, uncomfortably steep.

That curve is not just about raw capability but about diffusion into the economy. Legg argues that over the next few years, AI will move from being a helpful assistant to doing a growing share of economically valuable work – starting with software engineering and other high-paid cognitive roles that can be done entirely through a laptop. He anticipates that tasks once requiring a hundred engineers might soon be done by a small team amplified by advanced AI tools, with similarly uneven but profound effects across law, finance, research, and other knowledge professions. By the time Fry delivers her closing reflection, the conversation has moved from technical definitions to questions of social contract: how to design a post-AGI economy, how to distribute the gains from machine intelligence, and how to manage the transition period in which disruption and opportunity coexist.

Hannah Fry: person and perspective

Hannah Fry is a professor in the mathematics of cities who has built a public career explaining complex systems – epidemics, finance, urban dynamics and now AI – to broad audiences. Her training in applied mathematics and complexity science has made her acutely aware of how exponential processes play out in the real world, from contagion curves during COVID-19 to the compounding effect of small percentage gains in algorithmic performance and hardware efficiency. She has repeatedly highlighted the cognitive bias that leads people to underreact when growth is slow and overreact when it becomes visibly explosive, a theme she explicitly connects in this podcast to the early days of the pandemic, when warnings about exponential infection growth were largely ignored while life carried on as normal.

In the AGI conversation, Fry positions herself as an interpreter between technical insiders and a lay audience that is already experiencing AI in everyday tools but may not yet grasp the systemic implications. Her remark that the general public may, in some sense, “get it” better than domain specialists echoes Legg’s observation that non-experts sometimes see current systems as already effectively “intelligent,” while many professionals in affected fields downplay the relevance of AI to their own work. When she says “AGI is not a distant thought experiment anymore,” she is distilling Legg’s timelines – his long-standing 50/50 prediction of minimal AGI by 2028, followed by full AGI within a decade – into a single, accessible warning that the window for slow institutional adaptation is closing.

Meaning of “not very good at exponentials”

The specific phrase “humans are not very good at exponentials” draws on a familiar insight from behavioural economics and cognitive psychology: people routinely misjudge exponential growth, treating it as if it were linear. During the COVID-19 pandemic, this manifested in the gap between early warnings about exponential case growth and the public’s continued attendance at large events right up until visible crisis hit, an analogy Fry explicitly invokes in the episode. In technology, the same bias leads organisations to plan as if next year will look like this year plus a small increment, even when underlying drivers – compute, algorithmic innovation, investment, data availability – are compounding at rates that double capabilities over very short horizons.
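
As a toy illustration of this bias (the growth rate is a purely hypothetical assumption), compare a linear extrapolation of one period’s change with the same rate left to compound:

```python
# Toy illustration of the linear-projection bias. The 50% growth
# rate per period is a purely hypothetical assumption.

RATE = 0.5    # growth per period
PERIODS = 10
start = 1.0

# "Next year looks like this year plus one increment", repeated:
linear_guess = start + (start * RATE) * PERIODS

# The same rate, allowed to compound:
compounded = start * (1 + RATE) ** PERIODS

print(f"linear guess after {PERIODS} periods: {linear_guess:.1f}")  # 6.0
print(f"compounded after {PERIODS} periods: {compounded:.1f}")      # 57.7
```

The linear intuition undershoots by an order of magnitude after ten periods – exactly the gap between “this year plus a small increment” planning and compounding drivers.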

Fry’s “bend of the curve” language marks the stage where incremental improvements accumulate until qualitative change becomes hard to ignore: AI systems not only answering questions but autonomously writing production code, conducting literature reviews, proposing experiments, or acting as agents in the world. At that bend, the lag between capability and governance becomes a central concern; Legg emphasises that there will not be enough time for leisurely consensus-building once AGI is fully realised, hence his call for every academic discipline and sector – law, education, medicine, city planning, economics – to begin serious scenario work now. Fry’s closing comment translates that call into a general admonition: exponential technologies demand anticipatory thinking, not reactive crisis management.

Leading theorists behind the ideas

The intellectual backdrop to Fry’s quote and Legg’s perspectives on AGI blends several strands of work in AI theory, safety and the study of technological revolutions.

  • Shane Legg and Ben Goertzel helped revive and popularise the term “artificial general intelligence” in the early 2000s to distinguish systems aimed at broad, human-like cognitive competence from “narrow AI” optimised for specific tasks. Legg’s own academic work, influenced by his supervisor Marcus Hutter, explores formal definitions of universal intelligence and the conditions under which machine systems could match or exceed human problem-solving across many domains.

  • I. J. Good introduced the “intelligence explosion” hypothesis in 1965, arguing that a sufficiently advanced machine intelligence capable of improving its own design could trigger a runaway feedback loop of ever-greater capability. This notion of recursive self-improvement underpins much of the contemporary discourse about AI timelines and the risks associated with crossing particular capability thresholds.

  • Eliezer Yudkowsky developed thought experiments and early arguments about AGI’s existential risks, emphasising that misaligned superintelligence could be catastrophically dangerous even if human developers never intended harm. His writing helped seed the modern AI safety movement and influenced researchers and entrepreneurs who later entered mainstream organisations.

  • Nick Bostrom synthesised and formalised many of these ideas in “Superintelligence: Paths, Dangers, Strategies,” providing widely cited scenarios in which AGI rapidly transitions into systems whose goals and optimisation power outstrip human control. Bostrom’s work is central to Legg’s concern with how to steer AGI safely once it surpasses human intelligence, especially around questions of alignment, control and long-term societal impact.

  • Geoffrey Hinton, Stuart Russell and other AI pioneers have added their own warnings in recent years: Hinton has drawn parallels between AI and other technologies whose potential harms were recognised only after wide deployment, while Russell has argued for a re-founding of AI as the science of beneficial machines explicitly designed to be uncertain about human preferences. Their perspectives reinforce Legg’s view that questions of ethics, interpretability and “System 2 safety” – ensuring that advanced systems can reason transparently about moral trade-offs – are not peripheral but central to responsible AGI development.

Together, these theorists frame AGI both as a continuation of a long scientific project to build thinking machines and as a discontinuity in human history whose effects will compound faster than our default intuitions allow. In that context, Fry’s quote reads less as a rhetorical flourish and more as a condensed thesis: exponential dynamics in intelligence technologies are colliding with human cognitive biases and institutional inertia, and the moment to treat AGI as a practical, near-term design problem rather than a speculative future is now.


 

"Humans are not very good at exponentials. And right now, at this moment, we are standing right on the bend of the curve. AGI is not a distant thought experiment anymore." - Quote: Professor Hannah Fry



Services

Global Advisors is different

We help clients measurably improve strategic decision-making and the results they achieve by defining clearly prioritised choices, reducing uncertainty, winning hearts and minds, and partnering to deliver.

Our difference is embodied in our team. Our values define us.

Corporate portfolio strategy

Define optimal business portfolios aligned with investor expectations

Business unit strategy

Define how to win against competitors

Reach full potential

Understand your business’ core, reach full potential and grow into optimal adjacencies

Deal advisory

M&A, due diligence, deal structuring, balance sheet optimisation

Global Advisors Digital Data Analytics

14 years of quantitative and data science experience

An enabler for delivering quantified strategy and accelerated implementation

Digital enablement, acceleration and data science

Leading-edge data science and digital skills

Experts in large data processing, analytics and data visualisation

Developers of digital proof-of-concepts

An accelerator for Global Advisors and our clients

Join Global Advisors

We hire and grow amazing people

Consultants join our firm based on a fit with our values, culture and vision. They believe in and are excited by our differentiated approach. They realise that working on our clients’ most important projects is a privilege. While the problems we solve are strategic to clients, consultants recognise that solutions primarily require hard work – rigorous and thorough analysis, partnering with client team members to overcome political and emotional obstacles, and a large investment in knowledge development and self-growth.

Get In Touch

16th Floor, The Forum, 2 Maude Street, Sandton, Johannesburg, South Africa
+27 11 461 6371

Global Advisors | Quantified Strategy Consulting