
ARTIFICIAL INTELLIGENCE

An AI-native strategy firm

Global Advisors: a consulting leader in defining quantified strategy, decreasing uncertainty, improving decisions and achieving measurable results.


A Different Kind of Partner in an AI World

AI-native strategy consulting

Experienced hires

We are hiring experienced top-tier strategy consultants

Quantified Strategy

Decreased uncertainty, improved decisions

Global Advisors is a leader in defining quantified strategies, decreasing uncertainty, improving decisions and achieving measurable results.

We specialise in providing highly analytical, data-driven recommendations in the face of significant uncertainty.

We utilise advanced predictive analytics to build robust strategies and enable our clients to make calculated decisions.

We support implementation of adaptive capability and capacity.

Our latest

Thoughts

Podcast – The Real AI Signal from Davos 2026

While the headlines from Davos were dominated by geopolitical conflict and debates on AGI timelines and asset bubbles, a different signal emerged from the noise. It wasn’t about if AI works, but how it is being ruthlessly integrated into the real economy.

In our latest podcast, we break down the “Diffusion Strategy” defining 2026.

3 Key Takeaways:

  1. China and the “Global South” are trying to leapfrog: While the West debates regulation, emerging economies are treating AI as essential infrastructure.
    • China has set a goal for 70% AI diffusion by 2027.
    • The UAE has mandated AI literacy in public schools across K-12.
    • Rwanda is using AI to quadruple its healthcare workforce.
  2. The Rise of the “Agentic Self”: We aren’t just using chatbots anymore; we are employing agents. Entrepreneur Steven Bartlett revealed he has established a “Head of Experimentation and Failure” to use AI to disrupt his own business before competitors do. Musician will.i.am argued that in an age of predictive machines, humans must cultivate their “agentic self” to handle the predictable, while remaining unpredictable themselves.
  3. Rewiring the Core: Uber’s CEO Dara Khosrowshahi noted the difference between an “AI veneer” and a fundamental rewire. It’s no longer about summarising meetings; it’s about autonomous agents resolving customer issues without scripts.

The Global Advisors Perspective: Don’t wait for AGI. The current generation of models is sufficient to drive massive value today. The winners will be those who control their “sovereign capabilities” – embedding their tacit knowledge into models they own.

Read our original perspective here – https://with.ga/w1bd5

Listen to the full breakdown here – https://with.ga/2vg0z


Strategy Tools

Fast Facts

Fast Fact: Great returns aren’t enough

Key insights

Great returns alone are not enough – top-line growth is just as critical.

In fact, S&P 500 investors rewarded high-growth companies more than high-ROIC companies over the past decade.

While the distinction was less clear on the JSE, what is clear is that getting a balance of growth and returns is critical.

Strong and consistent ROIC or RONA performers provide investors with a steady flow of discounted cash flows – without growth, effectively a fixed-income instrument.

Improvements in ROIC through margin improvements, efficiencies and working-capital optimisation provide point-in-time uplifts to share price.

Top-line growth presents a compounding mechanism – ROIC (and improvements) are compounded each year, leading to ongoing increases in share price.

However, without acceptable levels of ROIC, the benefits of compounding will be subdued and share price appreciation will be depressed – and when ROIC is below WACC value will be destroyed.

Maintaining high levels of growth is not as sustainable as maintaining high levels of ROIC – while both typically decline as industries mature, growth is usually more affected.

Getting the right balance between ROIC and growth is critical to optimising shareholder value.
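The interaction between ROIC, growth and WACC described above can be illustrated with the standard key value driver formula, Value = NOPAT × (1 − g/ROIC) / (WACC − g). The sketch below uses hypothetical figures purely for illustration; it is not drawn from the S&P 500 or JSE analysis referenced above.

```python
def firm_value(nopat: float, roic: float, growth: float, wacc: float) -> float:
    """Key value driver formula: value of a firm growing at a constant
    rate g, reinvesting g/ROIC of NOPAT each year, discounted at WACC."""
    if growth >= wacc:
        raise ValueError("growth must be below WACC for a finite value")
    reinvestment_rate = growth / roic
    return nopat * (1 - reinvestment_rate) / (wacc - growth)

# Hypothetical firm: NOPAT of 100, WACC of 10%
base = firm_value(nopat=100, roic=0.15, growth=0.03, wacc=0.10)         # ~1,143
high_growth = firm_value(nopat=100, roic=0.15, growth=0.06, wacc=0.10)  # 1,500
high_roic = firm_value(nopat=100, roic=0.25, growth=0.03, wacc=0.10)    # ~1,257

# When ROIC merely equals WACC, growth adds no value at all:
no_spread = firm_value(nopat=100, roic=0.10, growth=0.03, wacc=0.10)    # 1,000
zero_growth = firm_value(nopat=100, roic=0.10, growth=0.0, wacc=0.10)   # 1,000
```

Note how raising growth from 3% to 6% adds more value than raising ROIC from 15% to 25% in this example, yet with ROIC at WACC the growth contributes nothing – the balance the Fast Fact describes.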


Selected News

Term: AI scaffolding

“Scaffolding refers to the structured architecture and instructional techniques built around an AI model to enhance its reasoning, reliability, and capability.” – AI scaffolding

AI scaffolding is the structured architecture and tooling built around a large language model (LLM) to enable it to perform complex, goal-driven tasks with enhanced reasoning, reliability, and capability.[1] Rather than relying on a single prompt or query, scaffolding places an LLM within a control loop that includes memory systems, external tools, decision logic, and feedback mechanisms, allowing the model to observe its environment, call APIs or code, update its context, and iterate until goals are achieved.[1]

In essence, scaffolding bridges the critical gap between the capabilities of base models and production-ready systems. A standalone LLM lacks the architectural support needed to reliably complete multi-step tasks, interface with business systems, or adapt to domain-specific requirements.[1] Scaffolding augments the model’s bare capabilities by providing access to tools, domain data, and structured workflows that guide and extend its behaviour.

Core Components of AI Scaffolding

Effective scaffolding operates through several interconnected layers:

  • Planning and reasoning: Agents operate through defined reasoning and evaluation steps. Rather than acting immediately, scaffolding may prompt the model to plan or reflect before taking action, and to self-critique its outputs. Research demonstrates that allowing agents to plan and self-evaluate significantly improves problem-solving accuracy compared to action-only approaches.[1]
  • Tool integration: The LLM is wrapped in code that interprets its outputs as tool calls. When the model determines it needs external resources – such as a calculator, database query, API call, or web search – the scaffold safely executes that tool and returns results to the model for the next reasoning step.[1]
  • Memory systems: Scaffolding includes mechanisms for the agent to maintain and update context across multiple interactions, enabling it to build upon previous observations and decisions.[1]
  • Feedback and control: Robust agents include feedback loops and safeguards such as self-evaluation steps, human-in-the-loop checks, and policy enforcement. In enterprise settings, scaffolding adds logging, testing suites, and guardrails like content filters to ensure outputs remain controlled and auditable.[1]
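The four layers above can be sketched as a single control loop. This is a minimal illustration, not a production framework: `toy_model` is a hard-coded stand-in for a real model API call, and the tool registry and step budget are hypothetical.

```python
def toy_model(messages):
    """Stand-in for an LLM call. A real scaffold would call a model API;
    here we hard-code one tool call followed by a final answer."""
    if not any(m["role"] == "tool" for m in messages):
        return {"action": "tool", "name": "calculator", "args": {"expr": "6*7"}}
    return {"action": "final", "answer": "The result is 42."}

# Tool integration: a registry of callables the scaffold may execute
TOOLS = {"calculator": lambda args: str(eval(args["expr"], {"__builtins__": {}}))}

def run_agent(goal: str, max_steps: int = 5) -> str:
    memory = [{"role": "user", "content": goal}]      # memory system
    for _ in range(max_steps):                        # feedback/control loop
        decision = toy_model(memory)                  # planning/reasoning
        if decision["action"] == "final":
            return decision["answer"]
        tool = TOOLS[decision["name"]]                # tool integration
        observation = tool(decision["args"])
        memory.append({"role": "tool", "content": observation})
    raise RuntimeError("step budget exhausted")       # guardrail

print(run_agent("What is 6 times 7?"))  # prints: The result is 42.
```

The scaffold, not the model, owns the loop: it executes tools, writes observations back into memory, and enforces the step budget that keeps the agent controlled and auditable.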

Types of AI Scaffolding Techniques

AI scaffolding encompasses several distinct approaches, which can be combined to enhance model performance:

  • Tool access scaffolding: Granting models access to external tools such as code editors, web browsers, or specialised software significantly expands their problem-solving capabilities. For example, LLMs initially trained on finite datasets with fixed cut-off dates became substantially more capable when granted internet access.[2]
  • Agent loop scaffolding: This technique automates multi-step task completion by placing AI models in a loop with access to their own observations and actions, enabling them to self-generate each prompt needed to finish complex tasks. Systems like AutoGPT exemplify this approach.[2]
  • Multi-agent scaffolding: Multiple AI models collaborate on complex problems through dialogue, division of labour, or critique mechanisms. Research shows that extended networks of up to a thousand agents can coordinate to outperform individual models, with capability scaling predictably as networks grow larger.[2]
  • Procedural scaffolding: This approach builds a structured process in which the model generates outputs, checks them, and revises them iteratively, enforcing process discipline rather than relying on raw prompts alone.[3]
  • Semantic scaffolding: Using ontological frameworks and domain rules to validate outputs against formal relations, preventing deeper misunderstandings and moving AI closer to auditable, trustworthy reasoning.[3]
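Procedural scaffolding is the easiest of these to illustrate. The sketch below enforces a generate–check–revise cycle rather than accepting the model's first output; the checker rules and the mechanical reviser are hypothetical stand-ins for model- or ontology-driven validation.

```python
def generate(prompt: str) -> str:
    """Stand-in for a model call producing a draft answer."""
    return "draft: " + prompt.upper()

def check(output: str) -> list[str]:
    """Hypothetical validator: returns a list of rule violations."""
    issues = []
    if output.startswith("draft: "):
        issues.append("remove the 'draft:' prefix")
    if len(output) > 200:
        issues.append("answer too long")
    return issues

def revise(output: str, issues: list[str]) -> str:
    """Stand-in reviser: a real scaffold would feed the issues back
    to the model; here we apply the fixes mechanically."""
    if "remove the 'draft:' prefix" in issues:
        output = output.removeprefix("draft: ")
    return output[:200]

def procedural_scaffold(prompt: str, max_rounds: int = 3) -> str:
    output = generate(prompt)
    for _ in range(max_rounds):          # iterate until all checks pass
        issues = check(output)
        if not issues:
            return output
        output = revise(output, issues)
    raise RuntimeError("output failed validation after revisions")

print(procedural_scaffold("summarise the quarterly results"))
```

The process discipline lives in `procedural_scaffold`: no output is released until it clears the checks, and a bounded number of revision rounds prevents an endless loop.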

Practical Applications and Enterprise Use

Scaffolding is essential for operationalising LLMs in enterprise environments. Whether an agent is expected to generate structured outputs, interact with APIs, or solve problems through planning and iteration, its effectiveness depends on the scaffold that guides and extends its behaviour.[1] In sectors such as customer service, risk analysis, logistics, healthcare, and finance, scaffolding enables AI systems to maintain reliability and auditability in high-stakes contexts.[3]

A key advantage of scaffolding is that it improves accuracy whilst making AI reasoning more transparent. When a system reaches a conclusion, leaders can trace it back to formal relations in an ontology rather than relying solely on statistical inference, making the system trustworthy for critical applications.[3]

Scaffolding versus Model Scale

An important principle in modern AI development is that scaffolding often matters more than raw model scale. The future of AI – whether in homeland security, finance, healthcare, or other domains – will be defined not by the size of models but by the quality of the architectural frameworks surrounding them.[3] Hybrid architectures that embed statistical models within well-designed scaffolded systems deliver superior performance and reliability compared to simply scaling larger models without structural support.

Key Theorist: Stuart Russell and the Alignment Research Tradition

The conceptual foundations of AI scaffolding are deeply rooted in the work of Stuart Russell, a leading figure in artificial intelligence safety and alignment research. Russell, Professor of Computer Science at the University of California, Berkeley, and co-author of the seminal textbook Artificial Intelligence: A Modern Approach, has been instrumental in developing frameworks for ensuring AI systems remain controllable and aligned with human values as they become more capable.

Russell’s contributions to scaffolding theory emerge from his broader research agenda on AI safety and the control problem. In the early 2000s, as machine learning systems began to demonstrate increasing autonomy, Russell recognised that simply building more powerful models without corresponding advances in control architecture would create dangerous misalignment between AI capabilities and human oversight. His work emphasised that the architecture surrounding an AI system-not merely the model itself-determines whether that system can be safely deployed in high-stakes environments.

One of the most influential scaffolding-related techniques in this research tradition is iterated amplification, developed by alignment researchers at OpenAI. Iterated amplification is a form of scaffolding that uses multi-AI collaborations to solve increasingly complex problems whilst maintaining human oversight at each stage. In this approach, humans decompose complex tasks into simpler subtasks that AI systems solve, then humans review and synthesise these solutions. Over time, humans operate at progressively higher levels of abstraction whilst AI systems assume responsibility for more of the process. This iterative cycle improves model capabilities whilst preserving human auditability and control – a principle directly aligned with scaffolding’s core objective.[2]

Russell’s broader philosophical stance is that AI safety and capability enhancement are not opposing forces but complementary objectives. Scaffolding embodies this principle: by building structured architectures around models, developers simultaneously enhance capability (through tool access, planning, and feedback loops) and improve safety (through auditability, human-in-the-loop checks, and formal validation against domain rules). Russell’s insistence that AI systems must remain interpretable and auditable has directly influenced how modern scaffolding frameworks incorporate semantic validation, ontological constraints, and transparent reasoning pathways.

Throughout his career, Russell has advocated for what he terms “beneficial AI” – systems designed from inception to be controllable, transparent, and aligned with human values. Scaffolding represents a practical instantiation of this vision. Rather than hoping that larger models will somehow become more trustworthy, Russell’s framework suggests that intentional architectural design – the very essence of scaffolding – is the path to AI systems that are simultaneously more capable and more reliable.

Russell’s influence extends beyond theoretical work. His research group at Berkeley has contributed to developing practical frameworks for AI governance, model evaluation, and safety testing that directly inform how organisations implement scaffolding in production environments. His emphasis on formal methods, constraint satisfaction, and human-AI collaboration has shaped industry standards for building enterprise-grade AI systems.

References

1. https://zbrain.ai/agent-scaffolding/

2. https://blog.bluedot.org/p/what-is-ai-scaffolding

3. https://www.cio.com/article/4076515/beyond-ai-prompts-why-scaffolding-matters-more-than-scale.html

4. https://www.godofprompt.ai/blog/what-is-prompt-scaffolding

5. https://kpcrossacademy.ua.edu/scaffolding-ai-as-a-learning-collaborator-integrating-artificial-intelligence-in-college-classes/

6. https://www.tandfonline.com/doi/full/10.1080/10494820.2025.2470319




Services

Global Advisors is different

We help clients to measurably improve strategic decision-making and the results they achieve through defining clearly prioritised choices, reducing uncertainty, winning hearts and minds and partnering to deliver.

Our difference is embodied in our team. Our values define us.

Corporate portfolio strategy

Define optimal business portfolios aligned with investor expectations

BUSINESS UNIT STRATEGY

Define how to win against competitors

Reach full potential

Understand your business’ core, reach full potential and grow into optimal adjacencies

Deal advisory

M&A, due diligence, deal structuring, balance sheet optimisation

Global Advisors Digital Data Analytics

14 years of quantitative and data science experience

An enabler to delivering quantified strategy and accelerated implementation

Digital enablement, acceleration and data science

Leading-edge data science and digital skills

Experts in large data processing, analytics and data visualisation

Developers of digital proof-of-concepts

An accelerator for Global Advisors and our clients

Join Global Advisors

We hire and grow amazing people

Consultants join our firm based on a fit with our values, culture and vision. They believe in and are excited by our differentiated approach. They realise that working on our clients’ most important projects is a privilege. While the problems we solve are strategic to clients, consultants recognise that solutions primarily require hard work – rigorous and thorough analysis, partnering with client team members to overcome political and emotional obstacles, and a large investment in knowledge development and self-growth.

Get In Touch

16th Floor, The Forum, 2 Maude Street, Sandton, Johannesburg, South Africa
+27 11 461 6371

Global Advisors | Quantified Strategy Consulting