ARTIFICIAL INTELLIGENCE
Global Advisors: an AI-native strategy firm and consulting leader in defining quantified strategy, decreasing uncertainty, improving decisions and achieving measurable results.
A Different Kind of Partner in an AI World
AI-native strategy consulting
Experienced hires
We are hiring experienced top-tier strategy consultants
Quantified Strategy
Decreased uncertainty, improved decisions
Global Advisors is a leader in defining quantified strategies, decreasing uncertainty, improving decisions and achieving measurable results.
We specialise in providing highly-analytical data-driven recommendations in the face of significant uncertainty.
We utilise advanced predictive analytics to build robust strategies and enable our clients to make calculated decisions.
We support implementation of adaptive capability and capacity.
Our latest
Thoughts
Global Advisors’ Thoughts: Leading a deliberate life
By Marc Wilson
Marc is a partner at Global Advisors and based in Johannesburg, South Africa
Download this article at https://globaladvisors.biz/blog/2018/06/26/leading-a-deliberate-life/.
Picket fences. Family of four. Management position.
Mid-life crisis. Meaning. Purpose.
Someone once said that, “At 18, I had all the answers. At 35, I realised I didn’t know the question.”
Serendipity has a lot going for it. Many people might sail through life taking what comes and enjoying the moment. Others might be open to chance and have nothing go right for them.
Some people might strive to achieve, realise rare successes and be bitterly unhappy. Others might be driven and enjoy incredible success and fulfilment.
Perhaps the majority of us become beholden to the momentum of our lives.
We might study, start a career, marry, buy a dream house, have children, send them to a top school. Those steps make up components of many of our dreams. They are steps that may define each subsequent choice. As I discussed this with a friend recently, he remarked that few of these steps had been the subject of deliberation in his life – increasingly they were the outcome of momentum. Each will shape every step he takes for the rest of his life. He would not have things any other way, but had he known then what he knows now, he might have been more deliberate about choice and consequence…
Read more at https://globaladvisors.biz/blog/2018/06/26/leading-a-deliberate-life/
Strategy Tools
PODCAST: Strategy Tools: Growth, Profit or Returns?
Our Spotify podcast explores the relationship between Return on Net Assets (RONA) and growth, arguing that both are essential for shareholder value creation. The hosts contend that focusing solely on one metric can be detrimental, and propose a framework for evaluating business portfolios based on their RONA and growth profiles. This approach involves plotting business units on a “market-cap curve” to identify value-accretive and value-destructive segments.
The podcast also addresses the impact of economic downturns on portfolio management, suggesting strategies for both offensive and defensive approaches. The core argument is that companies should aim to achieve a balance between RONA and growth, acknowledging that both are essential for long-term shareholder value creation.
Read more from the original article – https://globaladvisors.biz/2020/08/04/strategy-tools-growth-profit-or-returns/

Fast Facts
Fast Fact: The rate of technology adoption exploded in the 1990s
The 1990s were an inflection point in the adoption of new technologies. While radio showed fast adoption in the 1920s, new technologies introduced post 2010 had reached penetrations of more than 30% of the United States population within 3 years from launch. PCs...
Selected News
Term: Model density
“Model density in AI, particularly regarding LLMs, is a performance-efficiency metric defined as the ratio of a model’s effective capability (performance) to its total parameter size.” – Model density
Model density represents a fundamental shift in how we measure artificial intelligence performance, moving beyond raw computational power to assess how effectively a model utilises its parameters. Rather than simply counting the number of parameters in a neural network, model density quantifies the ratio of effective capability to total parameter count, revealing how intelligently a model has been trained and architected.[3]
The Core Concept
At its essence, model density answers a critical question: how much useful intelligence does each parameter contribute? This metric emerged from the recognition that newer models achieve superior performance with fewer parameters than their predecessors, suggesting that progress in large language models stems not merely from scaling size, but from improving architecture, training data quality, and algorithmic efficiency.[3]
The concept can be understood through what researchers call capability density, formally defined as the ratio of a model’s effective parameter count to its actual parameter count.[3] The effective parameter count is estimated by fitting scaling laws to existing models and determining how large a reference model would need to be to match current performance. When this ratio exceeds 1.0, it indicates that a model performs better than expected for its size – a hallmark of efficient design.
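The arithmetic behind capability density can be sketched as follows. This is a toy illustration, not the fitted scaling law from the cited research: the constants `a` and `alpha` are hypothetical placeholders, and a real estimate would fit them to a family of existing models.

```python
def effective_params(loss: float, a: float = 406.4, alpha: float = 0.34) -> float:
    """Invert a toy power-law loss curve L(N) = a * N**(-alpha) to estimate
    how many parameters a reference model would need to reach `loss`."""
    return (a / loss) ** (1.0 / alpha)


def capability_density(loss: float, actual_params: float,
                       a: float = 406.4, alpha: float = 0.34) -> float:
    """Effective parameter count divided by actual parameter count.
    Values above 1.0 mean the model outperforms the reference curve."""
    return effective_params(loss, a, alpha) / actual_params
```

Under this sketch, a model that matches the loss the reference curve predicts for a model twice its size has a capability density of 2.0.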
Information Compression and the “Great Squeeze”
Model density becomes particularly illuminating when examined through the lens of information compression. Modern large language models achieve remarkable density through what has been termed “the Great Squeeze”: the process of compressing vast training datasets into mathematical representations.[1]
Consider the Llama 3 family as a concrete example. During training, the model encountered approximately 15 trillion tokens of information. If stored in a traditional database, this would require 15 to 20 terabytes of raw data. The resulting Llama 3 70B model, however, contains only 70 billion parameters with a final weight of roughly 140 gigabytes, representing a 100:1 reduction in physical size.[1] This translates to a squeeze ratio in which each parameter has “seen” over 200 different tokens of information during training.[1]
The smaller Llama 3 8B model demonstrates even more extreme density, compressing 15 trillion tokens into 8 billion parameters – a ratio of nearly 1,875 tokens per parameter.[1] This extreme over-training paradoxically enables superior reasoning capabilities, as the higher density of learned experience per parameter allows the model to extract more nuanced patterns from its training data.
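The squeeze ratios quoted above are straightforward to reproduce. A small sketch, using the token and parameter counts given in the text and assuming 16-bit (two-byte) weights:

```python
TRAINING_TOKENS = 15e12        # ~15 trillion tokens seen during training
BYTES_PER_PARAM = 2            # 16-bit (bf16/fp16) weights


def squeeze_ratio(params: float, tokens: float = TRAINING_TOKENS) -> float:
    """Tokens of training data 'seen' per parameter."""
    return tokens / params


def weight_size_gb(params: float) -> float:
    """Approximate on-disk weight size in gigabytes."""
    return params * BYTES_PER_PARAM / 1e9


print(squeeze_ratio(70e9))     # ~214 tokens per parameter (Llama 3 70B)
print(squeeze_ratio(8e9))      # 1875 tokens per parameter (Llama 3 8B)
print(weight_size_gb(70e9))    # ~140 GB
```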
Semantic Density and Output Reliability
Beyond parameter efficiency, model density extends to the quality and consistency of outputs. Semantic density measures the confidence level of an LLM’s response by analysing how probable and semantically consistent the generated answer is.[2] This metric evaluates how well each answer aligns with alternative responses and the query’s overall context, functioning as a post-processing step that requires no retraining or fine-tuning.[2]
High semantic density indicates strong understanding of a topic and internal consistency, resulting in more reliable outputs.[2] This proves particularly valuable given that LLMs lack built-in confidence measures and can produce outputs that sound authoritative even when incorrect or misleading.[5] By generating multiple responses and computing confidence scores between 0 and 1, semantic density identifies responses located in denser regions of the output semantic space, which are therefore more trustworthy.[5]
Intelligence Density in Practical Application
Beyond parameter ratios, practitioners increasingly focus on intelligence density as the amount of useful intelligence produced per unit of time or computational resource.[4] This reframing acknowledges that once models achieve sufficient peak intelligence for their intended tasks, the primary constraint shifts from maximum capability to the density of intelligence they can produce.[4] In customer support and similar domains, this means that optimising the quantity of intelligence produced per second becomes more valuable than pursuing ever-higher peak performance.[4]
This principle reveals that high-enough peak intelligence is necessary but not sufficient; once achieved, value creation moves towards latency and density optimisation, where significant opportunities for differentiation remain under-explored and are cheaper to capture.[4]
The Exponential Progress Trend
Research indicates that the best-performing models at each time point show rising capability density, with newer models achieving given performance levels with fewer parameters than older models.[3] This trend appears approximately exponential over time, suggesting that progress in large language models is fundamentally about improving efficiency rather than simply scaling up.[3] This observation underscores that tracking parameter efficiency is essential for understanding future directions in natural language processing and machine learning.
Related Theorist: Ilya Sutskever and Scaling Laws
The theoretical foundations of model density connect deeply to the work of Ilya Sutskever, co-founder and former Chief Scientist of OpenAI and a pioneering researcher in understanding how neural networks scale. Sutskever’s research on scaling laws, particularly work demonstrating predictable relationships between model size, data size and performance, provided the mathematical framework upon which modern density metrics rest.
Born in 1986 in the Soviet Union, Sutskever emigrated as a child, eventually settling in Canada, and developed an early passion for artificial intelligence. He completed his PhD at the University of Toronto under Geoffrey Hinton, one of the founding figures of deep learning, where he focused on understanding the principles governing neural network training and optimisation.
Sutskever’s seminal work on scaling laws, conducted whilst at OpenAI alongside researchers including Jared Kaplan, revealed that model performance follows predictable power-law relationships with respect to compute, data and model size.[3] These discoveries fundamentally changed how the field approaches model development. Rather than viewing larger models as inherently better, this work demonstrated that the efficiency with which a model uses its parameters matters profoundly.
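The power-law form L(N) = a · N^(−α) described above can be recovered from observed (size, loss) pairs by ordinary least squares in log-log space. A self-contained sketch; the constants in the test data are invented for illustration, not taken from the cited research:

```python
import math


def fit_power_law(sizes: list[float], losses: list[float]) -> tuple[float, float]:
    """Fit L(N) = a * N**(-alpha) by linear regression on log-log data.
    Returns (a, alpha)."""
    xs = [math.log(n) for n in sizes]
    ys = [math.log(l) for l in losses]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    alpha = -slope                       # loss falls as size grows
    a = math.exp(mean_y + alpha * mean_x)
    return a, alpha
```

Taking logs turns the power law into a straight line, ln L = ln a − α · ln N, so the fitted slope gives −α and the intercept gives ln a.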
His research established that progress in AI is not merely about building bigger models, but about understanding and optimising the relationship between parameters and capability-the very essence of model density. Sutskever’s theoretical contributions directly enabled the concept of capability density, as researchers could now quantify how much “effective” capacity a model possessed relative to its actual parameter count. His work demonstrated that architectural innovations, superior training algorithms, and higher-quality data could yield models that achieve better performance with fewer parameters, validating the principle that density-not size-drives progress.
Sutskever’s influence extends beyond scaling laws to shaping how the entire field conceptualises model efficiency. His emphasis on understanding the mathematical principles underlying neural network training rather than pursuing brute-force scaling has become increasingly relevant as computational costs and environmental concerns make parameter efficiency paramount. In this sense, model density represents the practical realisation of Sutskever’s theoretical insights: the recognition that intelligent design and efficient parameter utilisation outweigh raw computational scale.
References
1. https://dentro.de/ai/blog/2025/12/20/the-great-squeeze—understanding-llm-information-density/
2. https://www.geekytech.co.uk/semantic-density-and-its-impact-on-llm-ranking/
3. https://research.aimultiple.com/llm-scaling-laws/
4. https://fin.ai/research/we-dont-need-higher-peak-intelligence-only-more-intelligence-density/
5. https://www.cognizant.com/us/en/ai-lab/blog/semantic-density-demo
6. https://www.educationdynamics.com/ai-density-in-search-marketing/
7. https://pub.towardsai.net/the-generative-ai-model-map-fff0b6490f77

Services
Global Advisors is different
We help clients to measurably improve strategic decision-making and the results they achieve through defining clearly prioritised choices, reducing uncertainty, winning hearts and minds and partnering to deliver.
Our difference is embodied in our team. Our values define us.
Corporate portfolio strategy
Define optimal business portfolios aligned with investor expectations
BUSINESS UNIT STRATEGY
Define how to win against competitors
Reach full potential
Understand your business’ core, reach full potential and grow into optimal adjacencies
Deal advisory
M&A, due diligence, deal structuring, balance sheet optimisation
Global Advisors Digital Data Analytics
14 years of quantitative and data science experience
An enabler to delivering quantified strategy and accelerated implementation
Digital enablement, acceleration and data science
Leading-edge data science and digital skills
Experts in large data processing, analytics and data visualisation
Developers of digital proof-of-concepts
An accelerator for Global Advisors and our clients
Join Global Advisors
We hire and grow amazing people
Consultants join our firm based on a fit with our values, culture and vision. They believe in and are excited by our differentiated approach. They realise that working on our clients’ most important projects is a privilege. While the problems we solve are strategic to clients, consultants recognise that solutions primarily require hard work – rigorous and thorough analysis, partnering with client team members to overcome political and emotional obstacles, and a large investment in knowledge development and self-growth.
Get In Touch
16th Floor, The Forum, 2 Maude Street, Sandton, Johannesburg, South Africa
+27 11 461 6371
