ARTIFICIAL INTELLIGENCE
An AI-native strategy firm. Global Advisors: a consulting leader in defining quantified strategy, decreasing uncertainty, improving decisions and achieving measurable results.
A Different Kind of Partner in an AI World
AI-native strategy
consulting
Experienced hires
We are hiring experienced top-tier strategy consultants
Quantified Strategy
Decreased uncertainty, improved decisions
Global Advisors is a leader in defining quantified strategies, decreasing uncertainty, improving decisions and achieving measurable results.
We specialise in providing highly analytical, data-driven recommendations in the face of significant uncertainty.
We utilise advanced predictive analytics to build robust strategies and enable our clients to make calculated decisions.
We support implementation of adaptive capability and capacity.
Selected News
Term: Tensor Processing Unit (TPU)
“A Tensor Processing Unit (TPU) is an application-specific integrated circuit (ASIC) custom-designed by Google to accelerate machine learning (ML) and artificial intelligence (AI) workloads, especially those involving neural networks.” – Tensor Processing Unit (TPU)
A Tensor Processing Unit (TPU) is an application-specific integrated circuit (ASIC) custom-designed by Google to accelerate machine learning (ML) and artificial intelligence (AI) workloads, particularly those involving neural networks and matrix multiplication operations.1,2,4,6
Core Architecture and Functionality
TPUs excel at high-throughput, parallel processing of mathematical tasks such as multiply-accumulate (MAC) operations, which form the backbone of neural network training and inference. Each TPU features a Matrix Multiply Unit (MXU)—a systolic array of arithmetic logic units (ALUs), typically configured as a 128×128 or 256×256 grid—that performs thousands of MAC operations per clock cycle using numeric formats such as 8-bit integer, BFloat16, or higher-precision floating point.1,2,5,9 Supporting components include a Vector Processing Unit (VPU) for non-linear activations (e.g., ReLU, sigmoid) and High Bandwidth Memory (HBM) to minimise data bottlenecks by enabling rapid data retrieval and storage.2,5
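The MAC pattern described above can be sketched in plain Python. This is an illustrative simplification, not Google's implementation: each output cell of a matrix product is one running accumulator fed by a stream of multiply-accumulate steps, which a real MXU performs across its whole systolic grid simultaneously rather than sequentially.

```python
def matmul_mac(a, b):
    """Multiply matrices a (m x k) and b (k x n) via explicit MAC steps.

    A hardware systolic array streams operands through a grid of ALUs so
    that every accumulator below is updated in parallel; here the same
    arithmetic is shown sequentially for clarity.
    """
    m, k, n = len(a), len(b), len(b[0])
    out = [[0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            acc = 0  # one accumulator per output cell
            for p in range(k):
                acc += a[i][p] * b[p][j]  # one multiply-accumulate (MAC)
            out[i][j] = acc
    return out

# A 2x2 product needs 8 MACs; a 128x128 MXU performs on the order of
# 16,000 such operations per clock cycle.
print(matmul_mac([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

The inner loop is the entire workload: neural-network training and inference reduce almost wholly to repetitions of this accumulate step, which is why dedicating silicon to it pays off.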
Unlike general-purpose CPUs or even GPUs, TPUs are purpose-built for ML models relying on matrix processing, large batch sizes, and extended training periods (e.g., weeks for convolutional neural networks), offering superior efficiency in power consumption and speed for tasks like image recognition, natural language processing, and generative AI.1,3,6 They integrate seamlessly with frameworks such as TensorFlow, JAX, and PyTorch, processing input data as vectors in parallel before outputting results to ML models.1,4
Key Applications and Deployment
- Cloud Computing: TPUs power Google Cloud Platform (GCP) services for AI workloads, including chatbots, recommendation engines, speech synthesis, computer vision, and products like Google Search, Maps, Photos, and Gemini.1,2,3
- Edge Computing: Suitable for real-time ML at data sources, such as IoT in factories or autonomous vehicles, where high-throughput matrix operations are needed.1
TPUs support both training (e.g., model development) and inference (e.g., predictions on new data), with pods scaling to thousands of chips for massive workloads.6,7
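Pod-scale execution can be pictured with a toy data-parallel sketch. Everything here is hypothetical: the `predict` stand-in and the chip count are placeholders, not a Google API. The input batch is partitioned across chips, each chip runs the same model on its shard, and the outputs are gathered.

```python
def predict(example):
    # Stand-in for a trained model's forward pass (hypothetical).
    return sum(example)

def pod_inference(batch, num_chips=4):
    """Split a batch across num_chips simulated chips and gather outputs."""
    shard_size = -(-len(batch) // num_chips)  # ceiling division
    shards = [batch[i:i + shard_size]
              for i in range(0, len(batch), shard_size)]
    results = []
    for shard in shards:  # on a real pod, shards run in parallel
        results.extend(predict(x) for x in shard)
    return results

print(pod_inference([[1, 2], [3, 4], [5, 6], [7, 8]], num_chips=2))
# [3, 7, 11, 15]
```

The same split-compute-gather shape underlies training at pod scale, with the addition of gradient exchange over the chips' proprietary interconnect.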
Development History
Google developed TPUs internally from 2015 for TensorFlow-based neural networks, deploying them in data centres before releasing versions for third-party use via GCP in 2018.1,4 Evolution includes shifts in array sizes (e.g., v1: 256×256 on 8-bit integers; later versions: 128×128 on BFloat16; v6: back to 256×256) and proprietary interconnects for enhanced scalability.5,6
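The BFloat16 format mentioned above is simply the top 16 bits of an IEEE-754 float32 (1 sign bit, 8 exponent bits, 7 mantissa bits), so it keeps float32's dynamic range at half the storage cost. A minimal sketch of the conversion, by truncation (real hardware typically rounds to nearest even):

```python
import struct

def to_bfloat16(x):
    """Reduce a float to bfloat16 precision and return it as a float.

    Packs x as an IEEE-754 float32, zeroes the low 16 bits (truncation;
    production hardware usually rounds to nearest even), and unpacks
    the result.
    """
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    bits &= 0xFFFF0000  # keep sign, 8 exponent bits, top 7 mantissa bits
    return struct.unpack('>f', struct.pack('>I', bits))[0]

# Only about 3 decimal digits of precision survive, which ML training
# tolerates well.
print(to_bfloat16(3.141592653589793))  # 3.140625
```

Because the exponent field is unchanged, values never overflow or underflow relative to float32 — the trade is purely mantissa precision for bandwidth and memory.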
Best Related Strategy Theorist: Norman Foster Ramsey
The theorist most pertinently linked to TPU development here is Norman Foster Ramsey (1915–2011), a Nobel Prize-winning physicist whose work on the coherent manipulation of quantum states offers a conceptual parallel to the parallel-processing paradigms underpinning TPUs. Ramsey's method of separated oscillatory fields, a technique for precisely controlling atomic transitions using oscillatory fields separated in space and time, finds an echo in the staged, precisely timed computation of systolic arrays, which TPUs exemplify through their MXU grids for simultaneous MAC operations.5 This parallelism optimises energy efficiency and throughput, mirroring Ramsey's emphasis on minimising decoherence (loss of quantum information) in high-dimensional systems.
Biography and Relationship to the Term: Born in Washington, D.C., Ramsey earned his PhD from Columbia University in 1940 under I.I. Rabi, focusing on molecular beams and magnetic resonance. During World War II he contributed to radar development at MIT's Radiation Laboratory and to the atomic bomb programme. Post-war, as a Harvard professor (1947–1986), he pioneered the Ramsey method of separated oscillatory fields, earning the 1989 Nobel Prize in Physics for work that underpins atomic clocks and, later, quantum-computing techniques. The relationship to TPUs is conceptual rather than causal: the weight-stationary systolic arrays in TPU MXUs minimise data movement in much the same spirit as Ramsey's methods preserve coherence, and his quantum-state engineering anticipates the parallel, precisely timed operations that characterise tensor hardware. He lived to 96, authoring over 250 papers and mentoring generations of physicists.1,5
References
1. https://www.techtarget.com/whatis/definition/tensor-processing-unit-TPU
2. https://builtin.com/articles/tensor-processing-unit-tpu
3. https://www.iterate.ai/ai-glossary/what-is-tpu-tensor-processing-unit
4. https://en.wikipedia.org/wiki/Tensor_Processing_Unit
5. https://blog.bytebytego.com/p/how-googles-tensor-processing-unit
6. https://cloud.google.com/tpu
7. https://docs.cloud.google.com/tpu/docs/intro-to-tpu
8. https://www.youtube.com/watch?v=GKQz4-esU5M
9. https://lightning.ai/docs/pytorch/1.6.2/accelerators/tpu.html

Services
Global Advisors is different
We help clients to measurably improve strategic decision-making and the results they achieve through defining clearly prioritised choices, reducing uncertainty, winning hearts and minds and partnering to deliver.
Our difference is embodied in our team. Our values define us.
Corporate portfolio strategy
Define optimal business portfolios aligned with investor expectations
BUSINESS UNIT STRATEGY
Define how to win against competitors
Reach full potential
Understand your business’ core, reach full potential and grow into optimal adjacencies
Deal advisory
M&A, due diligence, deal structuring, balance sheet optimisation
Global Advisors Digital Data Analytics
14 years of quantitative and data science experience
An enabler to delivering quantified strategy and accelerated implementation
Digital enablement, acceleration and data science
Leading-edge data science and digital skills
Experts in large data processing, analytics and data visualisation
Developers of digital proof-of-concepts
An accelerator for Global Advisors and our clients
Join Global Advisors
We hire and grow amazing people
Consultants join our firm based on a fit with our values, culture and vision. They believe in and are excited by our differentiated approach. They realise that working on our clients’ most important projects is a privilege. While the problems we solve are strategic to clients, consultants recognise that solutions primarily require hard work – rigorous and thorough analysis, partnering with client team members to overcome political and emotional obstacles, and a large investment in knowledge development and self-growth.
Get In Touch
16th Floor, The Forum, 2 Maude Street, Sandton, Johannesburg, South Africa
+27 11 461 6371
