
Events | Global Advisors updates

Sapiens, Microsoft & COVER Innovators Network – AI as a business model and relationship enabler, not just an operational tool

Sapiens, Microsoft & COVER Innovators Network

Event: Setting the strategic agenda for insurance technology in 2026

Transcript

 

Redefining value by redesigning the business ecosystem.

A closing keynote for South African insurance CxOs on what AI means for the business of insurance itself — not just the technology.

Event date: 11th March 2026

Marc Wilson

Managing Partner, Global Advisors · Johannesburg

Opening video

Ilya Sutskever

Now, AI is a great thing because AI will solve all the problems that we have today. It will solve employment, it will solve disease, it will solve poverty, but it will also create new problems. The problem of fake news is going to be a million times worse. Cyber attacks will become much more extreme. We will have totally automated AI weapons. I think AI has the potential to create infinitely stable dictatorships.

Co-founder and former Chief Scientist of OpenAI, widely regarded as one of the architects of the modern deep learning era. Departed OpenAI in 2024 to found Safe Superintelligence Inc. (SSI), focused exclusively on building safe superintelligent AI. The quote shown is from The Guardian, 2 November 2023.

Marc Wilson

Setting the context

Right, so that was Ilya Sutskever, who is one of the brains behind AI development. [Dr Imtiaz Kader, present in the audience, was acknowledged separately at this point as a fellow AI thinker known to the group.] I’m sure you’ll recognise him.

I think that when you listen to that, it can seem very out there.

One of those things, which is hopelessly clichéd, and hopefully the only cliché that I’ll say today, is that technology typically moves slower than you expect in the short term and much faster than you expect in the long term.

If you look at that and then look back, I think it is instructive to consider the changes we have already seen. I have been around long enough to have lived through the dot-com revolution, and I was head of Gemini Consulting’s e-business unit at the time. Those of you who lived through that will recognise a lot of the hype we might see today, and you might even look at this and say, you know, this sounds like hype.

But if you look back and say it has been a relatively short period of 20 years, what has happened to entire industries like the media industry? It has been completely revolutionised. We do not have newspapers as we used to have them. TV has changed. Media has fundamentally changed.

It is interesting when I compare my interaction with the heads of life insurers and others in the early 2000s, and some of the things that they feared were going to happen, and how those are coming up again in the context of AI. Robo-advice is not new, but it is something that we were talking about in the 2000s. But with the context of AI, it changes completely.

That experience is important, and it is really important, I think, to draw on what we have experienced and recognise that in what we see going forward. It is not all new. Revolutions follow a pattern.

Marc Wilson

Who I Am and the AI-Native Journey

I have lived the journey from viewing AI as just a stochastic parrot that tells you back the information that it has been given, to seeing the application of AI about two years ago in workflows, and what that inference capability was capable of when it sat within a deterministic framework. That really opened my eyes as to what was coming. Things that had been impossible were now easy. That is when I started building our business as an AI-native business.

And if there is one business that is changing probably faster than the insurance industry, it is consulting. So what we did over there is that we had to start with vision and belief. Vision and belief that this was going to be a fundamental change. Some of our much bigger competitors, with 40 000 or even 200 000 employees, have got a far harder job on their plates: to convince people and take them with them. But that has to be the starting point, and that is where we started.

Then came experimentation and learning. We are on our third generation of GPUs in our business, on-premise, and that is within a two-year period, putting in our own sovereign AI capability. We deal with regulated data. We are not allowed to put a lot of that in the cloud. We sign agreements with our banking clients that we will not process that outside our premises.

 

Then there is physical AI infrastructure, frontier API connectivity. We have over 600 models available to our staff. We have 150 000 RAG documents already and 10 billion tokens used. So we are some way down the track. We are moving as fast as we can. It feels breakneck, and yet it is not fast enough when we look at some of the events happening globally.

Then that goes through to data, knowledge, ontology, and a lot of the things that you will recognise.

 

 

600+

AI models available to staff

150k+

RAG-indexed documents

10bn+

Tokens used

100%

Employee AI usage, 5 days per week

3

GPU generations deployed on-premise in 2 years

Marc Wilson

Disruptive Technology Demands a Different Strategic Response

With all of that, my pitch to you today is this: if you are a leader in your organisation, a CEO or a CIO or an AI head leading this journey on behalf of your organisation, then looking back in order to look forward, I think there are things we can learn.

The first is this: conventionally, the job of technology strategy is to take business strategy and translate it into technology strategy. Alignment has been the number one issue in IT strategy for as long as I have been involved in IT strategy, which is a long time now. Alignment says technology needs to support the direction of the business. CIOs have earned their seat at the table on that basis.

But with disruptive technology, opportunities arise through the technology itself. That demands an entirely different role from a CIO and a CEO. It demands that you engage beyond technology, and across every facet of the business, in order to take the business with you into those technology-enabled opportunities.

If you are the person with the lucky or unlucky hat of taking AI forward in your business, your CEO has a million fires burning. How are you making this available to them in a way that they hear the most important things and steer the business on an as-fast-as-needed basis?

And if you are that CEO, you are juggling all of that. I deal with executive boards at the top level in South Africa and globally, and typically it is a crowded agenda. People have too much on their plates.

When you talk about something like this, and it potentially changes everything, how do you actually create the space to have those conversations and take the organisation with you?

Marc Wilson

What This Means for the Business of Insurance

You have heard a bit from Microsoft about what is coming and the cutting-edge work there. You have heard from Sapiens about how core systems, the so-called systems of record, are changing. From Absa, you have heard a bit from the front lines about what the experience is over there. My job is to try and figure out what this means for the business of insurance itself. I think there are three big shifts.

Value Creation

The first is value creation: claims and service economics, underwriting or risk selection, intermediary productivity, and client decision support. In that space, AI potentially changes all of them. The difficulty of having a niche underwriting area, when the information is now broadly available to everybody, changes when people have access to the same technology as you and can generate incredible insight.

But it also flips around and equips your customers. Your commercial customer now has expert legal advice from AI sitting in their version of Cowork, evaluating the policy that is being put in front of them.

One of the things that came out of Harvard a few months ago was this: do not think that technology is an advantage when everybody will ultimately have it.

That means we have to keep up as fast as possible before we get left behind and disrupted.

Reference

Harvard Business Review — On Technology as Competitive Advantage

Research from Harvard published in the months prior to this keynote noted that technology alone cannot be a durable competitive advantage once it becomes universally accessible. The strategic imperative shifts to how technology is applied, and how fast.

Distribution and Client Ownership

Then there is distribution and client ownership. Look at AI-augmented customers, particularly on the retail side, and at what happened with OpenClaw, which was released as a consumer-oriented product. Jensen Huang said it was the most important software technology ever developed.

OpenClaw is an agent. You install it on your PC or your Mac, typically, and by default it has full access to everything, all of your Google data and so on. That is changing a bit. They are dialling it back and adding more security.

Just imagine that you now have an agent telling your customer, and this can apply on the retail or commercial side, “You are paying too much on your premium. I have done a comparison. Would you like me to negotiate some rates?” On the consumer side, that is very believable. On the commercial side, maybe it is more of a prompt.

I deal a lot in banking. It is very real there where you have lazy cash balances that fund pretty much the entire banking system. Lazy cash balances are under threat from agents that say, “Sweep the money aside overnight and I will do it for you.” That type of behaviour will come to insurance too.

I think the other part is that this is a compliance-driven industry, and the challenge is going to be for compliance to keep up. I am old enough to remember when compliance was forced onto the intermediary channel, and as a result, in the UK and South Africa, it forced many into broker networks because of the costs. I think we will see some of that again, where individual brokers cannot keep up with the technology. They will need to leverage technology or platforms provided by the insurer. That is an opportunity to increase trust in the relationship and increase the amount of information sharing, with all the associated challenges of who owns the information.

The danger is, if you get this wrong, you just become a balance sheet. That faces banking, and that faces insurance. You do not want to be in a position where you are just balance-sheet bidding for risk.

Trust and the Operating Model

Then there is trust in the operating model. I think the Microsoft platform talks about embedding trust, authentication and visibility throughout the stack. That is rare. Being able to have auditable information backwards — to say what information was used — is a POPIA requirement. What was the system trained on? What was the model trained on? How was it used? How did your broker or your intermediary make a decision, and what information was made available to them? That becomes incredibly challenging when you have multiple levels of baked-in information.

So there is a huge challenge in terms of security, access to information and auditability. But it is also an opportunity. If you can provide compliance systems, people will gravitate towards you in order to make sure that they are guaranteed compliance. This is not a technology issue alone. It is about where the intelligence sits and who owns the customer. That is both the threat and the opportunity.
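That auditability requirement can be made concrete. The sketch below is purely illustrative, not any regulator's or vendor's schema: the record fields and names are assumptions about what a POPIA-style audit trail for an AI-assisted decision might need to retain (which model, which source documents, when, with what outcome).

```python
import hashlib
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

# Illustrative sketch of a per-decision inference audit record.
# Field names are assumptions for the example, not a standard.

@dataclass
class InferenceAuditRecord:
    decision_id: str
    model_id: str             # which model/version produced the output
    source_documents: list    # what information was used in the decision
    output_summary: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def fingerprint(self) -> str:
        """Tamper-evident hash of the full record for later audit."""
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

record = InferenceAuditRecord(
    decision_id="DEC-001",
    model_id="model-x-2026-01",
    source_documents=["policy_wording_v3.pdf", "claims_history.csv"],
    output_summary="Declined: flood exposure outside appetite",
)
print(record.fingerprint())
```

The point of the fingerprint is that the record can be stored alongside the decision and later shown to be unaltered; answering "what was this decision based on?" then becomes a lookup rather than a reconstruction.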

Marc Wilson

Harnesses, Scaffolds and the New Moat

One of the things which I think is very powerful is the concept of harnesses in AI. A harness is what has taken AI from the raw model behind ChatGPT 3.5 to what you are seeing today, where you have the scare trade disrupting entire industries.

When Anthropic released their legal skill, I think it was two weeks ago, the NASDAQ and the S&P responded instantly. Billions were wiped out. We have not seen that happen to insurers yet, but I would argue it is pretty close.

Reference

Anthropic — Legal Skill Release (early 2026)

Anthropic released a specialised legal reasoning capability for its Claude models, triggering a market reaction in legal and professional services stocks. The event was widely cited as an example of the “scare trade” — where an AI capability announcement causes an immediate repricing of exposed sectors without a single product being sold.

A harness is the skills and knowledge put around a model, which give you more reproducible outputs and a more deterministic outcome from the inference. That has been the secret to the jump in capability that we have seen.
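In code, the idea is deterministic scaffolding around a non-deterministic call. This is a minimal sketch, not any vendor's implementation: `call_model` is a hypothetical stand-in for a real inference API, and the validation fields are invented for the example.

```python
import json

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for a real inference API; returns JSON text."""
    return json.dumps({"risk_band": "B", "premium_factor": 1.15})

# The harness enforces structure the raw model cannot guarantee.
REQUIRED_FIELDS = {"risk_band", "premium_factor"}

def harnessed_inference(prompt: str, max_retries: int = 3) -> dict:
    """Wrap raw inference in validation and retries so the output is
    reproducible enough to sit inside a deterministic workflow."""
    for _ in range(max_retries):
        raw = call_model(prompt)
        try:
            result = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed output: retry rather than propagate
        if REQUIRED_FIELDS.issubset(result):
            return result  # structurally valid: safe to hand downstream
    raise ValueError("model output failed validation after retries")

print(harnessed_inference("Assess this commercial property risk."))
```

In a real harness the scaffolding extends well beyond validation, to memory, tool access and retrieved documents, but the principle is the same: the moat sits in the wrapper, not the model.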

We started with the model and have come a long way, and potentially we are facing diminishing returns on the model itself. But what we are now seeing is the impact of the scaffolding put around that.

That includes memory, the tools you make available to the AI, the documents, and the sort of thing Microsoft talked about when it discussed turning documents into an agent.

Then there is the harness. The harness took one of the models from position number 30 in capability to position number 5.

So the challenge, if you take this harness concept into the insurance industry and into your organisation, is this: what is your moat? What is your harness?

#30 to #5

Model benchmark ranking improvement from harness alone, no model change

10mins

Goldman Sachs deal prep time vs. 2 weeks previously, using AI harness

<1%

Training cost to achieve 5–20× compute-equivalent capability gain via scaffold

The next thing Anthropic has been working on has been the investment banking industry. They have been working with Goldman Sachs, and they hired 40 investment bankers into their business. They have now released a product into the investment banking industry that has senior deal-making advice embedded in it. And what [David Solomon, CEO of Goldman Sachs,] has said is that it has taken what was typically a two-week deal-prep exercise, valuation and so on, down to 10 minutes. That is incredible IP now being invested in the harness that sits around the actual model capability.

What is your harness? What is your IP?

Satya Nadella has said that people are being fast and loose about putting a lot of their IP into models in the open domain. Sovereign AI is absolutely critical. If you think of the insurance context, with all the information regulation and the possibility of developing competitive advantage around harnesses, it is absolutely crucial that you start thinking about how you are embedding that harness as your moat around the raw power of inference and AI underneath it.

You have to codify your institutional knowledge.

Who is this?

Satya Nadella — CEO, Microsoft

Chief Executive Officer of Microsoft since 2014. Under his leadership Microsoft became the world’s most valuable company and made a multi-billion dollar investment in OpenAI. His “sovereign AI” remarks reflect a consistent public position that organisations must retain custody of their proprietary data and AI infrastructure rather than ceding it to third-party platforms.

This is not something where you just drop your knowledge into a model, train it up, and it becomes an alternative. You have to get the information and the data into shape before it is usable in the AI sense.

I think that is something most people have been slow at, and it takes a long time.

If you look at one of the most bizarrely valued companies of all time, Palantir, that is their speciality. Palantir takes data and creates intelligence. It does that by making the data compatible with AI.

Is your data compatible with AI? Is your IP and information compatible with AI?

Reference

Palantir Technologies — Data Intelligence Model

Palantir (NYSE: PLTR) specialises in making messy, siloed enterprise data compatible with AI inference. Their Ontology layer — a structured mapping of business entities, data relationships and processes — is directly analogous to what Marc describes as making your IP “AI-compatible.” Widely cited as one of the most unusually valued companies in tech (trading at significant revenue multiples), Palantir’s model is nonetheless increasingly relevant as organisations discover their raw data is not usable by AI without significant preparation.

Prioritise model agnosticism, because models change radically. On my phone right now, I have the Qwen3.5 model, with equivalent capability to GPT-4o from just a year and a half ago. It is a [4]-billion-parameter model. It is available as open source, for free, and it is on my phone. It is not in a data centre somewhere. How are you going to keep up with the jumps in capability that underlie the intelligence being applied to your systems?
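Model agnosticism in practice means routing every inference call through one internal interface so the underlying model can be swapped without touching business logic. The sketch below is illustrative only; the function and route names are placeholders, not real providers.

```python
from typing import Callable

# Placeholder backends: in reality these would wrap a local open-source
# model and a frontier API respectively.
def local_small_model(prompt: str) -> str:
    return f"[local] {prompt}"

def frontier_api_model(prompt: str) -> str:
    return f"[frontier] {prompt}"

# One registry is the only place that knows which models exist.
MODEL_REGISTRY: dict[str, Callable[[str], str]] = {
    "local": local_small_model,
    "frontier": frontier_api_model,
}

def infer(prompt: str, route: str = "local") -> str:
    """Business code calls infer(); the registry decides which model runs."""
    return MODEL_REGISTRY[route](prompt)

print(infer("Summarise this policy schedule", route="frontier"))
```

When the next capability jump arrives, you update the registry entry, not every workflow that depends on it.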

Then elevate human capital, because human capital sits above this.

Hopefully we are moving from masses of clerical work. I can remember in the early 2000s walking the halls and seeing masses of paper being scanned into OCR systems, and the people bustling around those processes.

I think the world is changing. We are moving on from masses of task-oriented work to much more intelligent work. That requires levelling up your human capital.

Coding has been the catalyst. These models are now developing themselves. AI is developing itself.

Those of you familiar with the term “the singularity” will know that it is the point where self-improvement starts to take place. If you trace the numbers, we are seeing an exponential effect that has accelerated since February 2026. Capability has taken off.

Marc Wilson

Executives Must Consider the Second-Order Effects

The other part of this is that your job, if you are the influencer around AI or the CEO, is to consider the second-order effects. It is not just about how to apply AI in your organisation. It is also about what AI is doing to the market around the organisation, your customers and the society around that.

Something simple at the basic level is AI legibility. Is your website, are your policies, are your contracts available in AI-legible form? Because potentially they are going to be consumed through APIs. If they are consumed by those APIs and then interpreted by AI, you want your information to be compatible with that process.

People are already saying there is the death of the web — the “dead internet” — where websites will die and now just be text for AI to consume. People will not be browsing the website to go through all the policy details. They will be directing an agent there to say, “Pull the information and give me the latest comparative data on that versus competitors.”

But that goes all the way through the chain. Is that AI legibility baked into your processes, how you deal with your clients, your intermediaries and your commercial customers?
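One emerging convention for AI legibility is the llms.txt file: a plain-markdown index served at the root of a site, listing key documents in a form agents can consume directly. The fragment below is a hypothetical example for an insurer; the company, URLs and document names are invented.

```text
# Example Insurance Co

> Short-term and life insurer. This file lists our key documents in a
> form AI agents can consume directly (llms.txt convention).

## Policies

- [Home contents policy wording](https://example.com/policies/home-contents.md): full terms, exclusions and excesses
- [Commercial property policy wording](https://example.com/policies/commercial-property.md): cover schedule and endorsements

## Claims

- [Claims process](https://example.com/claims/process.md): steps, documents required, turnaround times
```

An agent comparing policies reads this file instead of scraping a browsing-oriented website, which is exactly the shift from human-legible to AI-legible distribution.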

I think there are some other things that have not been mentioned. There is also the analogue backlash. Already we are seeing, at a consumer and societal level, people checking out and saying, “I want nothing to do with this. I do not want a world with AI.” That is a profound effect because it can make your markets heterogeneous. You have difficult, expensive-to-serve customers, and you have people who are completely AI-native. You will also have these people within your organisations who are saying, “I’m not interested.” I am sure you all recognise that.

By far and away, I think the most difficult thing about AI is the speed of disruption. The speed at which this is happening is more important than the disruption itself. Very few organisations are equipped to cope with that speed.

Do not just be a technology influencer. Your role as an AI influencer is to look at the organisational context more broadly and start looking at the AI impacts on broader society, the people in your organisation, and the market opportunities.

Do not just think of this as efficiency. The market is really struggling with this right now. One of the OpenAI executives and I were having a conversation about two weeks ago, and they were saying that the predominant discussion with customers is how to take out costs. If you look at the value equation, the value from reducing costs and improving margins is dwarfed by that from growth. What are you doing with your AI to fundamentally change the growth trajectory of your business?

BRAND DEVALUATION

AI agents optimise on price-spec data; brand premiums compress in commodity categories.

FRAUD & DEEPFAKES

Deloitte: $40B AI-enabled fraud by 2027; liar’s dividend erodes institutional trust.

DISINTERMEDIATION

$3–5T in AI-mediated commerce by 2030 — brokers, agents & advisors face existential risk.

ANALOGUE BACKLASH

Vinyl at $1,9B; craft kit sales up 86%; human-made certifications emerging fast.

HEALTH & LONGEVITY

AI longevity gains stress pension systems — US Social Security gap: $26.1 trillion.

Early AI detection paradox: more diagnoses → more treatment → higher total spend.

DISRUPTION SPEED

This is by far the fastest revolution in history.

AI LEGIBILITY

Restructure data assets for machine consumption — MCP endpoints, llms.txt, structured APIs. Firms invisible to AI agents become commercially invisible.

LABOUR DISPLACEMENT

Labour share hit 53,8% in Q3 2025 — lowest since records began in 1947.

Professor Andrew Ng has said that when you go from today’s home loan approval process to a 10-minute approval, you do not just have a more efficient home loan. You have a different product. So how are you thinking about redesigning your product so that it is a different product and results in better business and more business?

Who is this?

Professor Andrew Ng — AI Researcher & Educator

Co-founder of Google Brain, former Chief Scientist at Baidu, founder of DeepLearning.AI and Coursera’s AI programmes. One of the most cited AI educators globally. His point about the home loan — that a 10-minute approval isn’t a faster version of the old product, it is a fundamentally different product — has become a widely quoted frame for thinking about AI-driven product redesign rather than mere process efficiency.

Labour Market & AI Disruption video

Kristalina Georgieva — Managing Director, IMF; Dario Amodei — CEO, Anthropic; Mustafa Suleyman — CEO, Microsoft AI

Now, my main message here is the following. This is a tsunami hitting the labour market. And even in the best-prepared countries, I do not think we are prepared enough.

Kristalina Georgieva

Managing Director, IMF

Kristalina Georgieva — Managing Director of the International Monetary Fund (IMF) since 2019. The IMF’s research estimates that AI could affect up to 60% of jobs in advanced economies and 40% in developing markets.

It is surprising to me that we are, in my view, so close to these models reaching the level of human intelligence, and yet there does not seem to be a wider recognition in society of what is about to happen. It is as if this tsunami is coming at us, and it is so close, we can see it on the horizon, and yet people are coming up with these explanations: “Oh, it is not actually a tsunami. That is just a trick of the light.” I think along with that, there has not been a public awareness of the risks.

Dario Amodei

CEO, Anthropic

Dario Amodei — CEO and co-founder of Anthropic, one of the world’s leading AI safety companies and developer of the Claude series of models.

I think that we are going to have human-level performance on most, if not all, professional tasks. So white-collar work, where you are sitting down at a computer, either being a lawyer or an accountant or a project manager or a marketing person, most of those tasks will be fully automated by an AI within the next 12 to 18 months. We can see this in software engineering. Many software engineers report that they are now using AI-assisted coding for the vast majority of their code production.

Mustafa Suleyman

CEO, Microsoft AI

Mustafa Suleyman — CEO of Microsoft AI, co-founder of DeepMind (acquired by Google). His 12–18 month timeline for full automation of white-collar tasks is among the most aggressive mainstream forecasts from a sitting technology executive.

Marc Wilson

What Do You Do About This?

So, who is ready for full automation of every white-collar task in 12 to 18 months? I am not sure whether you believe that. I am not sure whether you can believe that. But it is certainly a provocative view, and it is from the head of AI at Microsoft. What do you do about this? Start by thinking differently.

 

I think one of the experiments I am seeing a lot right now involves two popular phrases. One is steel-manning. Steel-manning is taking an argument you disagree with, making it as strong as possible, and living in that world for a moment. That is very powerful when you are faced with discontinuous change because it says, “Let me suspend disbelief. Let me live there for a while.”

Then there are thought experiments.

 

Who saw the [Citrini Research] paper that went out two weeks ago and hit the stock market in the US? The Citrini paper was a provocative paper put out by an analyst firm, which looked back from 2028. It took billions of dollars off the US stock market. What it said was this: fundamentally, look at friction. AI will take out friction. Look at intermediation. It will disintermediate.

The other thing is to flip the script. Cost efficiency? Fine. But where are the opportunities to grow and attack? With everything, ask: what is the opposite?

Reference

Citrini Research — “The AI Scare Trade” Paper (early 2026)

A widely circulated research note from Citrini Research that modelled AI’s economic impact by looking backwards from 2028. By identifying sectors with high friction and heavy intermediation, the paper argued these would face disproportionate disruption. DoorDash’s stock fell sharply on publication. The paper is credited with triggering the “AI scare trade” — a new pattern of stock market reactions to AI capability announcements in exposed sectors. Note: the correct name is Citrini Research, not “Centrini” as occasionally referenced in AI discussions.

Marc Wilson

Organise for Impact

I ran a corporate incubator for a client in a previous life. I would argue that this is the time to bring that kind of learning and capability back, where you walk into an organisation, set up the processes, and bring order to the AI pilot chaos. How do you go through a structured process of running AI through your organisation on a prioritised basis, applying it and delivering value?

Most of the leading-edge feedback coming out of the US in particular is that the day of letting a thousand flowers bloom is over. We need value from AI now. The amounts of money being spent are so great that we need to be able to demonstrate value. There could also be M&A requirements, spin-ins and spin-outs. A lot of that comes with a corporate-incubation mindset.

I think the other part is portfolio approaches. VCs think in terms of one in five or one in ten bets paying off. I am not sure what the exact ratio is going to be for AI, but you should be thinking about that too. What are the bets that are speculative, and which are the sure bets that you are absolutely putting yourself behind?
You are going to need a spread of bets because of the uncertainty involved.

Marc Wilson

Retool Your People

Somebody mentioned ways of working. It is one of my favourite sayings in this space. I think you fundamentally have to look at how people are working and how AI changes that. Most people — 60% of people — merely use AI as a search engine. Only 0,3% of the world’s population has tried coding with AI. That is the gulf. Sixteen percent of the world has been exposed to AI, and of that, 75% use only free AI. We are in the early stages.

75%

Of AI users globally use free-tier only

60%

Use AI only as a search engine

0,3%

Of world population has tried AI-assisted coding

16%

Of global population has had any AI exposure

60%

Of jobs in advanced economies estimated to be impacted (IMF)

40%

Of jobs in developing markets estimated to be impacted (IMF)

So when you look at that exponential curve, you have to imagine the tsunami coming behind that, and what IMF Managing Director Kristalina Georgieva was talking about.

[The IMF] estimates that 60% of jobs in advanced economies will be impacted by AI, and 40% in developing markets. Are your people ready? Are your ways of working ready?

Are you launching cultural work inside the organisation on how to get the most out of AI?

Most of what we hear is, “How do I prompt better? How do I prompt better to get what I want back from the chatbot?” That is where most people are who actually have access.

There are also different ways of looking at this.

China and the UAE have gone for a diffusion approach — where you make the technology widely available. In the UAE, the K-12 system, basically Grade 1 to Grade 12, starts at Grade 1 with AI training and exposure. China is also going for the diffusion approach. With OpenClaw, Tencent offered a service where they put their tech staff on the pavement this last weekend. They had queues around the block of people coming from all over to help them configure their laptops, PCs and phones to use AI.

But you also have to come in from the other side. What Accenture, Microsoft and some of the others are saying is that if you do not use AI, and they will track it, you will have diminished promotion prospects. So are you thinking about changing the HR processes in your organisation? Are you thinking about your training processes and your onboarding processes?

Because if you just throw in the technology, you are not going to be successful.

Many of your technology people came through rapid application development and agile. Possibly the time has come where the process you previously used for prototyping can now take you from prototype to launch within a highly accelerated period.

Boris Cherny, the head of development at Anthropic, has said he no longer codes. It is now completely automated. Think about Cowork coming out of Anthropic in 10 days, without human intervention.

Who is this?

Boris Cherny — Director of Engineering, Anthropic

Cherny leads the Claude Code project — Anthropic’s autonomous AI coding agent.

Marc Wilson

Build the Engine

Experiment relentlessly.

We face the real prospect of dying under work slop — work generated by AI. Again, you probably have exposure to AI and use it in a good way. But many of the people in your organisations will use this simply to generate anything and everything. That increases cognitive overload for other people in the organisation who then have to adjudicate it. I do not think, as organisations, we are ready for the avalanche of stuff coming out of AI.

Microsoft showed us how quickly co-workers can generate a memo, an email, and so on. Content comes fast. Look at the tail. The tail can kill you here. Look at the tail in every sense: the tail of your customers, the tail of your organisation. Lifting the tail is absolutely crucial.

I also have an alternative view. If we do see a crash, I think we could see token scarcity, token-factory scarcity, meaning processing power, because the funds will dry up. If you do not have access to AI, and it has been built into your core systems, and they now suddenly cannot get processing power to provide inference because it is sitting on some monthly API, you will not have the capability to run your core business. So sovereign AI and token capability accessibility are absolutely key.

Even if we do not have a crash, if you look at the usage graphs, usage is taking off.

TSMC and ASML are fully booked until the end of 2027. There is no more capacity coming than what is being piped in right now. The machines just cannot build it.

Supply Constraint — AI Infrastructure

TSMC & ASML — Fully Booked to End of 2027

Taiwan Semiconductor Manufacturing Company (TSMC) and Dutch lithography equipment maker ASML — the two most critical chokepoints in AI chip production — both reported order books fully committed through the end of 2027 at the time of this keynote. No material increase in AI compute supply can arrive before then regardless of investment levels. This constrains token capacity globally and supports Marc’s case for organisations securing their own sovereign AI infrastructure.

Marc Wilson

The Emerging Picture of the Perfect Employee for the AI Age

There are only three things I am going to pull out of ten. The first is high agency. When we discovered that phrase, it was the one that resonated. It is the person who will take on a task and say, “Leave it to me, I’ll get it done.” That person, with the addition of AI, becomes a superhero. It is probably the most important competency descriptor in the age of AI.

The next one is learning agility. It is not about people who have just been through university with AI, because guess what? We interview those people, and often the results are disappointing. They are not really being educated in AI. If you take somebody on who is young and expect them to bring AI thinking into the organisation, two things are often true: first, they probably do not have it; and second, they do not know your organisation. So what you need is the high-agency, experienced person, possibly the person who ran your robo-advice pilot 10 or 15 years ago, who now has AI to apply to the problem. That person will be a superhero.

Then resilience. Resilience is a life lesson. It is somebody who can cope with change, and with the fact that everything they did is now being invalidated or written off, and who can then start the next stage of the journey. Being able to move on, not be tied to the past, and take those knocks is tough.

So those three, and then there are a number of others. You might say that when you look at all of these together, that just sounds like any perfect employee, never mind AI. The thing about that is this: AI is merely an amplifier. AI is an amplifier for your people. Success will not be knowledge. It will be character. Do you have the character in your people to actually get the true advantage out of AI?

High agency

Take proactive ownership, self-direct, and execute with a bias for action.

Learning agility

Quickly acquire new knowledge and unlearn outdated expertise.

Resilience

Recover quickly from setbacks, manage stress, and sustain optimism during periods of volatility.

Adaptability and change capacity

Thrive in ambiguous environments and shift approaches as conditions change.

Critical thinking and AI judgment

Rigorously evaluate information and verify AI-generated outputs.

EQ and collaboration

Excel in uniquely human areas that machines cannot replicate.

Metacognition

Plan, monitor, and reflect on one's own thinking processes.

Curiosity and a growth mindset

Hold a fundamental belief that abilities can be developed through dedication.

AI fluency and human-AI collaboration

Possess a foundational understanding of generative AI tools, their capabilities, and their limitations.

Ethical judgment and oversight

Provide the moral compass necessary to supervise autonomous AI decision-making.

Marc Wilson

Five Questions to Finish

So, five questions to finish:

Where is the value created?

Whoever owns the harness owns the value chain.

How should technology reshape distribution?

AI makes the broker relationship more important — but also more demanding. Find the intelligence partners.

Which parts of insurance must stay human-led?

Automate the transactions. Hold the judgement.

Are insurers building technology for efficiency or for clients?

You cannot do both. Ground it in what you are doing for the customer.

Who owns the client?

At the moment — nobody. That is the opportunity.

Marc Wilson

Thank you.

AI is merely an amplifier. Success will not be knowledge — it will be character.

Marc Wilson

Managing Partner, Global Advisors
