TLDR
- Most AI conversations in insurance fixate on speed and cost, but decision integrity is the real differentiator.
- Agentic and embedded AI are reshaping underwriting and claims, especially in auto, but poorly governed systems can silently amplify risk.
- The insurers that win will treat AI as a decision-system redesign problem, not a pure automation play, and keep humans visibly accountable for critical calls.
When most people talk about AI in insurance, they talk about speed and cost. Fewer talk about what I focus on here: decision integrity. Drawing on my work in EY’s Insurance Technology practice, and nearly two decades helping global insurers modernize their underwriting, claims, and risk platforms, I offer executives a message that is simple but uncomfortable: AI will not save a broken decision system.
Insurance is not a product business. It is a regulated decision system under uncertainty. Every policy issued, every claim paid, every fraud alert is a probabilistic call made with incomplete information and real human consequences. AI changes how those calls are made, not whether they are necessary. That distinction matters as insurers rush to embed machine learning into everything from pricing to claims triage.
The industry is entering a second wave of AI adoption. The first wave was about pilots and proofs of concept: isolated models that scored risks, flagged suspicious claims, or summarized documents. The next wave, which is emerging now, is about agentic and embedded AI: systems that do not just recommend but initiate workflows, orchestrate other systems, and operate continuously in the background. When you move from models that advise to agents that act, your risk profile changes overnight.
This shift is especially visible in auto insurance, where electrification, automation, and software-defined vehicles are reshaping exposure faster than traditional actuarial cycles can keep up. Advanced driver-assistance systems are lowering claim frequency but pushing severity higher as repairs increasingly involve cameras, sensors, and high‑voltage components. At the same time, liability is blurring between driver, automaker, and software provider. You cannot manage that complexity with static rules and annual pricing reviews. You need AI that can see change early, but you also need governance that knows when not to move fast.
Beyond the ‘fully automated journey’
That tension between speed and accountability runs through my work. I believe that one of the most dangerous misconceptions in insurtech is the idea of the “fully automated insurance journey.” Attempts to remove humans from underwriting or claims decisions will backfire – not because models are weak, but because institutions will struggle to explain or defend outcomes when customers or regulators challenge them. Regulators don’t oppose AI. They oppose unaccountable automation.
Instead, frame AI as a redesign problem. Who owns a decision when AI is in the loop? Who has the explicit right to slow down an automated process? How are trade‑offs recorded when a model recommends a technically optimal but socially or politically uncomfortable outcome? I believe that truly AI‑ready insurers start with those questions, then decide where to deploy models, agents, and automation.
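One way to make those questions concrete is to encode them in the decision record itself. The sketch below is hypothetical (the `DecisionRecord` structure, field names, and `pause` function are my illustration, not an EY or industry standard): every automated decision carries a named human owner, and only that owner holds the explicit right to slow the process down, with the trade-off recorded when they do.

```python
from dataclasses import dataclass

# Illustrative sketch: each automated decision names an accountable human,
# and only that person may pause the workflow. The recorded rationale is
# what the institution can later show to a customer or regulator.

@dataclass
class DecisionRecord:
    decision_id: str
    model_recommendation: str   # what the AI suggested
    human_owner: str            # the person accountable for the outcome
    paused: bool = False
    rationale: str = ""         # trade-off recorded when a human intervenes

def pause(record: DecisionRecord, who: str, reason: str) -> DecisionRecord:
    """Only the accountable owner has the right to slow the process down."""
    if who != record.human_owner:
        raise PermissionError(f"{who} does not own decision {record.decision_id}")
    record.paused = True
    record.rationale = reason
    return record
```

The design choice here is that ownership is part of the data model, not a process document: if no human owner is attached, the decision cannot be paused, which surfaces the governance gap immediately.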
Decision latency as a hidden risk
I use the term “decision latency” to capture one of the industry’s blind spots. Latency is the time between when relevant information is available and when an insurer actually acts on it. Historically, latency was an operational constraint imposed by files sitting on desks, batch jobs, and manual reviews. In an AI-enabled world, latency becomes a design choice. Insurers can adjust prices, reserve levels, or claims strategies in near real time; the question is when they should. Move too slowly and you miss emerging risks. Move too quickly and you can propagate errors or bias at scale.
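If latency is a design choice, it can be made explicit and checked. The following sketch assumes illustrative values (the decision classes, the 24-hour review window, and all function names are hypothetical): standard decisions act as fast as possible, while critical decisions carry a mandated minimum deliberation window, and an action that fires before its window is flagged.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Decision:
    signal_at: datetime   # when the relevant information became available
    acted_at: datetime    # when the insurer actually acted
    stakes: str           # "standard" or "critical"

def latency(d: Decision) -> timedelta:
    """Decision latency: the gap between information and action."""
    return d.acted_at - d.signal_at

# Minimum deliberation windows per decision class (illustrative values):
# knowing *when not to move fast* is encoded as policy, not left implicit.
MIN_WINDOW = {
    "standard": timedelta(0),         # act as fast as the system allows
    "critical": timedelta(hours=24),  # enforce a human review window
}

def acted_too_fast(d: Decision) -> bool:
    """True if an automated action fired before its mandated window."""
    return latency(d) < MIN_WINDOW[d.stakes]
```

The point of the sketch is that "too fast" becomes auditable: a governance team can query how often automation breached its windows, rather than discovering it after a complaint.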
This is where agentic AI is both an opportunity and a threat. Well‑governed agents can orchestrate underwriting, claims, and servicing flows, reducing cycle times from days to minutes for standard risks. Poorly governed agents can create “chained vulnerabilities,” where a single bad input cascades through pricing, capital models, and portfolio steering. My proposed antidote is an “agent portfolio registry” and clear machine‑level roles: essentially, treating AI agents the way insurers treat human roles, with explicit scopes, privileges, and guardrails.
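A minimal version of such a registry can be sketched in a few lines. Everything here is an assumption for illustration (the class names, the autonomy levels, and the example agent are invented): each agent is registered with an explicit action scope, and authorization is deny-by-default, so an unregistered agent or an out-of-scope action simply cannot proceed.

```python
from dataclasses import dataclass

@dataclass
class AgentRole:
    name: str
    allowed_actions: set   # actions this agent may initiate
    autonomy: str          # "advise", "act_with_review", or "act"

class AgentRegistry:
    """Hypothetical agent portfolio registry: AI agents get explicit
    scopes and privileges, the way insurers define human roles."""

    def __init__(self):
        self._roles = {}

    def register(self, role: AgentRole) -> None:
        self._roles[role.name] = role

    def authorize(self, agent: str, action: str) -> bool:
        """Deny by default: unknown agents and out-of-scope actions fail."""
        role = self._roles.get(agent)
        return role is not None and action in role.allowed_actions

# Example: a triage agent may flag claims and request documents,
# but has no privilege to approve payments.
registry = AgentRegistry()
registry.register(AgentRole(
    "claims-triage", {"flag_claim", "request_docs"}, "act_with_review"))
```

Because every privileged action passes through one choke point, a chained vulnerability stops at the first agent whose scope does not cover the next step in the cascade.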
Competing on institutional intelligence
For insurtech founders and incumbents alike, my message may sound conservative. I argue it is not. It is a call to compete on institutional intelligence, not just algorithmic sophistication. The winners in the next phase of AI in insurance, I believe, will be those who can combine three capabilities: deep, domain‑specific models; architectures that expose uncertainty instead of hiding it; and governance that keeps humans visibly accountable where it matters most.
This isn’t about replacing underwriters or claims handlers. It’s about redesigning the decision system so that when something goes wrong – and in insurance, something always goes wrong – you can show not only what the AI did, but why a human still owns the outcome. For an industry built on trust, that may be the most important insurtech innovation of all.