Most discussions about AI in insurance start with efficiency: faster quotes, automated claims, lower operating costs. After nearly two decades working with insurers and writing tech‑driven insurance, I’ve learned that those metrics, while important, are not enough. Insurance is, at its core, a regulated decision system under uncertainty, and AI is rewiring that system.
Every policy, price, and claim outcome is a probabilistic judgment made with incomplete information. Traditional actuarial models and rules-based engines were built for a world where risk patterns moved relatively slowly. Today, climate volatility, cyber risk, behavioral shifts, and software-defined products are reshaping exposure far faster than annual planning cycles can absorb. In that environment, AI is not optional, but neither is governance.
From pilots to agentic AI
In my work, I see two waves of AI adoption in insurance. The first was about narrow pilots: pricing models, fraud scores, and document classifiers sitting at the edges of legacy systems. The second wave, now emerging, is about embedded and agentic AI: systems that can orchestrate end‑to‑end workflows across underwriting, claims, and servicing.
In auto insurance, for example, an AI agent might ingest telematics and vehicle build data, assess risk in near real time, and automatically adjust coverage or pricing for standard segments. Claims agents can triage losses, analyze photos, detect potential fraud, and trigger payments for simple cases. The value is clear: shorter cycle times, more straight‑through processing, and better risk selection.
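To make the triage idea concrete, here is a minimal sketch of a straight‑through‑processing gate for simple claims. The inputs (`loss_amount`, `fraud_score`, `photo_damage_consistent`) and the thresholds are hypothetical placeholders, not a production rule set:

```python
def triage_claim(loss_amount: float, fraud_score: float,
                 photo_damage_consistent: bool) -> str:
    """Route a claim: escalate suspected fraud, auto-pay simple
    consistent losses, and send everything else to a human adjuster.
    Thresholds are illustrative placeholders."""
    if fraud_score > 0.7:                 # placeholder fraud threshold
        return "fraud_review"
    if loss_amount <= 5_000 and photo_damage_consistent:
        return "auto_pay"                 # straight-through processing
    return "adjuster_queue"               # default: human review
```

Note the default branch: anything that does not clearly qualify for automation falls back to a human queue, which is the posture the rest of this piece argues for.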
The risk is less visible. When you move from models that advise to agents that act, you effectively change who is making decisions inside the institution. If you don’t redesign ownership, escalation paths, and guardrails, you end up with automated decisions that no one can fully explain when customers, auditors or regulators start asking questions.
Decision latency as a design choice
I think about this through the lens of ‘decision latency.’ AI allows insurers to shorten the time between signal and action – repricing segments, tightening underwriting criteria, or altering claims strategies far more frequently. But speed is not inherently good. In some contexts, like fraud escalation or catastrophe response, reduced latency is essential. In others, such as complex coverage disputes, introducing friction and human review is a feature, not a bug.
A sound AI strategy in insurtech distinguishes where you want near real‑time automation and where you deliberately keep decisions slower and more deliberative. In practice, that means defining which decisions AI can make autonomously, which it can only recommend, and which must remain human‑owned regardless of model confidence.
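One way to operationalize that split is an explicit decision‑rights matrix. The sketch below is an assumption‑laden illustration (the decision names, tiers, and confidence cutoff are all placeholders); the key property is that human‑owned decisions stay human‑owned no matter how confident the model is:

```python
from enum import Enum

class DecisionMode(Enum):
    AUTONOMOUS = "agent may act"
    RECOMMEND = "agent recommends, human decides"
    HUMAN_OWNED = "human decides"

# Hypothetical decision-rights matrix; names are illustrative.
DECISION_RIGHTS = {
    "standard_auto_repricing": DecisionMode.AUTONOMOUS,
    "fraud_escalation": DecisionMode.AUTONOMOUS,
    "non_standard_underwriting": DecisionMode.RECOMMEND,
    "complex_coverage_dispute": DecisionMode.HUMAN_OWNED,
}

def may_act(decision_type: str, model_confidence: float) -> bool:
    """An agent may act only on autonomous-tier decisions, and only
    above a confidence floor. Unknown decision types default to
    human-owned; high confidence never overrides the tier."""
    mode = DECISION_RIGHTS.get(decision_type, DecisionMode.HUMAN_OWNED)
    return mode is DecisionMode.AUTONOMOUS and model_confidence >= 0.9
```

The deliberate design choice here is the deny‑by‑default lookup: a decision type nobody classified is treated as human‑owned, not autonomous.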
Agentic AI and new risk categories
Agentic AI magnifies these questions. In a recent playbook, I described how autonomous agents can transform insurance operations and how they introduce new categories of risk. A single misclassified construction type in commercial property can cascade into pricing, reinsurance purchasing, and capital models. An agent with poorly designed permissions can gradually extend its reach into sensitive systems it was never meant to touch.
These are not edge cases; they are structural properties of autonomous decision systems. My view is that insurers should treat AI agents as first‑class actors in governance, with explicit roles, privileges, and audit trails, rather than as invisible middleware. An ‘agent portfolio registry,’ for example, can catalog each agent’s purpose, dependencies and risk profile in the same way organizations manage critical human roles.
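As a sketch of what an agent portfolio registry might record, consider the following. The field names and access check are my assumptions for illustration, not a standard schema; the point is that each agent has a stated purpose, an accountable human owner, and an explicit permission boundary that is checked rather than implied:

```python
from dataclasses import dataclass

@dataclass
class AgentRecord:
    # Illustrative registry fields; not a standard schema.
    name: str
    purpose: str
    permitted_systems: set[str]
    risk_tier: str   # e.g. "low", "medium", "high"
    owner: str       # accountable human role

class AgentRegistry:
    """Catalogs AI agents as first-class actors with audit-ready records."""
    def __init__(self) -> None:
        self._agents: dict[str, AgentRecord] = {}

    def register(self, record: AgentRecord) -> None:
        self._agents[record.name] = record

    def check_access(self, agent_name: str, system: str) -> bool:
        """Deny by default: an unregistered agent, or a request outside
        the agent's permitted systems, fails and should be escalated."""
        record = self._agents.get(agent_name)
        return record is not None and system in record.permitted_systems
```

A registry like this directly addresses the permission‑creep failure mode above: an agent cannot "gradually extend its reach" if every system touch is checked against a cataloged scope.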
Competing on decision quality
Ultimately, I don’t see AI in insurtech as a race to full automation. I see it as a race to build better decision systems. That means organizations should do the following:
- Design architectures that expose uncertainty instead of hiding it inside opaque models.
- Set clear escalation thresholds so that material decisions, bias signals, or high uncertainty trigger human review.
- Keep humans visibly accountable where outcomes have lasting financial, legal or ethical consequences.
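The escalation principle above can be sketched as a single guardrail function. The inputs and threshold values are placeholder assumptions, not recommendations; what matters is the OR logic, where any single trigger is enough to pull a human in:

```python
def needs_human_review(amount: float, uncertainty: float,
                       bias_flag: bool,
                       materiality_limit: float = 50_000.0,
                       uncertainty_limit: float = 0.2) -> bool:
    """Escalate when ANY guardrail fires: the decision is financially
    material, the model is too uncertain, or a bias signal was raised.
    Limits here are illustrative placeholders."""
    return (amount >= materiality_limit
            or uncertainty >= uncertainty_limit
            or bias_flag)
```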
If we get this right, AI will not replace underwriters or claims handlers. It will focus their judgment on the problems that actually require human trade‑offs. And when something goes wrong – in insurance, something always goes wrong – we will be able to show not only what the AI did, but why a human still owns the outcome. For an industry built on trust, that is where real insurtech differentiation begins.




