- FICO uses generative and agentic AI to automate IT tasks and boost efficiency with smaller teams.
- Task-specific models and machine learning improve accuracy and protect sensitive data.
- Continuous measurement and cost control build trust as FICO moves toward near-autonomous incident management.
Financial analytics giant FICO is using generative AI to overhaul AIOps, the practice of applying AI to IT operations, building on its long history of using analytics to improve decision-making in business processes.
“It’s transforming how we do a lot of our technical operations,” said Mike Trkay, chief information officer of FICO, in an interview with The AI Innovator. “AIOps is a fantastic example of that.”
He said FICO is applying generative AI to several operational tasks, such as dynamic knowledge creation around its systems and solutions, diagnosis of the root cause of technical problems, resolution of incidents, and even development of “hot fixes” to address issues quickly.
That means smaller teams can now handle work that once required a larger team with multiple specialists. For example, fixing a problem might have needed separate experts for coding, testing and deployment; now, an engineer with AI support can do most or all of those tasks.
But Trkay remains adamant that the technology expands what people can do rather than replacing them. “AI augments people,” he said. “I really don’t think of it as replacing people – and that’s an important message for a lot of technologists.”
At FICO, the CIO role combines traditional corporate IT with customer support and the technical operations behind the company’s software-as-a-service and platform-as-a-service offerings. That means AI adoption in technical operations isn’t an experiment – it’s a necessity.
One major shift is the use of AI agents to replace manual threshold-setting for system performance metrics, according to Trkay.
Agentic AI fits naturally into FICO’s operations because the company already thinks in terms of processes and tasks, he said. In internal use, agents can take over discrete steps such as setting thresholds, correlating telemetry, identifying root causes, executing fixes, and updating knowledge for future incidents – creating a continuous feedback loop. He emphasized that generative AI, machine learning, and agentic AI are “all tools in the toolbox” and the right choice depends on the job.
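The feedback loop Trkay describes can be sketched in deliberately simplified form. Everything below is illustrative, not FICO's actual tooling: the function names, thresholds, and remediation logic are placeholders for the discrete steps an agent could own.

```python
# Hypothetical sketch of the detect -> diagnose -> fix -> learn loop.

def detect(telemetry, thresholds):
    """Correlate telemetry against current thresholds."""
    return [name for name, value in telemetry.items()
            if value > thresholds.get(name, float("inf"))]

def diagnose(alerts):
    """Stand-in for root-cause identification."""
    return {a: f"{a} over threshold" for a in alerts}

def remediate(causes):
    """Stand-in for executing a fix; returns the actions taken."""
    return [f"restarted component for: {c}" for c in causes.values()]

def learn(knowledge, causes, actions):
    """Record what happened so the next incident resolves faster."""
    knowledge.extend(zip(causes.values(), actions))
    return knowledge

knowledge = []
thresholds = {"cpu": 90, "error_rate": 0.05}
telemetry = {"cpu": 97, "error_rate": 0.01}

alerts = detect(telemetry, thresholds)      # only cpu breaches its threshold
causes = diagnose(alerts)
actions = remediate(causes)
knowledge = learn(knowledge, causes, actions)
```

The point of the loop is the last step: each resolved incident updates the knowledge the next run draws on, which is what makes the process continuous rather than a one-shot alert pipeline.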
The company applies the same design principles from its fraud detection analytics – such as profile-driven anomaly detection – to detect abnormal system behavior before it becomes an outage.
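Profile-driven anomaly detection of this kind can be approximated with a rolling statistical profile of each metric. The sketch below is a simplified, hypothetical analogue (window sizes, z-score threshold, and the latency figures are illustrative), not a description of FICO's analytics.

```python
from collections import deque
from statistics import mean, stdev

class MetricProfile:
    """Rolling profile of one system metric; flags values that deviate
    sharply from recent normal behavior."""

    def __init__(self, window=30, z_threshold=3.0):
        self.history = deque(maxlen=window)
        self.z_threshold = z_threshold

    def observe(self, value):
        """Return True if `value` is anomalous relative to the profile."""
        anomalous = False
        if len(self.history) >= 10:  # need enough samples to profile
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.z_threshold:
                anomalous = True
        if not anomalous:
            self.history.append(value)  # only learn from normal behavior
        return anomalous

profile = MetricProfile()
for latency_ms in [100, 102, 98, 101, 99, 100, 103, 97, 100, 101, 99]:
    profile.observe(latency_ms)      # steady readings build the profile
spike_flagged = profile.observe(500)  # sudden spike deviates from profile
```

Because the profile adapts as normal readings arrive, thresholds effectively move with the system's behavior instead of being set once by hand.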
While generative AI is part of the toolkit, small language models trained for specific tasks are gaining traction inside FICO’s AIOps. These models, Trkay said, are “like a domain expert … because they know specifically the jargon, the technical terms, the contextual understanding to provide very effective answers,” which reduces hallucinations and improves accuracy in operational decision-making.
However, Trkay said machine learning remains the “workhorse” in AIOps for mission-critical decision-making and protecting sensitive telemetry data.
FICO’s AIOps environment is built on a hybrid cloud, mixing AWS, private data centers, and internal systems. The company runs commercial large language models in isolated environments, fine-tunes open-source models when needed, and develops custom models in-house with its pool of data scientists.
This approach ensures that incident data and proprietary operational knowledge stay protected. FICO doesn’t use publicly available models directly but rather its own instantiation of Amazon Bedrock, which keeps its IP isolated and protected, Trkay said.
Running AI at this scale isn’t cheap, so he applies familiar financial disciplines to AIOps. “Number one is capture the data. Two is measure it, report on it. Three is you can actually use AI to help you optimize your spend on AI,” he said, noting that different models and configurations carry different costs, and should be matched to the use case.
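The capture-then-measure discipline he outlines can be illustrated with a minimal per-model cost tracker. The model names and per-token rates below are purely hypothetical assumptions, chosen only to show why matching the model to the use case matters for spend.

```python
from collections import defaultdict

# Illustrative rates (dollars per 1K tokens); not real pricing.
COST_PER_1K_TOKENS = {"small-task-model": 0.0005, "large-general-model": 0.0150}

usage = defaultdict(int)  # model -> total tokens consumed

def record_call(model, tokens):
    """Step one: capture the data for every call."""
    usage[model] += tokens

def spend_report():
    """Step two: measure and report spend per model."""
    return {m: round(usage[m] / 1000 * COST_PER_1K_TOKENS[m], 4)
            for m in usage}

# The same 40K-token workload on each model:
record_call("small-task-model", 40_000)
record_call("large-general-model", 40_000)
report = spend_report()
```

With the spend captured per model, the third step follows naturally: routing routine tasks to the cheaper task-specific model, reserving the large general model for work that needs it.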
Trkay noted that trust in AIOps isn’t automatic. “You don’t just trust it blindly. You measure it … and you use that to govern the outcomes,” he said. Monitoring includes technical metrics like deployment velocity, pipeline resilience, and compute efficiency, along with business outcomes such as ROI, cost reduction and productivity gains.
Asked if agentic AI is brittle, Trkay said it can be if not designed and monitored properly. Observability across agents and systems – ideally through integrated AI platforms – helps detect when outcomes are trending in the wrong direction, enabling quick corrective action.
In the next 12 to 18 months, FICO aims to advance AIOps toward near-autonomous incident management. Trkay said that includes expanding dynamic thresholding, improving event correlation, and embedding autonomous “healing” actions that can fix problems without human intervention – all while maintaining responsible AI practices.