
AI Agents Help Teams Work Faster, But Can They Win Their Trust?

AI agents are showing up everywhere. They’re helping write meeting notes, summarize projects, and even answer emails.

According to our recent 2025 State of AI at Work report, 77% of workers say they already use them. Three out of four see agents not just as another productivity tool, but as a new way of working. People are planning around them too: Employees expect to delegate a third (34%) of their workload to AI in the next year, and nearly half within three years.

However, more than three in five workers (62%) say AI agents are unreliable. Without trust, adoption stalls and the potential productivity gains never fully materialize. How organizations bridge that trust gap will determine whether AI agents become real teammates or remain tools people hesitate to rely on.

What employees really think about AI agents

Workers aren’t rejecting AI agents, but they are setting boundaries. Our data shows most people are happy to let agents handle the repetitive stuff like scheduling meetings, organizing documents, and taking notes – tasks where mistakes are low-stakes and easy to fix. But when work involves creativity, strategy, or relationships, employees want to stay in control. Only 3% of workers are comfortable with AI agents representing them in meetings, for example.

This is one of the most important lessons for leaders. Employees aren’t as skeptical of AI as you might think. They just have a strong preference that AI help with tasks that are for them rather than about them. This nuanced understanding also explains one of our report’s most striking findings: Employees don’t fully trust agents today, yet they expect to delegate nearly half their work within three years.

Put differently, knowledge workers are waiting for agents to earn their trust. They are open to the idea of using agents at work but they need proof that these systems will actually work reliably and in their favor. The challenge – and opportunity – is achieving that trust.

Where accountability breaks down

Trust issues don’t stem only from personal comfort levels. Often, they’re structural.

When agents make mistakes, our research found that most employees don’t know who’s responsible. About a fifth blame IT, another fifth blame the end user, and a third simply don’t know. Oversight is equally inconsistent: Almost one third (31%) of organizations allow employees to create agents without manager approval.

This lack of structure and clarity is what we call ‘AI debt,’ the compounding costs of unreliable systems, poor data quality, and weak oversight. It’s made worse by a growing cultural divide between different levels of the organization.

Senior executives feel 85% more empowered to experiment with AI than individual contributors, according to our Scaling AI report. To close that gap, organizations need more than new tech. They need shared accountability, trust and transparency.

How to turn agents into trusted teammates

The companies that get this right will treat AI agents like teammates, not tools. That means giving them structure, oversight and space to learn like you would any new hire.

Here are four ways to get started:

  1. Build agents into real workflows. Nearly half of workers (48%) say agents misinterpret team priorities and 49% say agents don’t understand the context of their work. The fix isn’t more AI — it’s smarter integration.
    Embed agents into the natural flow of existing work, where teams already collaborate, so they can learn from real workflows instead of operating in isolation. Typical entry points include business processes such as project intake, product launches and resource management.
  2. Make roles and boundaries clear. Only 19% of organizations have defined what humans vs. AI should handle. Create a simple map. Which tasks should be human-led, which should be AI-assisted, and which should be fully automated?
    Consider multiple dimensions such as the cost of errors, the need for reliability and the long-term value each task brings to your business. Define when human review is mandatory to avoid confusion and missed accountability.
  3. Establish feedback loops. More than half (56%) of workers say agents ignore feedback and don’t improve. Assign ‘AI champions’ or agent owners within departments to collect feedback, track performance, and work with IT on improving outcomes. Continuous feedback is how both humans and machines learn.
  4. Teach teams how to manage agents. Eighty-two percent of employees want training on AI agents, but only 38% receive it. Training should do more than just tell employees how to use agents. It should show people how to supervise them.
    The same skills that apply to managing people transfer to managing agents. How do you communicate clearly and effectively what needs to be done? How do you deliver feedback? What do you do when they make a mistake? When employees understand how agents work, they’re more confident delegating responsibility.

Overall, our findings point toward optimism rather than fear. Workers might not trust AI agents yet, but they see genuine potential in them. Their questions tell us what needs to change for trust to catch up with adoption.

Organizations that thrive in this next phase will be the ones that provide agentic tools with the context, clarity and collaboration that they need to succeed. Get those right, and employees will stop working alongside AI and start working better because of it.
