
Asana CIO: AI Agents Work Best as Teammates, Not Tools

  • AI agents are widely adopted but often unreliable, forcing workers to redo their work.
  • Asana’s AI Teammates use context, checkpoints and control to deliver more accurate, transparent results, according to CIO Saket Shrivastava.
  • Human-AI collaboration, not full autonomy, is key to improving productivity and reducing costs.

As enterprises rush to adopt generative and agentic AI, the results so far have often fallen short of expectations. Companies eagerly deploy agents to automate tasks, only to discover that the outputs are unreliable, lack context or require extensive human rework.

In a survey of 2,000 U.S. and U.K. knowledge workers from Asana’s Work Innovation Lab, the company found that adoption of AI agents is rising quickly, but confidence in them remains low. Workers said they want to delegate 27% of their work to AI today, a figure they expect to grow to 43% within three years. But 62% find AI agents “unreliable,” more than half say agents ignore feedback or hallucinate, and 54% report having to redo the agents’ work.

With these findings in mind, Asana believes it has found a way to close that performance gap with AI Teammates, a class of collaborative agents introduced today and designed to work as coworkers alongside human employees rather than simply function as tools. That means these agents have the right context, defined responsibilities and embedded feedback loops, with accuracy as the top performance metric.

“Humans are willing to try out AI, but the results are not great at this point in time,” Saket Shrivastava, Asana’s chief information officer, told The AI Innovator today from the Asana Work Innovation Summit in London. He said what’s missing is the three Cs: context, checkpoints and control. In each of these areas, he said, “AI Teammates does an amazing job at being able to capture that value.”

AI Teammates succeed because they understand a company’s business landscape across departments and jobs, Shrivastava explained. While the industry works toward making AI agents autonomous, the “true opportunity” lies in human and agent collaboration, he added. Business workflows affect many teams, tap diverse datasets and touch the entire organization. To succeed, AI agents must have access to a company’s operational blueprint to see who is doing what, when, how and why.

Unlike traditional agents, AI Teammates are not black boxes. Instead, they operate transparently within Asana’s Work Graph – a data model that maps an organization’s tasks, projects, goals and dependencies. These agents also respond to feedback and learn from it, according to Shrivastava.

This visibility extends to costs. Since every prompt and output carries a cost, AI agents working in the background can rack up substantial bills before a company realizes it. Asana developed AI Teammates with administrative visibility and usage limits to keep AI spending predictable.
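Asana hasn’t published how its usage limits are enforced; purely as an illustration of the idea, a per-agent budget guard along these lines (all class names and per-token rates here are hypothetical, not Asana’s API) can keep background spend predictable by refusing any model call that would push an agent past a hard cap:

```python
class BudgetExceeded(Exception):
    """Raised when a call would push an agent past its spending cap."""


class UsageMeter:
    """Tracks one agent's cumulative spend and blocks calls past a hard cap."""

    def __init__(self, monthly_cap_usd: float):
        self.cap = monthly_cap_usd
        self.spent = 0.0

    def charge(self, prompt_tokens: int, output_tokens: int,
               usd_per_1k_prompt: float = 0.003,
               usd_per_1k_output: float = 0.015) -> float:
        """Price a call; raise before spending if it would exceed the cap."""
        cost = (prompt_tokens / 1000) * usd_per_1k_prompt \
             + (output_tokens / 1000) * usd_per_1k_output
        if self.spent + cost > self.cap:
            raise BudgetExceeded(f"cap ${self.cap:.2f} would be exceeded")
        self.spent += cost
        return cost


meter = UsageMeter(monthly_cap_usd=50.0)
meter.charge(prompt_tokens=2000, output_tokens=500)  # a small call passes
print(f"${meter.spent:.4f} of ${meter.cap:.2f} used")
```

An administrator would set the cap per Teammate, which is what gives the "predictable" property: the worst-case monthly bill is known before the agent runs.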

Diving into the 3 Cs

Asana is betting that the three Cs embodied in AI Teammates are the winning combination for making agentic AI succeed in the enterprise.

First is context. Most AI systems, even powerful large language models (LLMs), work with limited situational awareness. They generate outputs based only on the prompts they’re given, often without understanding the broader goals or dependencies of a business process. That lack of context is a major reason AI can hallucinate or confidently produce incorrect answers.

“There are so many times wherein these agents or AI can go rogue. It can hallucinate and it can give you incorrect results,” he said. “Sometimes people see it, and they get jaded and disappointed and walk away. Sometimes people don’t realize it, and then they go down the path of incorrect execution, trusting AI, which confidently can lie to you.”

“And this is where I think Asana, with our Work Graph data model – what we call the ‘pyramid of clarity’ – is purpose-built where you’re able to structure any work that you do at any organization. Because you have that structure, you’re able to have the right context provided to LLMs. That’s why the output you’ll get from AI from Asana is going to be better quality.”

The second pillar is checkpoints. AI Teammates make their reasoning visible, showing step-by-step plans and decision logic. That transparency lets teams understand what the AI is doing and why – and step in to course-correct if needed.

“How are we designing a platform for workflows where human and AI can interact with AI being able to show its work – the explainability?” Shrivastava said. “AI Teammates can do that. … It shows the thinking and reasoning against which it’s making work happen.”
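The checkpoint pattern Shrivastava describes, an agent that surfaces its step-by-step plan and rationale so a human can approve or redirect before anything executes, can be sketched in a few lines. This is a minimal illustration of the general pattern, not Asana’s implementation; all names are hypothetical:

```python
from dataclasses import dataclass, field


@dataclass
class Step:
    description: str
    rationale: str     # the "why" the agent must show its work for
    approved: bool = False


@dataclass
class AgentPlan:
    """A plan the agent must surface before acting, so a human
    can inspect, approve, or edit every step."""
    goal: str
    steps: list = field(default_factory=list)

    def show_work(self) -> str:
        """Render the plan and per-step reasoning for human review."""
        lines = [f"Goal: {self.goal}"]
        for i, s in enumerate(self.steps, 1):
            mark = "[approved]" if s.approved else "[pending]"
            lines.append(f"  {i}. {mark} {s.description} (why: {s.rationale})")
        return "\n".join(lines)

    def approve(self, index: int) -> None:
        self.steps[index].approved = True

    def ready_to_execute(self) -> bool:
        """Nothing runs until every step has a human sign-off."""
        return all(s.approved for s in self.steps)


plan = AgentPlan("Draft Q3 campaign brief")
plan.steps.append(Step("Pull last quarter's brief", "reuse the approved structure"))
plan.steps.append(Step("Draft new messaging section", "goals changed this quarter"))
print(plan.show_work())
```

The key design choice is that execution is gated on `ready_to_execute()`: the explainability isn’t a log written after the fact, it is the precondition for the agent doing anything at all.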

Finally, control addresses enterprise concerns about governance, security and cost. AI Teammates operate under the same permissions and access controls as human users in Asana, and customer data is never used for model training. Organizations can decide who creates and accesses AI Teammates, what data they can see, and even which LLM they use – including custom models fine-tuned in-house.

Pre-configured out of the box

AI Teammates are a way to scale work without necessarily scaling headcount. Shrivastava offered a simple example from his own IT team:

“As an IT leader, as a CIO, I could have gone and hired maybe an IT support specialist. Now I have an AI Teammate that is doing the job,” Shrivastava said. “It investigates. It will see a ticket coming in. It will triage it. It will see if it’s got complete information or not. It can respond on behalf of a human as well and keep the human in the loop.”

Shrivastava stressed that the aim is not to replace people but to elevate their roles. “It’s not to say that you don’t need any” human staff, he said. “You have people to oversee and supervise and approve and have the checkpoints be provided. You just need fewer of those people, and maybe you can redirect those people to do something more complex and strategic.”
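Shrivastava’s IT support example maps onto a simple triage loop: check a ticket for completeness, handle the routine cases, and route everything else to a human supervisor. A rough sketch of that workflow (hypothetical field names and logic, not Asana’s code) might look like:

```python
def triage_ticket(ticket: dict) -> dict:
    """Triage one IT ticket the way the quoted workflow describes:
    check completeness, classify, and keep a human in the loop."""
    required = ("summary", "requester", "priority")
    missing = [f for f in required if not ticket.get(f)]
    if missing:
        # Incomplete ticket: the agent replies on its own, asking
        # the requester for the missing details.
        return {"action": "request_info", "missing": missing,
                "needs_human": False}
    if ticket["priority"] == "high":
        # High-priority issues are escalated, never auto-resolved.
        return {"action": "escalate", "needs_human": True}
    # Routine ticket: the agent drafts a reply, but a human
    # supervisor approves it before it goes out.
    return {"action": "propose_reply", "needs_human": True}


print(triage_ticket({"summary": "VPN down", "requester": "ana",
                     "priority": "high"}))
```

Note where the human sits in this sketch: the agent acts alone only on the safe, mechanical step (requesting missing information), while anything with consequences carries a `needs_human` flag.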

Out of the box, Asana is offering AI Teammates preconfigured for specific functions in marketing, product development, IT, operations and project management. For marketing teams, a ‘campaign strategist’ agent can plan and coordinate campaigns, draft briefs, propose timelines, and track deliverables. For engineering teams, a ‘sprint accelerator’ agent can create and review sprint goals, monitor progress, and flag blockers. In compliance, an AI agent can audit vendor requests against security standards and draft summaries for decision-makers.

“AI Teammates don’t start from a blank slate. They inherit all the rich context that we have,” the CIO said.

Organizations can also build their own Teammates with custom prompts and domain-specific knowledge. They can restrict access to specific projects or data sets, ensuring that the AI has only the information it needs – much like assigning a human team member to a department.
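That department-style scoping amounts to an allow-list: the agent can read only the projects it was explicitly assigned. As a minimal sketch of the idea (hypothetical names, not Asana’s access-control API):

```python
class ScopedContext:
    """Gives an agent read access only to an explicit allow-list of
    projects, much like assigning a human hire to one department."""

    def __init__(self, allowed_projects: set, store: dict):
        self.allowed = allowed_projects
        self.store = store  # project name -> project data

    def read(self, project: str) -> str:
        if project not in self.allowed:
            raise PermissionError(f"agent has no access to {project!r}")
        return self.store[project]


store = {"marketing/q3": "campaign notes", "finance/payroll": "salaries"}
ctx = ScopedContext({"marketing/q3"}, store)
print(ctx.read("marketing/q3"))  # allowed: inside the agent's scope
```

Reading any project outside the allow-list (here, `finance/payroll`) raises an error, so the agent’s context window can only ever contain data it was entitled to see.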

Shrivastava believes the new offering addresses the frustrations many organizations feel with agentic AI today. “People want to try AI agents, but they’re not getting the quality of output right,” he said. “Because of that context, those checkpoints, and the controls, the outputs are better. … This will help bridge that divide between expectation and reality.”

Early adopters are already reporting results. One global advertising company is managing all of its requests – in the thousands – through an AI Teammate with humans supervising, Shrivastava said. “AI is able to check for creative files against a database of specifications that have been provided and confirmed to those standards, so they’ve got a massive win.” A financial services company has seen “significant” savings in time and human resources by using AI agents to analyze an “extensive” portfolio of price deals, he said. “Their AI Teammate is completing comprehensive analysis efficiently that a human would have to do.”

Not fully autonomous – and that’s the point

Unlike some competitors, Asana is not claiming to deliver fully autonomous agents. “It’s a bit of both,” Shrivastava said. “Agentic AI is not ready for autonomous at this point. It’s human and AI. It will do the work that you expect it to do, and then humans play the role of a supervisor, checking its work and approving it. I think that really is the key.”

That philosophy also sets Asana apart from rivals such as Salesforce, whose Agentforce platform focuses more narrowly on CRM workflows. “I strongly believe that the differentiating factor is the context,” Shrivastava said. “Salesforce is not built for that context information. If you have that context, you’re able to feed AI that context across all of the work that’s happening across the organization. And it’s not about more context being provided to AI or less context. It’s about the right context being provided to AI.”

“I think that truly is the differentiating factor, in my opinion, plus the explainability” of AI Teammates, he added.

AI Teammates are currently available in public beta, with general availability expected in the first quarter of 2026. Asana is already planning future capabilities, including a “workflow gallery” of prebuilt templates to help customers automate more business processes without starting from scratch.

“The future enterprise no longer (comprises just) a human workforce – it’s a human and AI workforce,” Shrivastava said. “And soon enough, maybe we’ll all have our own personal AI agents.”
