Anthropic has introduced a new research tool called Anthropic Interviewer, an automated interview system powered by Claude that conducts large-scale, adaptive conversations to understand how people use and feel about AI. The system appears as a pop-up for Claude.ai users, inviting them to participate in a 10- to 15-minute study.
Anthropic frames the tool as part of a broader effort to incorporate public perspectives into AI development. By gathering qualitative and quantitative data outside the chat window, the company aims to better understand how AI fits into people’s work and aspirations – and to use those insights to shape future model development, policies, and collaborations.
The company tested the tool by running 1,250 interviews with workers across the general workforce, creative fields, and science. Across all groups, participants expressed mostly positive attitudes about AI’s usefulness but raised concerns about job displacement, trust, security, and maintaining personal identity in their work.
Workers generally wanted to automate routine tasks while preserving the parts of their jobs that define their expertise. Creatives reported strong productivity gains but also stigma from peers and deep anxiety about economic disruption. Scientists said they want AI that can generate new hypotheses and assist with experiment design, but today they trust it only for tasks like writing, coding, and summarizing literature.
Survey data reflected these mixed emotions: high satisfaction paired with notable worry and frustration. While most general-workforce and creative participants feared some form of job disruption, scientists reported little concern about replacement, citing tacit knowledge, human judgment, and security constraints.
Anthropic Interviewer operates in three stages – planning, interviewing, and analysis – and lets researchers conduct qualitative interviews at a scale that would be impractical for human interviewers. Anthropic is publicly releasing the interview transcripts (with participant consent) and plans to expand the research through partnerships with creatives, scientists, and teachers. Despite its promise, the company acknowledges limitations, including sample bias, self-reporting gaps, and the inability to generalize findings globally.
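Anthropic has not published implementation details, but the three-stage flow described above can be sketched as a simple pipeline. In this illustrative example, `ask_model` is a hypothetical stand-in for a call to an LLM such as Claude (stubbed here so the code is self-contained), and the function and parameter names are assumptions, not Anthropic's actual design:

```python
def ask_model(prompt: str) -> str:
    """Stub for an LLM call; a real system would query a model API here."""
    return f"[model response to: {prompt[:40]}...]"

def plan(topic: str, n_questions: int = 3) -> list[str]:
    """Stage 1 (planning): draft an interview guide for the topic."""
    return [ask_model(f"Write question {i + 1} of {n_questions} about {topic}")
            for i in range(n_questions)]

def interview(questions: list[str], answer) -> list[dict]:
    """Stage 2 (interviewing): adaptive loop that follows up on each answer."""
    transcript = []
    for q in questions:
        a = answer(q)
        follow_up = ask_model(f"Given the answer {a!r}, ask one follow-up to {q!r}")
        transcript.append({"question": q, "answer": a,
                           "follow_up": follow_up,
                           "follow_up_answer": answer(follow_up)})
    return transcript

def analyze(transcripts: list[list[dict]]) -> str:
    """Stage 3 (analysis): summarize themes across all transcripts."""
    joined = "\n".join(str(t) for t in transcripts)
    return ask_model(f"Summarize recurring themes in:\n{joined}")

# Example run with one simulated participant giving canned answers.
guide = plan("how AI fits into your work")
session = interview(guide, answer=lambda q: "It saves me time on routine tasks.")
summary = analyze([session])
```

The adaptive element lives in stage 2: each follow-up question is generated from the participant's previous answer rather than from a fixed script, which is what distinguishes this kind of interviewer from a static survey.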