The Future of Life Institute, which famously called for a pause of at least six months on training the most powerful AI systems soon after the launch of ChatGPT, is grading the safety practices of companies building state-of-the-art AI models.
Its Winter 2025 AI Safety Index gives Anthropic, OpenAI and Google DeepMind the highest marks among eight major frontier-model developers, yet rates even these leaders only in the C range, underscoring industry-wide shortcomings in transparency and safety practices.

The remaining companies – xAI, Z.ai, Meta, DeepSeek and Alibaba Cloud – received grades of D or below across most categories. The index points to persistent weaknesses in risk assessment, safety frameworks and information sharing, attributing them to limited disclosure and a lack of rigorous evaluation processes.
The report warns that existential safety remains the most critical gap: no company has presented concrete plans to control or align potential AGI-level systems. An independent expert panel scored the companies on 35 indicators using public evidence and survey responses, aiming to increase accountability as competitive pressures drive rapid AI development.
Read the report.