Mrinank Sharma, who led the security measures research team at Anthropic, resigned less than a year after the unit officially launched. His departure sparked debate within the tech community, not only because of his senior role but also because of the tone of his resignation letter. In it, Sharma warns that “the world is in peril,” referring to a series of overlapping and simultaneously evolving crises. Many readers interpreted this message as a broader warning about the rapid development of advanced AI systems.

In brief
- Former Anthropic security chief warns that AI capabilities are advancing faster than supervisory frameworks.
- Investor pressure and global rivalry are accelerating AI development priorities.
- Structural incentives often favor rapid deployment rather than careful governance.
- The resignation is part of a growing wave of departures by AI safety leaders from large companies.
AI security: problematic incentives
Sharma oversaw security research related to Anthropic's large language model, Claude, widely considered a major competitor to OpenAI's ChatGPT.
The Security Measures Research Team, introduced last February, had the mission of identifying and mitigating risks associated with the systems Anthropic deploys. Its work included studying misuse scenarios, systemic failures, and potential long-term societal consequences.
According to his letter, Sharma was working on defense mechanisms to reduce the risks of AI-assisted bioterrorism and helped write one of the company's first security reports. His latest research project focused on how AI assistants could influence human behavior or reshape fundamental aspects of identity.
It is important to clarify that Sharma did not accuse Anthropic of misconduct. Instead, he presented his decision as rooted in deeper moral and structural concerns about the overall direction of the sector.
He wrote that the industry may reach a point where wisdom must advance as quickly as technological power, or risk being outpaced. He also discussed how difficult it is for organizations to ensure that their stated values actually guide their decisions. Describing the current context as a “poly-crisis” fueled by a deeper “meta-crisis,” he used a philosophical register to convey a sense of urgency.
Among the main concerns raised in his letter:
- AI capabilities are advancing faster than social and ethical readiness.
- Competitive pressure between companies and nations influences research priorities.
- Incentives reward speed and scale rather than prudence.
- The long-term cultural and human impacts remain poorly understood.
Investor pressure and geopolitical rivalries around AI
Some commentators interpreted his comments as a sign of internal disagreements at Anthropic. Others say his concerns reflect broader tensions across the AI sector, rather than a conflict specific to a single company. By avoiding any direct accusation or mention of individuals, Sharma reinforces the idea that his concerns are systemic rather than personal.
In recent months, several leading researchers and policymakers have left major AI companies, often citing concerns about the pace of development. With global technology spending expected to reach around $5.6 trillion in 2026, and AI at the center of this dynamic, the stakes continue to rise.
Governments now view AI not only as a commercial advancement, but also as critical infrastructure linked to national security, economic productivity, and geopolitical influence. At the same time, companies face pressure from investors, quarterly performance targets and increased competition. These factors shape the context in which debates about AI safety take place.
An accelerated AI race
Industrial dynamics currently driving the development of AI include:
- Intense competition between the main laboratories to launch ever more powerful models.
- Investor pressure linked to valuation and market shares.
- The efforts of States to guarantee their technological leadership.
- Growing business demand for continuous model updates.
In this context, Sharma suggests that security teams may find it difficult to exert meaningful influence, even within organizations that publicly display their commitment to responsible AI. Structural incentives, he implies, often favor faster engineering at the expense of deeper ethical thinking.
It is worth noting that Sharma is not predicting disaster. His remarks center on the notion of balance, emphasizing the need to match technological power with an equivalent level of wisdom. His tone remains measured but carries a clear warning. As he departs, he indicates that his personal beliefs are no longer fully aligned with the industry's current direction.
Anthropic did not indicate that the resignation reflected an internal conflict. However, it comes at a time of intensifying scrutiny from lawmakers, researchers and civil society, with calls for clearer safeguards as AI systems become ever more capable and widely deployed.
