Anthropic ditches safety promises
(CNN) — AI giant Anthropic is rolling back some of its key safety guidelines.
The company posted a new policy Tuesday, Feb. 24, with the caveat that it will likely evolve.
One key change is the removal of a commitment not to train models so powerful the company can’t control them.
It’s a surprising move for a company founded by workers who defected from OpenAI over such concerns.
Anthropic has a history of placing guardrails on its own AI and issuing guidance for the industry at large.