Anthropic, a startup developing ethical AI technology, today announced the release of a new and improved version of its AI assistant, Claude. Claude v1.3 is designed to be safer, less vulnerable to adversarial attacks, and to provide equal or better performance across all domains compared with previous versions, according to the company.
Last month, Anthropic began rolling out broad public access to Claude, its AI assistant. Claude was built using a technique called Constitutional AI, intended to ensure the system behaves ethically and avoids potential harms.
In an email to customers, the company said that crowdworkers who evaluated the new Claude-v1.3 model preferred it two to one for precise instruction following and coding, and five to one for non-English dialogue and writing.
"We are offering a new version of our model, Claude-v1.3, that is safer and less susceptible to adversarial attacks," Anthropic said in a series of tweets announcing the new version. "For businesses using Claude, capabilities in all domains should stay the same or improve as you upgrade from previous versions. We always work to improve safety and performance in tandem."
Anthropic also announced it is switching to a token-based pricing model for Claude starting April 15. The new model charges based on the number of tokens (units of text typically a few characters long) rather than the number of characters. Anthropic said that for English-language use, prices may decrease slightly, while non-English use could see variable price increases depending on the language. Enterprise customers will remain on character-based pricing until their existing contracts end.
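To see why the language matters, consider a rough sketch of the two billing schemes. The per-unit rates and the roughly-four-characters-per-token heuristic below are illustrative assumptions for English text, not Anthropic's actual prices or tokenizer; languages that tokenize less efficiently yield more tokens per character, which is why non-English costs can rise under token-based billing.

```python
# Hypothetical sketch of character-based vs token-based billing.
# All rates and the chars_per_token heuristic are illustrative assumptions.

def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate: English text averages ~4 characters per token.
    Other languages often yield fewer characters per token (more tokens)."""
    return max(1, round(len(text) / chars_per_token))

def character_cost(text: str, price_per_million_chars: float) -> float:
    """Cost under a character-based pricing model."""
    return len(text) / 1_000_000 * price_per_million_chars

def token_cost(text: str, price_per_million_tokens: float,
               chars_per_token: float = 4.0) -> float:
    """Cost under a token-based pricing model, using the estimate above."""
    return estimate_tokens(text, chars_per_token) / 1_000_000 * price_per_million_tokens

prompt = "Summarize the quarterly report in three bullet points. " * 100
print(f"characters: {len(prompt)}, estimated tokens: {estimate_tokens(prompt)}")
```

Lowering `chars_per_token` (as a tokenizer might for non-English text) raises the token count for the same string, and with it the token-based cost, while the character-based cost stays fixed.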
Anthropic said it has not yet finalized a deprecation plan for Claude v1.0 through v1.2 but will announce one shortly. Customers using the Claude API and integrations such as Slack will now interface directly with Claude v1.3.
The release of Claude v1.3 comes as tech companies face increasing scrutiny over the safety, ethics, and governance of AI systems. Anthropic aims to set itself apart by prioritizing transparency and safety in its AI assistant, Claude.