Anthropic Calls for Stringent Security Safeguards for Frontier AI Models


As artificial intelligence rapidly advances, the line between cutting-edge innovation and national security is becoming increasingly blurred. Anthropic is urging labs developing frontier AI models to implement stringent cybersecurity measures to prevent theft or misuse of this potentially world-altering technology.

The AI safety startup recently laid out recommendations for securing advanced models, while also offering insight into the steps it is taking to ensure its own models are developed securely.

Anthropic says these controls are needed because of AI's growing strategic importance, warning that future advanced AI models could profoundly impact economic and national security, both domestically and globally.

"Given the strategic nature of this technology, frontier AI research and models must be secured to levels far exceeding standard practices for other commercial technologies."

Anthropic pointed to two core practices that labs should adopt:

  • Multi-party authorization for AI-critical infrastructure. This practice, used across many domains, ensures that no single person has unchecked access to production-critical environments. Instead, time-limited access can be granted on the basis of a justified business request approved by a coworker. Even small, emerging labs with limited resources can implement this type of access control (a minimal sketch of the pattern follows this list).
  • Secure model development framework. This recommendation encourages adoption of the NIST Secure Software Development Framework (SSDF) and the Supply Chain Levels for Software Artifacts (SLSA). Successfully integrated, these frameworks not only secure the model development environment but also provide a chain of custody for deployed AI systems.
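
To make the first recommendation concrete, here is a minimal sketch of peer-approved, time-limited access control in Python. The names (AccessRequest, approve, is_authorized) and the four-hour default are illustrative assumptions, not details from Anthropic's post:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

# Illustrative only: these class and function names are assumptions,
# not part of Anthropic's recommendations or any real access-control API.

@dataclass
class AccessRequest:
    requester: str
    resource: str            # e.g. "model-weights-prod"
    justification: str       # the documented business reason
    approver: Optional[str] = None
    expires_at: Optional[datetime] = None

def approve(request: AccessRequest, approver: str, ttl_hours: int = 4) -> None:
    """A coworker (never the requester) approves; access is time-limited."""
    if approver == request.requester:
        raise PermissionError("self-approval violates multi-party authorization")
    request.approver = approver
    request.expires_at = datetime.now(timezone.utc) + timedelta(hours=ttl_hours)

def is_authorized(request: AccessRequest) -> bool:
    """Access requires an independent, unexpired approval."""
    return (
        request.approver is not None
        and request.expires_at is not None
        and datetime.now(timezone.utc) < request.expires_at
    )

# Usage: a request is only honored after a second person signs off.
req = AccessRequest("alice", "model-weights-prod", "debug inference regression")
approve(req, approver="bob")
assert is_authorized(req)
```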

Together, these controls can provide robust model provenance and prevent unauthorized access or changes.
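
As a rough illustration of what that chain of custody enables, the sketch below checks deployed model weights against a digest recorded at build time. The file names and manifest field are hypothetical and far simpler than real, signed SLSA provenance attestations:

```python
import hashlib
import json
from pathlib import Path

# Illustrative only: "manifest.json" and its "sha256" field are assumed
# stand-ins for a real, signed SLSA provenance attestation.

def sha256_of(path: Path) -> str:
    """Hash the artifact in chunks so large weight files need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(weights: Path, manifest: Path) -> bool:
    """Compare deployed weights against the digest recorded when they were built."""
    recorded = json.loads(manifest.read_text())
    return sha256_of(weights) == recorded["sha256"]

if __name__ == "__main__":
    ok = verify_artifact(Path("model.safetensors"), Path("manifest.json"))
    print("provenance verified" if ok else "MISMATCH: do not deploy")
```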

Anthropic says the immediate focus should be on protecting advanced models, their weights, and the research that contributes to their development. Both governments and frontier AI labs, it argues, should consider treating the advanced AI sector as "critical infrastructure". This involves developing robust cybersecurity best practices, fostering public-private partnerships, and eventually using government procurement or regulatory powers to enforce compliance.

The company is leading by example, implementing two-party controls, SSDF, SLSA, and other cybersecurity best practices. As model capabilities scale, it believes even more stringent security protections will be needed, requiring an iterative process in consultation with government and industry.

The company acknowledges AI's immense potential to benefit humanity, but also recognizes the risks the technology poses if not handled thoughtfully. With AI progress accelerating rapidly, responsible development demands proactive security to prevent catastrophic misuse. Anthropic's call opens a vital conversation about securing AI for the future.
