Tech Industry Launches Coalition for Secure AI

Tech Industry Launches Coalition for Secure AI

Some of the tech industry's biggest names are joining forces to ensure the safe development and deployment of AI. Google, OpenAI, Microsoft, Amazon, NVIDIA, Intel, and other major tech companies have formed the Coalition for Secure AI (CoSAI). Announced today at the Aspen Security Forum, this open-source initiative aims to set industry-wide standards for responsible AI development and deployment.

OASIS, a global standards body, will host CoSAI as it tackles the fragmented landscape of AI security practices. The coalition brings together industry leaders and academic experts to create a unified framework for secure AI systems.

"We need to democratize the knowledge and advancements essential for secure AI integration and deployment," said David LaBianca, Google's representative and CoSAI Governing Board co-chair. His statement highlights the urgent need for common ground in AI security across the tech industry.

CoSAI will focus on the entire lifecycle of AI systems, from building and integration to deployment and operation. The coalition aims to mitigate risks like model theft, data poisoning, prompt injection, and inference attacks - challenges that have grown more prominent as AI systems become more sophisticated and widespread.

Cisco's Omar Santos, also on the CoSAI Governing Board, emphasized collaboration: "We'll combine our expertise and resources to quickly develop robust AI security standards and practices that benefit the whole industry."

The coalition will start with three main workstreams:

  1. Software Supply Chain Security for AI Systems: This workstream will focus on enhancing composition and provenance tracking to secure AI applications.
  2. Preparing Defenders for a Changing Cybersecurity Landscape: Addressing investments and integration challenges in both AI and classical systems.
  3. AI Security Governance: Developing best practices and risk assessment frameworks to ensure comprehensive AI security.

These efforts will address issues like enhancing tracking of AI components, tackling integration challenges between AI and traditional systems, and developing best practices for AI security risk assessment.

CoSAI's formation comes at a critical time in AI development. As AI technologies rapidly advance and spread across various sectors, the need for standardized security practices has become urgent. Currently, developers and organizations struggle with inconsistent and isolated guidelines, making it hard to implement strong security measures.

Industry leaders strongly support the initiative. Google's Heather Adkins, Vice President and Cybersecurity Resilience Officer, said: "We've used AI for years and see its potential for defenders, but we also recognize the opportunities for adversaries. CoSAI will help organizations of all sizes integrate AI securely and responsibly."

Microsoft's Yonatan Zunger, CVP of AI Safety & Security, stressed the company's commitment to prioritizing safety and security in AI development. "Through CoSAI, Microsoft continues its mission to empower everyone and every organization to do more - securely," Zunger stated.

The coalition welcomes all practitioners and developers to contribute to its open-source community. This inclusive approach aims to speed up the development of secure AI practices across the industry by freely sharing knowledge and resources.

As AI reshapes industries and society, initiatives like CoSAI play a crucial role in building a foundation of security and trust for this technological revolution. The impact of this coalition will unfold in the coming months and years, marking a significant step towards a more secure and responsible AI future.

Let’s stay in touch. Get the latest AI news from Maginative in your inbox.

Subscribe