Google Boosts AI Security with Expanded Bug Bounties and Open-Source Protections for AI Supply Chains


Google today announced expansions to its bug bounty program and open-source security efforts, aiming to further secure frontier AI systems. The move comes as AI adoption grows rapidly across industries, raising the urgency of identifying potential vulnerabilities.

The tech giant said it is extending its existing Vulnerability Rewards Program (VRP) to include incentives for uncovering attack scenarios specific to generative AI models. Google will reward qualifying bug submissions that help uncover unfair bias, model manipulation, hallucinations, and other risks endemic to AI systems.

To guide researchers, Google published detailed criteria for what constitutes an in-scope vulnerability. These include traditional software bugs as well as emerging AI-specific attack vectors such as training data poisoning, tampering with machine learning pipelines, and injection of harmful content.

According to Google, scrutinizing AI systems requires reporting guidelines distinct from traditional digital security. The company worked with internal AI red teams to develop categories tailored to AI's novel security challenges.

In a blog post, Google stated its aim is to "incentivize more security research while applying supply chain security to AI." The company issued over $12 million in VRP rewards last year.

In addition to expanded bug bounties, Google revealed new open-source security measures intended to protect AI supply chains.

The company is leveraging two key technologies, SLSA and Sigstore, to enable universal verification of AI system provenance and integrity.

SLSA stands for Supply-chain Levels for Software Artifacts, a set of controls and standards that track how AI models are developed. This metadata provides transparency into the training data, dependencies, and other inputs used during model creation.
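As a rough illustration of the idea (not Google's actual tooling, and much simpler than the real in-toto/SLSA attestation format), a provenance record for a trained model might capture the artifact, its inputs, and the builder that produced it. The file names and builder identifier below are hypothetical:

```python
import hashlib
import json

def sha256_of(path: str) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical artifact and training input; real SLSA provenance is an
# in-toto attestation, this sketch only conveys the shape of the metadata.
model_path = "model.bin"
training_data = "train_set.tar"

provenance = {
    "subject": {"name": model_path, "sha256": sha256_of(model_path)},
    "builder": "ci://example-pipeline/train-job",  # who/what produced the model
    "materials": [                                 # inputs used during training
        {"name": training_data, "sha256": sha256_of(training_data)},
    ],
}

with open("model.provenance.json", "w") as f:
    json.dump(provenance, f, indent=2)
```

A consumer can later recompute the digests and compare them against this record to check that the model they downloaded matches what the pipeline actually built.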

Sigstore offers cryptographic signing of software and models. This helps confirm artifacts come from trusted sources and weren't tampered with post-publication.
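Sigstore itself uses keyless signing backed by a public transparency log, but the underlying check can be sketched with a plain signature scheme: the publisher signs a digest of the model, and the consumer verifies that signature before loading the artifact. A minimal sketch, assuming the `cryptography` package and a hypothetical `model.bin`:

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical model artifact; Sigstore/cosign would normally handle key
# material and record the signature in a transparency log instead.
model_bytes = open("model.bin", "rb").read()
digest = hashlib.sha256(model_bytes).digest()

# Publisher side: sign the digest of the artifact.
private_key = Ed25519PrivateKey.generate()
signature = private_key.sign(digest)
public_key = private_key.public_key()

# Consumer side: recompute the digest and verify the signature before use.
try:
    public_key.verify(signature, digest)
    print("model signature verified")
except InvalidSignature:
    print("model may have been tampered with; refusing to load")
```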

According to Google, applying SLSA and Sigstore to machine learning pipelines mirrors proven techniques for securing conventional software supply chains. The company believes transparency and verifiability will be similarly vital for AI as adoption spreads.

Google says it welcomes working with the open-source community to establish standards that meet the unique security needs of evolving AI systems. The company sees its expanded VRP and supply chain security measures as steps toward making AI safer through industry-wide cooperation.
