Google recently unveiled new initiatives that use AI to tackle the proliferation of misinformation online and give users tools to assess content credibility. These efforts highlight the company's ongoing attempt to balance open access to information with protecting users from deception and abuse.
Rapid Prototyping of AI to Identify Harmful Content
Amanda Storey, Director of Trust and Safety at Google, explained how the company is leveraging large language models like PaLM 2 to quickly build and train smaller, more specialized AI models that can detect specific kinds of abuse on its products. By shortening development timelines from weeks or months to days, Google aims to respond faster to emerging abuse trends. Early testing suggests the technique shows promise for accelerating content moderation, especially around rapidly evolving risks.
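Conceptually, the approach resembles using a large model to bootstrap training data for a much smaller, cheaper classifier. The sketch below illustrates that pattern under stated assumptions: the label_with_llm stub stands in for a real PaLM 2 call, and the seed texts are invented examples, not anything from Google's systems.

```python
# A minimal sketch of LLM-assisted rapid prototyping: a large model labels a
# small seed set, and a lightweight classifier is trained on those labels.
# label_with_llm is a hypothetical stand-in for a real PaLM 2 API call.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def label_with_llm(text: str) -> int:
    """Placeholder for prompting a large model to label text as
    policy-violating (1) or benign (0). Toy heuristic for illustration."""
    return int("miracle cure" in text.lower())

# Hypothetical seed examples an analyst might collect for a new abuse trend.
seed_texts = [
    "This miracle cure melts fat overnight, doctors hate it",
    "Buy the miracle cure before regulators ban it",
    "Our quarterly report is now available on the investor page",
    "Join us Saturday for the neighborhood cleanup event",
]
labels = [label_with_llm(t) for t in seed_texts]

# Train a small, fast classifier on the LLM-provided labels.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(seed_texts, labels)

print(classifier.predict(["New miracle cure shocks scientists"]))  # expected: [1]
```

The payoff of this pattern is iteration speed: when a new abuse trend emerges, only the labeling prompt changes, while the small downstream model stays cheap enough to retrain and deploy in days.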
However, transparency concerns persist regarding these AI models' inner workings and potential for errors. Critics also question whether AI can effectively moderate content given its limited ability to comprehend nuance and contextual cues.
Google also highlighted measures it is implementing to reduce misinformation and improve the credibility of AI-generated content. These include new tools and updated policies, all aimed at giving users a clearer understanding of the content they encounter online.
Earlier this year, Google announced "About this Image", a tool that provides users with contextual insights like an image's online history, its metadata (including whether it is AI-generated), and details on how other sites use and describe it. This aims to reduce out-of-context use of images and improve visual media literacy. About this Image is now rolling out to English-language users globally in Google Search.
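One signal such a tool can surface is standardized metadata. The IPTC DigitalSourceType field, for instance, lets image generators mark a file as "trainedAlgorithmicMedia". The sketch below only illustrates checking for that marker in a file's embedded XMP; it is not a description of Google's implementation, and the file name is hypothetical.

```python
# Illustrative check for the IPTC "trainedAlgorithmicMedia" marker that some
# image generators embed in XMP metadata. A naive byte scan, shown only to
# make the metadata signal concrete; not how Google implements it.
from pathlib import Path

AI_MARKER = b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"

def declares_ai_source(image_path: str) -> bool:
    """Return True if the file's embedded metadata declares an AI source type."""
    return AI_MARKER in Path(image_path).read_bytes()

print(declares_ai_source("downloaded_image.jpg"))  # hypothetical local file
```

Metadata like this is easy to strip, of course, which is why About this Image pairs it with signals a publisher cannot remove, such as an image's indexing history and how other sites describe it.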
Google has also expanded the capabilities of Fact Check Explorer, a tool designed to help journalists and fact-checkers investigate suspect images and claims. A new beta version lets users upload an image or paste its URL to see whether it has been fact-checked elsewhere, and provides an overview of how that image has been used in different contexts online over time.
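Fact Check Explorer itself is a web interface, but Google also offers a programmatic counterpart, the Fact Check Tools API, whose claims:search endpoint returns existing fact checks for a text query. A minimal sketch follows, assuming an API key stored in a FACTCHECK_API_KEY environment variable (the variable name and sample query are placeholders):

```python
# Query Google's Fact Check Tools API (the programmatic counterpart to
# Fact Check Explorer) for existing fact checks matching a text claim.
# Assumes an API key is stored in the FACTCHECK_API_KEY environment variable.
import json
import os
import urllib.parse
import urllib.request

def search_fact_checks(query: str) -> list:
    """Return the list of claims the API matched to the query."""
    params = urllib.parse.urlencode({
        "query": query,
        "key": os.environ["FACTCHECK_API_KEY"],
    })
    url = f"https://factchecktools.googleapis.com/v1alpha1/claims:search?{params}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp).get("claims", [])

for claim in search_fact_checks("5G towers cause illness"):
    review = claim["claimReview"][0]
    print(f"{claim.get('text')} -> {review.get('textualRating')} "
          f"({review.get('publisher', {}).get('name')})")
```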
The digital landscape is riddled with challenges posed by misinformation and eroding trust. Google's new tools represent a proactive approach to addressing these concerns, but the company acknowledges that no single entity can tackle such complex challenges alone. And while large language models show promise for accelerating content moderation and empowering user verification, technology alone cannot solve these systemic issues.
Ultimately, combating misinformation requires ongoing diligence and coordinated efforts between tech companies, journalists, fact-checkers, academics, policymakers and civil society. There are no easy solutions, but a shared commitment to user empowerment, media literacy and preserving online credibility will be key.