Microsoft Launches New Correction Capability to Reduce AI Hallucinations

Microsoft has unveiled a new correction capability for its Azure AI Content Safety service, building on the groundedness detection feature introduced earlier this year. The capability targets the persistent challenge of AI hallucinations, going beyond merely identifying inaccuracies to actively correcting them in real time.

The groundedness detection feature, launched in March, has been helping developers spot ungrounded or hallucinated content in AI outputs. However, customers sought more than just detection, prompting Microsoft to develop this correction capability.

"What else can we do with this information once it's detected besides blocking?" was a common query from users, highlighting the limitations of traditional content filters in tackling AI-specific risks.

The new correction feature works by first detecting ungrounded content against connected grounding documents. When an inaccuracy is identified, the service sends a request to the AI model to correct it. The language model then assesses the flagged content against the grounding document, either filtering out completely unrelated information or rewriting sentences to align with the source material.

This process unfolds in several steps (sketched in code after the list):

  1. Detection of ungrounded segments in AI-generated content
  2. Explanation of why certain text was flagged (if reasoning is enabled)
  3. Real-time rewriting of inaccurate portions (when correction is activated)
  4. Delivery of the corrected content to the user
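
In practice, this flow is exposed through the Azure AI Content Safety REST API. The sketch below shows what a call might look like in Python; the endpoint path, API version, and field names follow Microsoft's public preview documentation and may change in later releases, and all resource names and keys are placeholders.

```python
# A minimal sketch of Azure AI Content Safety groundedness detection
# with correction enabled, based on the public preview. Field names
# and the API version may differ in later releases.
import requests

CONTENT_SAFETY_ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
CONTENT_SAFETY_KEY = "<your-content-safety-key>"                                 # placeholder

url = f"{CONTENT_SAFETY_ENDPOINT}/contentsafety/text:detectGroundedness"
params = {"api-version": "2024-09-15-preview"}  # preview version at launch
headers = {
    "Ocp-Apim-Subscription-Key": CONTENT_SAFETY_KEY,
    "Content-Type": "application/json",
}

body = {
    "domain": "Generic",      # or "Medical"
    "task": "Summarization",  # or "QnA"
    # The AI output to check, plus the grounding source it must match.
    "text": "The warranty covers accidental damage for five years.",
    "groundingSources": [
        "Our standard warranty covers manufacturing defects for two "
        "years. Accidental damage is not covered."
    ],
    # Step 2's explanations can be requested with "reasoning": True;
    # here we request step 3's real-time rewrite instead.
    "correction": True,
    # Reasoning and correction are performed by a linked Azure OpenAI
    # deployment, which the service calls on your behalf.
    "llmResource": {
        "resourceType": "AzureOpenAI",
        "azureOpenAIEndpoint": "https://<your-aoai>.openai.azure.com",  # placeholder
        "azureOpenAIDeploymentName": "<your-gpt-deployment>",           # placeholder
    },
}

resp = requests.post(url, params=params, headers=headers, json=body)
result = resp.json()

# Steps 1 and 4: flagged spans come back alongside the corrected text.
# Response field names per the preview docs; verify against the
# current API reference.
print(result.get("ungroundedDetected"))  # True if ungrounded content found
print(result.get("ungroundedDetails"))   # the flagged segments
print(result.get("correctionText"))      # rewritten, grounded output
```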

Katelynn Rothney, writing on the Microsoft blog, emphasized the importance of this development: "Empowering our customers to both understand and take action on ungrounded content and hallucinations is crucial, especially as the demand for reliability and accuracy in AI-generated content continues to rise."

Of course, this correction capability is not a silver bullet. While it offers a promising solution, Microsoft also recommends additional tactics for grounding generative AI applications. These include carefully crafting system messages, curating reliable data sources, and fine-tuning generation and retrieval parameters.
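
To illustrate the first and last of those tactics, here is a hedged sketch (not code from Microsoft's post) of a system message that confines an Azure OpenAI deployment to supplied sources, paired with a conservative temperature. The endpoint, key, and deployment name are placeholders.

```python
# An illustrative sketch of two grounding tactics: a restrictive
# system message and a low sampling temperature. Resource names
# are placeholders, not values from the article.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-aoai>.openai.azure.com",  # placeholder
    api_key="<your-key>",                                   # placeholder
    api_version="2024-06-01",
)

SYSTEM_MESSAGE = (
    "Answer using ONLY the provided sources. If the sources do not "
    "contain the answer, say you don't know. Do not speculate."
)

sources = "Our standard warranty covers manufacturing defects for two years."

response = client.chat.completions.create(
    model="<your-gpt-deployment>",  # placeholder deployment name
    messages=[
        {"role": "system", "content": SYSTEM_MESSAGE},
        {
            "role": "user",
            "content": f"Sources:\n{sources}\n\n"
                       "Question: Does the warranty cover accidental damage?",
        },
    ],
    temperature=0.2,  # lower temperature discourages speculative phrasing
)
print(response.choices[0].message.content)
```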

Chris McKay is the founder and chief editor of Maginative. His thought leadership in AI literacy and strategic AI adoption has been recognized by top academic institutions, media, and global brands.
