Microsoft Unveils Tool to Combat AI Hallucinations
Since the introduction of ChatGPT in 2022, AI chatbots have experienced significant growth in both capabilities and user adoption. However, a common challenge these systems face is the tendency to produce "hallucinations," or factually incorrect information. Microsoft has announced a new tool designed to tackle this issue.
In a recent blog post, the company introduced "Correction," a feature aimed at automatically rectifying inaccuracies found in AI-generated text. The tool begins by identifying text segments containing factual errors through a process called "Groundedness Detection," a feature engineered to pinpoint content that lacks a basis in verified information.
Once an error is flagged, Correction fact-checks the information against a reliable source, such as a document or an uploaded transcript. Groundedness Detection, launched earlier this year, is comparable to functionality in Google's Vertex AI, which lets users validate model outputs against third-party data or Google Search results.
Correction is integrated into Microsoft's Azure AI Content Safety API, which is currently in preview. It can be used alongside various AI models, including Meta's Llama and OpenAI's GPT-4o. Despite these advancements, experts caution that tools like Correction may not fully address the underlying causes of hallucinations.
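To make the workflow concrete, the sketch below assembles a request body of the shape a groundedness-detection call might take: generated text paired with grounding sources to check it against. This is an illustrative sketch only; the field names and values here are assumptions modeled on Azure AI Content Safety's preview documentation, not a verified client for the shipped API.

```python
def build_groundedness_request(text, grounding_sources, task="Summarization"):
    """Assemble a hypothetical request body for a groundedness check.

    Assumed fields (not guaranteed to match the production API):
      - text: the AI-generated text to verify
      - groundingSources: reference documents the text must be grounded in
      - reasoning: when True, the service could explain or correct errors
    """
    return {
        "domain": "Generic",
        "task": task,
        "text": text,
        "groundingSources": list(grounding_sources),
        "reasoning": False,
    }

# Example: check a generated claim against a source transcript.
payload = build_groundedness_request(
    "The report was filed in 2021.",
    ["The quarterly report was filed in March 2023."],
)
```

In a real integration, a payload like this would be sent over HTTPS to the Content Safety resource endpoint with the subscription key in a request header; consult Microsoft's current API reference for the exact route and authentication scheme.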
Researcher Os Keyes from the University of Washington commented on the initiative, noting that while it may mitigate certain issues, it could also introduce new challenges.