Webinar
Introducing the Factual Consistency Score: Your AI Evaluator
Recently, Vectara introduced an evolved version of its popular open-source Hughes Hallucination Evaluation Model (HHEM), which detects hallucinations in popular LLMs and in the responses those systems generate. It’s a calibrated score that helps developers evaluate hallucinations automatically. Customers can use the new feature to measure and improve response quality, and the FCS can also be shown to end users of RAG applications as a visual cue of response quality.
The Factual Consistency Score is Vectara’s integration of the hallucination detection model into its generative AI platform. Machine Learning Team Lead Forrest Bao will show us the new FCS both in the Vectara UI and through the API.
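For API users, the score arrives alongside the generated summary in the query response. The sketch below is a minimal illustration of reading it from a response payload; the exact JSON field names (`summary`, `factualConsistency`, `score`) are assumptions here, so check the Vectara API reference for the authoritative schema.

```python
import json

# Hypothetical response fragment from a Vectara query with summarization
# enabled; field names are assumed for illustration and may differ from
# the actual API schema.
sample_response = json.loads("""
{
  "summary": {
    "text": "Paris is the capital of France.",
    "factualConsistency": {"score": 0.87}
  }
}
""")

# Pull the calibrated Factual Consistency Score out of the summary object.
score = sample_response["summary"]["factualConsistency"]["score"]
print(f"Factual Consistency Score: {score:.2f}")
```

A score near 1.0 indicates the summary is well grounded in the retrieved results, while a low score flags a likely hallucination, so an application might surface the number to end users or gate low-scoring answers.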
Watch Now