

GPT-4o and Gemini-1.5-Flash are fast and cheap, but hallucinate more

Featuring multilinguality, an unlimited context window, and calibration, the Hughes Hallucination Evaluation Model (HHEM) v2 is a major upgrade from v1

Why a model's size does not necessarily determine how likely it is to hallucinate

A new LLM achieves 0% hallucinations and is set to revolutionize RAG

See how Anthropic’s new Claude 3 LLM hallucinates compared to other foundation models in the Hughes Hallucination Evaluation Model (HHEM)

See how Google Gemma hallucinates compared to other foundation models in the Hughes Hallucination Evaluation Model (HHEM)

See how Phi-2 compares to Mixtral 8x7B and Titan Express in the Hughes Hallucination Evaluation Model (HHEM)

Vectara launches an open-source Hallucination Evaluation Model (HEM) that provides a FICO-like score for grading how often a generative LLM hallucinates in Retrieval Augmented Generation (RAG) systems.
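
Because the model is published openly on Hugging Face, one quick way to try it is through the sentence-transformers CrossEncoder interface, as the v1 model card describes. The sketch below assumes the `vectara/hallucination_evaluation_model` model ID and that the returned score lies in [0, 1], with higher meaning more factually consistent; newer HHEM releases may expose a different interface, so check the current model card.

```python
# Minimal sketch: scoring a (source, summary) pair for factual consistency
# with the open-source hallucination evaluation model. Assumes the
# CrossEncoder usage from the v1 model card; later releases may differ.
from sentence_transformers import CrossEncoder

model = CrossEncoder("vectara/hallucination_evaluation_model")

# Each pair is (source passage, generated summary).
pairs = [
    ("A man walks into a bar and buys a drink.",
     "A man walks into a bar, buys a drink, and then robs the bartender."),
]

# predict() returns one score per pair; a lower score suggests the summary
# adds claims unsupported by the source, i.e. a likely hallucination.
scores = model.predict(pairs)
print(scores)
```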
