Vectara launches the open-source Hughes Hallucination Evaluation Model (HHEM) and uses it to compare hallucination rates across top LLMs, including models from OpenAI and Cohere, Google's PaLM, Anthropic's Claude 2, and more.
Blog - page 6
Automating Hallucination Detection: Introducing the Vectara Factual Consistency Score
Vectara’s Factual Consistency Score (FCS) offers a reliable solution for detecting hallucinations in RAG. It’s a calibrated score that helps developers evaluate hallucinations automatically. Customers can use it to measure and improve response quality, and it can also be shown to end-users of RAG applications as a visual cue of response quality.
Ingesting Data into Vectara Using PyAirbyte
How to run custom transformations on data from any Airbyte data source ingested into Vectara
User Interfaces for AI Applications
Discover Vectara’s Four Principles for UI Development in the Age of AI.
HHEM | Flash Update: Anthropic Claude 3
See how Anthropic’s new Claude 3 LLM hallucinates compared to other foundation models in the Hughes Hallucination Evaluation Model (HHEM)
Vectara Completes Penetration Testing
Vectara, the Trusted GenAI Platform for All Builders, recently completed its annual penetration testing as part of its ongoing security commitment to its customers.
Introducing New UI Tools for Vectara Chat – Create-UI and React-Chatbot
Use Create-UI for a comprehensive, full-screen chat application integrated with Vectara’s chat capabilities. For more compact solutions, embed React-Chatbot within your React application, giving users a sleek, minimally intrusive chatbot widget.
Vectara’s New Personal API Keys
Why personal API keys matter and how to use them
Security Guidance for All Authentication Methods
At Vectara, we recognize the importance of robust security in all of our authentication methods. While OAuth remains the gold standard in security with features like automated expiry and the JWT token flow, we understand it’s not always feasible for every user or scenario.
HHEM | Flash Update: Google Gemma
See how Google Gemma hallucinates compared to other foundation models in the Hughes Hallucination Evaluation Model (HHEM)