Vectara launches the open source Hughes Hallucination Evaluation Model (HHEM) and uses it to compare hallucination rates across top LLMs, including models from OpenAI, Cohere, Google (PaLM), Anthropic (Claude 2), and more.
The Latest Benchmark Between Vectara, OpenAI and Cohere’s Embedding Models
Vectara’s Boomerang stands out as the optimal choice for production use cases, balancing precision, embedding size, and storage costs effectively.
Introducing Vectara's v2 API
Today, we’re excited to announce that we’re introducing a brand new REST API for Vectara: API v2.
Congratulations to the Winners of the First Ever Built by Vectara Contest
We are so proud of our Built By Vectara Contestants.
Unlocking the State-of-the-Art Reranker: Introducing the Vectara Multilingual Reranker_v1
In the ever-evolving landscape of RAG and information retrieval, the balance of precision, recall, and latency can make or break an application. We are excited to introduce our latest innovation, the Multilingual Reranker_v1. This state-of-the-art reranker significantly enhances the precision of retrieved results across both English and multilingual datasets.
Deep Dive Into Vectara Multilingual Reranker v1, State-of-the-Art Reranker Across 100+ Languages
This blog post focuses on the technical details of Multilingual Reranker v1, which enables state-of-the-art retrieval performance across 100+ languages. The model delivers quality on par with industry leaders like Cohere while surpassing the best open-source models, and it offers blazing-fast inference at minimal cost, with the additional capability of rejecting irrelevant text based on score-based cut-offs.
Vectara Launches 2 Powerful New Generative Capabilities
Today, we’re proud to announce two significant enhancements to our generative response capabilities. These new features aim to significantly improve the developer experience and to give Vectara administrators a better ability to review and analyze the conversations their users have had with the system.
Gen AI Platform Build vs Buy – Part I: Options and Tradeoffs
When building Gen AI applications, you must decide whether to build or buy the RAG infrastructure that powers them. This article helps you make that decision.
How Sankofa was Built with Vectara
Leveraging Vectara’s API to build Sankofa, a browser extension that allows anyone to chat with their web content history
Say Goodbye to Delays: Expedite Your Experience with Stream Query
Discover how Stream Query reduces latency by delivering Gen AI responses in real-time chunks, eliminating the frustration of waiting for the LLM’s full response.