
Vectara recognized for model development and AI knowledge management
Today, we’re happy to announce that Vectara’s hybrid search capabilities have just gotten even stronger: you can now update which metadata fields can be filtered on and what their data types are. This blog walks through some common use cases and shows how you can use this new feature.
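To make the idea concrete, here is a minimal sketch in Python of declaring filterable metadata fields with their types and composing a metadata filter expression that hybrid search could apply at query time. The payload shape, field names, and helper function are illustrative assumptions for this post, not Vectara’s exact API.

```python
# Hypothetical sketch: which metadata fields are filterable, and their types.
# Field names, levels, and the payload layout below are assumptions.
filter_attributes = [
    {"name": "lang", "level": "document", "type": "text"},
    {"name": "year", "level": "document", "type": "integer"},
]

# A SQL-like metadata filter a hybrid-search query could apply,
# once the fields above have been marked filterable.
metadata_filter = "doc.lang = 'en' AND doc.year >= 2023"

def build_update_payload(corpus_id: int, attributes: list) -> dict:
    """Assemble an (assumed) update payload for a corpus's filterable fields."""
    return {"corpus_id": corpus_id, "filter_attributes": attributes}

payload = build_update_payload(42, filter_attributes)
```

The point of the update feature is that `filter_attributes` is no longer fixed at corpus creation — you can add a field like `year` later and immediately start filtering on it.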

AI has raised the stakes for customer communications with your brand. Are you hitting the mark, or do you want to make sure your strategy isn’t a failure to launch? Learn what makes a smart chatbot.

Vectara launches open-source Hallucination Evaluation Model (HEM) that provides a FICO-like score for grading how often a generative LLM hallucinates in Retrieval Augmented Generation (RAG) systems.

Vectara launches the open-source Hughes Hallucination Evaluation Model (HHEM) and uses it to compare hallucination rates across top LLMs, including OpenAI, Cohere, PaLM, Anthropic’s Claude 2, and more.

The top large language models, along with recommendations for when to use each based on needs such as API access, tunability, or full hosting.

In RAG, retrieving the right facts from your data is crucial, and choosing the right embedding model to power your retrieval matters!

AI-powered search is everywhere – new projects, products and companies are appearing to solve age-old challenges. But how ‘good’ is an AI-powered search engine – and how can we measure this?

Hybrid search can vastly improve the in-app experience, allowing users to find what they are looking for quickly, and lets you tune between semantic and lexical search.

Learn about the benefits of Vectara’s new embedding model, “Boomerang,” including how our version of Retrieval Augmented Generation (RAG), called Grounded Generation, is smarter than other systems.
