Vectara launches the open-source Hughes Hallucination Evaluation Model (HHEM) and uses it to compare hallucination rates across top LLMs, including models from OpenAI, Cohere, Google (PaLM), Anthropic (Claude 2), and more.
Is Semantic Chunking worth the computational cost?
Discover why fixed-size chunking often outperforms semantic chunking in real-world RAG systems. This study explores the trade-offs between simplicity, computational efficiency, and retrieval accuracy, challenging the assumption that semantic chunking is always the superior choice.
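To illustrate the simpler approach discussed above, here is a minimal sketch of a fixed-size chunker with overlap. The function name and parameters are hypothetical and not taken from Vectara's implementation or the study itself.

```python
def fixed_size_chunks(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character chunks with a sliding overlap.

    Overlapping chunks help preserve context that would otherwise be cut
    at a chunk boundary; no semantic analysis is performed.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks
```

Because each chunk is produced by simple slicing, this runs in linear time with no model inference, which is the computational advantage the post weighs against semantic chunking.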
Correcting Hallucinations in Large Language Models
In this blog post, we share the results of our initial experiments aimed at correcting hallucinations generated by Large Language Models (LLMs). Our focus is on the open-book setting, which encompasses tasks such as summarization and Retrieval-Augmented Generation (RAG).
Deep Dive Into Mockingbird: A RAG and Structured Output Focused LLM
In this blog post, we introduce Mockingbird, Vectara’s LLM focused on Retrieval Augmented Generation (RAG) and structured output, and take a technical deep dive into its performance and capabilities.
Top Large Language Models (LLMs)
The top large language models in the summer of 2024, along with recommendations for when to use each based on needs such as API access, tunability, or fully hosted deployment.
Generative AI for Legal Teams
Learn how legal teams can use Generative AI to safely increase efficiency, decrease costs, and improve the effectiveness of paralegal research.
Top Large Language Models (LLMs): GPT-4, LLaMA 2, Mistral 7B, ChatGPT, and More
The top large language models, along with recommendations for when to use each based on needs such as API access, tunability, or fully hosted deployment.
Researchers and Analysts: Enhancing Knowledge and Insights with GenAI-powered Answers
With GenAI-powered hybrid search, researchers and analysts can now draw new insights from massive research archives in ways that were never possible before, retrieving pinpoint-accurate results from thousands of documents and sources, in any language and in an instant.
A Reference Architecture for Grounded Generation
What the current GenAI stack looks like, and the role of emerging GenAI platforms like Vectara.
Fine-Tuning vs Retrieval Augmented Generation
Which option is better for GenAI applications with your data?