
Research

Correcting Hallucinations in Large Language Models

In this blog post, we share the results of our initial experiments aimed at correcting hallucinations generated by Large Language Models (LLMs). Our focus is on the open-book setting, which encompasses tasks such as summarization and Retrieval-Augmented Generation (RAG).

Utkarsh Jain, Suleman Kazi, Ofer Mendelevitch