Is anyone else seeing random hallucinations even after adding RAG, or is it just us?

You are definitely not alone. A lot of teams add RAG expecting hallucinations to disappear, then feel frustrated when the answers still drift or invent details. The uncomfortable truth is that retrieval only helps when the right information is found, ranked properly, and inserted into the prompt in a form the model can actually use.

In many systems, the model is not hallucinating out of nowhere. It is filling gaps left by weak chunking, vague embeddings, stale documents, or noisy retrieval results. If the evidence is partial, the answer will often sound polished but still be wrong in important ways.

The smartest move is to separate retrieval evaluation from generation evaluation. Check whether the correct evidence appeared in the top results before blaming the model. Once you do that, the problem becomes much easier to debug, because you can see whether the issue starts in search, context assembly, or final reasoning.
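A minimal sketch of what "evaluate retrieval separately" can look like in practice: build a small eval set of (question, gold document) pairs and measure how often the gold evidence lands in the top-k results, before ever involving the model. The `retrieve` function here is a hypothetical toy keyword ranker standing in for your real vector store or BM25 call; the doc IDs and questions are made up for illustration.

```python
def retrieve(query, corpus, k=3):
    """Toy stand-in retriever: rank docs by shared-word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return [doc_id for doc_id, _ in scored[:k]]

def recall_at_k(eval_set, corpus, k=3):
    """Fraction of questions whose gold evidence doc appears in the top-k results."""
    hits = 0
    for question, gold_doc_id in eval_set:
        if gold_doc_id in retrieve(question, corpus, k):
            hits += 1
    return hits / len(eval_set)

# Hypothetical corpus and eval set, just to show the shape of the check.
corpus = {
    "doc1": "refund policy allows returns within 30 days",
    "doc2": "shipping times vary by region and carrier",
    "doc3": "warranty covers manufacturing defects for one year",
}
eval_set = [
    ("what is the refund policy", "doc1"),
    ("how long does shipping take", "doc2"),
]

print(recall_at_k(eval_set, corpus, k=1))  # prints 1.0: gold doc is rank 1 both times
```

If recall@k is low, the problem starts in search or chunking, not in the model; if recall@k is high but answers are still wrong, look at context assembly and the generation step.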
