
Grounding

Quick Answer

Providing an LLM with factual reference documents to reduce hallucination and improve accuracy.

Grounding means anchoring the model's outputs to specific source documents or data. Instead of relying solely on its training data, the model generates responses from the context it is given. This is the core principle behind retrieval-augmented generation (RAG). Grounding substantially reduces hallucination because the model can be instructed to support each claim with a specific passage. It also lets the model work with real-time data, proprietary information, and knowledge outside its training set. Effective grounding requires careful document retrieval and clear instructions for referencing sources.
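As a minimal sketch of this retrieve-then-ground flow: fetch the most relevant passages, then assemble a prompt that restricts the model to those passages and asks for citations. Everything here is illustrative (the sample corpus, the naive keyword-overlap retriever, and the function names are assumptions, not any particular library's API); a real system would use embedding-based retrieval.

```python
# Illustrative grounding sketch. The corpus, retriever, and prompt
# template below are hypothetical; production RAG systems typically
# retrieve via vector similarity rather than keyword overlap.

CORPUS = {
    "doc1": "The 2024 annual report states revenue grew 12% year over year.",
    "doc2": "The onboarding guide says new hires must complete security training.",
    "doc3": "The style guide requires all dates in ISO 8601 format.",
}

def retrieve(query: str, corpus: dict, k: int = 2) -> list:
    """Rank documents by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(query: str, passages: list) -> str:
    """Assemble retrieved context plus instructions to answer only from it."""
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in passages)
    return (
        "Answer using ONLY the sources below. Cite the source id in "
        "brackets for each claim. If the sources do not contain the "
        "answer, say you cannot answer.\n\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )

passages = retrieve("How much did revenue grow?", CORPUS)
prompt = build_grounded_prompt("How much did revenue grow?", passages)
```

The explicit "ONLY the sources below" instruction and the per-claim citation requirement are what make the output auditable: a cited source id can be checked against the retrieved passage.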

Last verified: 2026-04-08
