Hallucination

Hallucination is a confident-sounding model output that is not supported by evidence or by the retrieved sources. In production, teams mitigate it with prompt constraints, context selection, request routing, output guardrails, and evaluation suites that catch regressions early. These controls are often paired with retrieval-augmented generation (RAG), so the model grounds its answers in current sources rather than guessing from training data alone.
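
One common guardrail is a post-generation grounding check: verify that each claim in the answer is supported by the retrieved sources before returning it. The sketch below is a minimal illustration of that idea, not a production implementation; the function names, the token-overlap heuristic, and the 0.6 threshold are all hypothetical. Real systems typically replace the overlap heuristic with an entailment (NLI) model.

```python
# Minimal sketch of a groundedness guardrail, assuming a retrieval step has
# already produced source passages. All names and the threshold are
# illustrative, not any particular library's API.
import re


def _tokens(text: str) -> set[str]:
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9']+", text.lower()))


def support_score(sentence: str, sources: list[str]) -> float:
    """Fraction of the sentence's tokens found in the best-matching source."""
    sent = _tokens(sentence)
    if not sent:
        return 1.0  # an empty sentence is trivially supported
    if not sources:
        return 0.0  # nothing retrieved means nothing is supported
    return max(len(sent & _tokens(src)) / len(sent) for src in sources)


def flag_unsupported(answer: str, sources: list[str], threshold: float = 0.6) -> list[str]:
    """Return the answer's sentences whose support falls below the threshold."""
    sentences = re.split(r"(?<=[.!?])\s+", answer.strip())
    return [s for s in sentences if support_score(s, sources) < threshold]


if __name__ == "__main__":
    sources = ["The Eiffel Tower is 330 metres tall and located in Paris."]
    answer = "The Eiffel Tower is 330 metres tall. It was moved to Lyon in 1999."
    for claim in flag_unsupported(answer, sources):
        print("Possible hallucination:", claim)
```

In a pipeline, answers that fail a check like this are typically regenerated, routed to a fallback model, or returned to the user with a caveat.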