Glossary · AI & Development

Hallucination

When an AI model generates output that is plausible but factually incorrect or fabricated.

In detail

Hallucination is the failure mode in which a language model generates fluent, confident output containing invented facts, fake citations, made-up function signatures, or wrong numbers. The model is not lying in any intentional sense: it is pattern-matching to what plausible text in that context would look like, even when it has no grounded knowledge to draw on. Hallucination rates vary by task, model, and prompt design, and have decreased substantially with newer models, but they have not been eliminated.

Why it matters for Australian business

For Australian businesses, hallucination is a legal and operational risk. An AI assistant that invents a clause from your terms of service, fabricates a Privacy Act citation, or quotes a non-existent ATO ruling can mislead a customer or expose the business to liability. Mitigations include retrieval-augmented generation (so the model answers from real source documents), structured output validation, human review of consequential outputs, and clear disclosure that AI output requires verification.
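As a rough illustration of structured output validation, the sketch below checks that every passage a model cites actually appears verbatim in the retrieved source documents before the answer is accepted. All names and the JSON output format here are hypothetical, not a specific product's API:

```python
# Minimal sketch: reject a RAG answer whose citations are not grounded
# in the retrieved source documents. The JSON schema is an assumption --
# the model is prompted to reply as:
#   {"answer": "...", "citations": ["exact quote from a source", ...]}
import json


def validate_answer(raw_model_output: str, source_documents: list[str]) -> dict:
    """Parse the model's JSON output and verify each citation is grounded.

    Raises ValueError if the output is malformed or any cited quote
    does not appear verbatim in at least one source document.
    """
    try:
        parsed = json.loads(raw_model_output)
    except json.JSONDecodeError as exc:
        raise ValueError("Model output was not valid JSON") from exc

    if not isinstance(parsed.get("answer"), str) or not isinstance(
        parsed.get("citations"), list
    ):
        raise ValueError("Model output missing required fields")

    # Grounding check: every cited quote must appear in a real source.
    for quote in parsed["citations"]:
        if not any(quote in doc for doc in source_documents):
            raise ValueError(f"Ungrounded citation: {quote!r}")

    return parsed


# Example usage: a grounded answer passes; a fabricated citation is rejected.
docs = ["Refunds are available within 30 days of purchase."]
validate_answer(
    '{"answer": "Refunds within 30 days.", '
    '"citations": ["Refunds are available within 30 days of purchase."]}',
    docs,
)
```

A verbatim-substring check like this is deliberately strict; real pipelines often relax it to fuzzy or embedding-based matching, but the principle is the same: never surface a citation the retrieval layer cannot account for.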



Want to talk through how this applies to your business? Book a free consult.