AI models frequently ‘hallucinate’ on legal queries, study finds

Generative artificial intelligence (AI) models frequently produce false legal information, with so-called "hallucinations" occurring between 69 percent and 88 percent of the time, according to a recent study. Large language models (LLMs) — generative AI models, like ChatGPT, that are trained to understand and produce human language content — have previously been known to "hallucinate" …