Artificial Intelligence (AI)

What are hallucinations?

ChatGPT and other large language models (LLMs) often produce plausible-sounding but inaccurate or fabricated information, known as "hallucinations."

"ChatGPT might give you articles by an author that usually writes about your topic, or even identify a journal that published on your topic, but the title, pages numbers [sic], and dates are completely fictional. This is because ChatGPT [3.5] is not connected to the internet, so has no way of identifying actual sources."

The videos and articles below explain what hallucinations are, why LLMs hallucinate, and how to minimize hallucinations through prompt engineering (for example, by instructing the model to answer only from a source you provide, or to say when it does not know).

You will find more resources on prompt engineering, including examples of well-written prompts, in this Guide under the tab "How to Write a Prompt for ChatGPT and other AI Large Language Models."

Attribution: The quotation was provided by the University of Arizona Libraries, licensed under a Creative Commons Attribution 4.0 International License.
