ChatGPT and other large language models (LLMs) often produce inaccurate or fabricated information, known as "hallucinations."
"ChatGPT might give you articles by an author that usually writes about your topic, or even identify a journal that published on your topic, but the title, pages numbers [sic], and dates are completely fictional. This is because ChatGPT [3.5] is not connected to the internet, so has no way of identifying actual sources."
The videos and articles below explain what hallucinations are, why LLMs hallucinate, and how to minimize hallucinations through prompt engineering.
You will find more resources about prompt engineering and examples of good prompt engineering in this Guide under the tab "How to Write a Prompt for ChatGPT and other AI Large Language Models."
Attribution: The quotation was provided by the University of Arizona Libraries, licensed under a Creative Commons Attribution 4.0 International License.
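If you use an LLM through code rather than through the chat interface, the same prompt-engineering ideas described in the resources on this page still apply. Below is a minimal sketch in Python, assuming the OpenAI Python client (v1.x) and an example model name that you may need to swap for whatever model you actually have access to. It illustrates two common techniques for reducing hallucinations: a system prompt that instructs the model to admit uncertainty instead of inventing sources, and a lower temperature setting, which tends to reduce speculative answers.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# System prompt that discourages invented citations and encourages admitting uncertainty.
system_prompt = (
    "You are a careful research assistant. "
    "If you are not certain that a fact, citation, or source is real, say so explicitly "
    "rather than guessing. Do not invent titles, authors, page numbers, or dates."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name; substitute your own
    messages=[
        {"role": "system", "content": system_prompt},
        {
            "role": "user",
            "content": (
                "Summarize the main arguments for and against school uniforms, "
                "and flag any claim you cannot verify."
            ),
        },
    ],
    temperature=0.2,  # lower temperature tends to produce more conservative answers
)

print(response.choices[0].message.content)
```

This is only a sketch of the general approach; no prompt or setting eliminates hallucinations entirely, so you should still verify any sources or facts the model provides.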
IBM expert Martin Keen explains what an LLM (AI) hallucination is, why LLMs hallucinate, and how to minimize hallucinations. [YouTube: 9 min, 37 s]
Martin Keen from IBM Technology explains why large language models (LLMs) provide answers that are "sometimes wrong or just plain weird." He demonstrates prompting techniques to improve the answers. [YouTube: 7 min, 36 s]
This blog post outlines the research and writing tasks for which ChatGPT is well suited and those for which it is not. For example, do not ask ChatGPT for a list of sources or to write your paper; instead, use it to generate ideas for a particular topic or to revise an awkward sentence.