- Hallucinations - If a chatbot doesn't have the necessary data to answer a question, it may simply make up an answer, which is called a hallucination. For example, if you prompt a chatbot for books and articles on a particular topic, it may provide citations that sound plausible but aren't real.* The only way to know whether a source generated by AI actually exists is to search for it yourself, so you are better off using the Library databases to find sources.
- Attribution - Chatbots generate responses by synthesizing enormous amounts of training data, which makes it impossible to trace information back to its original source.
- Replication - Because chatbots can produce a different response each time they are prompted, there is no reliable way to reproduce the same results. This is particularly problematic for scientific research, where reproducibility matters.
- Currency - Access to up-to-date information varies from chatbot to chatbot, which makes it difficult to know whether the information you receive is current.
- Bias - Chatbots can provide biased information when their training data overrepresents specific demographics.
Want research help from a human? Ask a librarian!
* Further reading on misuse of AI and fake citations:
- "Misinformation expert cites non-existent sources in Minnesota deep fake case"
https://minnesotareformer.com/2024/11/20/misinformation-expert-cites-non-existent-sources-in-minnesota-deep-fake-case/
- "Judge rebukes Stanford misinformation expert for using ChatGPT to draft testimony"
https://minnesotareformer.com/2025/01/14/judge-rebukes-stanford-misinformation-expert-for-using-chatgpt-to-draft-testimony/