Artificial intelligence hallucination

Artificial intelligence hallucination occurs when an AI model produces output that is plausible but incorrect, often as a result of limitations in its training data.

Other attributes

Also Known As: AI hallucination, artificial hallucination
Industry: Generative AI
Parent Classification: Artificial Intelligence (AI)
Related Industries: Natural language processing (NLP), Computer Vision
Overview

Artificial intelligence hallucination is a phenomenon in which AI models produce results that differ from what was anticipated. AI hallucinations include large language models returning factually incorrect, irrelevant, or nonsensical responses. With the increased use of generative AI systems and models, AI hallucinations have become a significant area of concern, hindering performance and raising safety concerns. For example, AI models are finding use in medical applications, where hallucinations pose a risk to patient health. Hallucinations can also lead to privacy violations: there have been instances where language models returned sensitive personal information drawn from their training data. There are many active efforts to address AI hallucinations and mitigate their impact.

The term "hallucination" in the context of AI was first used in a March 2000 computer vision paper by S. Baker and T. Kanade, titled "Hallucinating Faces." When first used, it carried a more positive meaning, describing a capability to be taken advantage of in computer vision, and some AI models are still trained to intentionally produce outputs unrelated to any real-world input; for example, text-to-art generators produce novel images not based on real-world data. More recent work, however, typically uses hallucination to mean a specific type of error in language model responses, image captioning, or object detection.

AI hallucinations can occur for a number of reasons:

  • Adversarial examples: input data deliberately crafted to trick an AI model into misclassification (see the sketch after this list)
  • Decoding errors in the transformer architecture that underpins most generative AI models
  • Inputs that do not match any data the algorithm was trained on
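
The adversarial-example cause is the easiest to make concrete. Below is a minimal sketch of the fast gradient sign method (FGSM), one well-known way such inputs are constructed; the `model`, `image`, and `label` objects are assumed placeholders for any differentiable PyTorch image classifier and its inputs, and are not part of the original text.

```python
# Minimal FGSM sketch (illustrative only, assuming a PyTorch classifier).
import torch
import torch.nn.functional as F

def fgsm_example(model, image, label, epsilon=0.03):
    """Perturb `image` so `model` is more likely to misclassify it.

    image: batched input tensor with values in [0, 1]
    label: tensor of true class indices
    epsilon: perturbation size (assumed value, tune per model)
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel in the direction that increases the loss
    # for the true label, then clamp back to the valid range.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()
```

A model that confidently assigns the perturbed image to the wrong class is hallucinating in the misclassification sense described above, even though the change is often imperceptible to a human.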

AI hallucinations can be divided into intrinsic and extrinsic categories, as illustrated below. Intrinsic hallucinations occur when the generated output contradicts the source content. Extrinsic hallucinations occur when the generated output cannot be verified from the source content.
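
As an illustration, consider a summarization model; the source sentence and both outputs below are hypothetical and are only meant to show how the two categories differ:

```python
# Hypothetical summarization example of the intrinsic/extrinsic split.
source = "The drug was approved in 2021 after a successful phase-3 trial."

# Intrinsic hallucination: directly contradicts the source.
intrinsic_output = "The drug was approved in 2019."

# Extrinsic hallucination: cannot be verified from the source either way.
extrinsic_output = "The drug was developed by a team in Basel."
```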


Further Resources

  • Artificial Hallucinations in ChatGPT: Implications in Scientific Writing. Hussam Alkaissi, Samy I McFarlane. Web, February 19, 2023. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9939079/
  • Exploring the Boundaries of Reality: Investigating the Phenomenon of Artificial Intelligence Hallucination in Scientific Writing Through ChatGPT References. Sai Anirudh Athaluri, Sandeep Varma Manthena, V S R Krishna Manoj Kesapragada, Vineel Yarlagadda, Tirth Dave, Rama Tulasi Siri Duddumpudi. April 11, 2023. https://assets.cureus.com/uploads/original_article/pdf/148687/20230411-12310-1qghs0j.pdf
  • Retrieval Augmentation Reduces Hallucination in Conversation. Kurt Shuster, Spencer Poff, Moya Chen, Douwe Kiela, Jason Weston. April 15, 2021. https://arxiv.org/abs/2104.07567
  • Survey of Hallucination in Natural Language Generation. Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Yejin Bang, Wenliang Dai, Andrea Madotto, Pascale Fung. February 8, 2022. https://arxiv.org/abs/2202.03629
