Explainable artificial intelligence, also known as interpretable AI or XAI, is a set of processes and methods that help human users comprehend and trust the output of an AI model. As AI algorithms grow more complex, it becomes harder for people to retrace how a given result was produced; when that reasoning cannot be reconstructed, the model becomes a "black box" that is effectively impossible to interpret. Black box models are created directly from data, and even the engineers or data scientists who built them cannot fully explain what happens inside the model or how it arrived at a particular result.
As machine learning and AI techniques see wider use, unexplainable "black box" models make results difficult to verify and reproduce. Explainable AI is used to describe an AI model, its expected impact, and its potential biases, helping to characterize the model's accuracy, fairness, and transparency as well as the outcomes of the decisions it informs. Explainable AI is therefore central to responsible AI development, helping to build trust and confidence in AI models and allowing human users to understand how a model reached its final output.
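A common way to open up such a black box is a post-hoc, model-agnostic explanation technique. The sketch below uses scikit-learn's permutation importance to show which input features a trained model actually relies on; the dataset and model here are illustrative choices for the example, not drawn from any specific XAI deployment.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train an opaque ("black box") model on a standard tabular dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how much
# the model's score drops. Features whose shuffling hurts the score most are
# the ones the model relies on, giving a global, model-agnostic explanation.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda pair: pair[1], reverse=True)[:5]:
    print(f"{name}: {score:.3f}")
```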
There are multiple approaches to implementing explainable AI. The U.S. National Institute of Standards and Technology (NIST) defines four principles driving explainable AI:
- Explanation—Systems deliver accompanying evidence or reason(s) for all outputs.
- Meaningful—Systems provide explanations that are understandable to individual users.
- Explanation accuracy—The explanation correctly reflects the system’s process for generating the output.
- Knowledge limits—The system operates only under the conditions for which it was designed or when its output has achieved sufficient confidence levels.
NIST states that explanations can range from simple to complex and that they depend on the consumer in question. NIST illustrates different explanation types using five sample explainability categories: user benefit, societal acceptance, regulatory and compliance, system development, and owner benefit.
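As an illustration of how the Explanation and Knowledge limits principles might be applied in practice, the following sketch wraps a predictive model so that every output carries supporting evidence and low-confidence outputs are withheld. The wrapper, the confidence threshold, and the model and explainer interfaces are assumptions made for this example, not part of the NIST guidance.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExplainedOutput:
    prediction: Optional[str]
    confidence: float
    evidence: dict          # feature -> contribution; the "accompanying evidence"
    within_limits: bool     # False means the system abstained

CONFIDENCE_THRESHOLD = 0.8  # illustrative value, not prescribed by NIST

def predict_with_explanation(model, explainer, features: dict) -> ExplainedOutput:
    """Return a prediction only when the model is sufficiently confident,
    and always attach the evidence behind it."""
    label, confidence = model(features)   # assumed interface: returns (label, probability)
    evidence = explainer(features)        # assumed interface: per-feature contributions
    if confidence < CONFIDENCE_THRESHOLD:
        # Knowledge limits: decline to output a prediction the system cannot support.
        return ExplainedOutput(None, confidence, evidence, within_limits=False)
    # Explanation: the output is delivered together with its supporting evidence.
    return ExplainedOutput(label, confidence, evidence, within_limits=True)
```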
Explainable AI helps developers ensure that the system is working as expected; this might be necessary to meet regulatory standards, or it might be important in allowing those affected by a decision to challenge or change that outcome.
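One concrete way to let people contest a decision is a counterfactual explanation: the smallest change to an input that would have changed the outcome. The sketch below searches for such a change against an invented credit-scoring model; the data, feature meanings, and single-feature search strategy are hypothetical and kept deliberately simple.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative "credit scoring" model; the data and feature meanings are made up.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))                     # columns: [income, debt]
y = (X[:, 0] - X[:, 1] > 0).astype(int)           # approve when income outweighs debt
model = LogisticRegression().fit(X, y)

def counterfactual_income(applicant, step=0.05, max_steps=200):
    """Raise income until the decision flips, yielding a contestable explanation:
    'you were declined, but an income of at least N would have been approved'."""
    candidate = applicant.copy()
    for _ in range(max_steps):
        if model.predict(candidate.reshape(1, -1))[0] == 1:
            return candidate
        candidate[0] += step
    return None

declined = np.array([-1.0, 0.5])                  # low income, moderate debt
flipped = counterfactual_income(declined)
print("original decision:", model.predict(declined.reshape(1, -1))[0])
print("counterfactual input:", flipped)
```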
There are many research efforts into explainable AI by academic institutions, the private sector, and governments. DARPA, the Defense Advanced Research Projects Agency of the U.S. Department of Defense, launched its Explainable Artificial Intelligence (XAI) program in August 2016.
A number of companies, including Google, offer tools that help developers build explainable AI processes into their models.