We propose a technique for producing "visual explanations" for decisions from a large class of Convolutional Neural Network (CNN)-based models, making them more transparent.