├── README.md
├── Visualizing convolutional input
│   └── ostrich.png
├── Visualizing saliency maps
│   ├── original.png
│   ├── saliency_map.png
│   ├── saliency_map_filter.png
│   ├── saliency_map_gray.png
│   ├── saliency_map_smooth.png
│   └── saliency_map_smooth_filter.png
└── Visualizing+image+classification+models+using+Saliency+Maps+and+Optimizing+Image.ipynb

/README.md:
--------------------------------------------------------------------------------
## Visualising Image Classification Models and Saliency Maps

This project is based on the paper *Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps* by Karen Simonyan, Andrea Vedaldi and Andrew Zisserman, Visual Geometry Group, University of Oxford (https://arxiv.org/pdf/1312.6034.pdf).

The single notebook provided here is a from-scratch Keras implementation of the above paper.

The paper and the project are divided into two parts:
1) Visualizing convolutional model input
2) Visualizing saliency maps

**Visualizing convolutional model input**

Here we look at the technique for visualising the class models learnt by an image classification ConvNet. Given a trained classification ConvNet and a class of interest, the visualisation method consists of numerically generating an image that is representative of that class in terms of the ConvNet's class scoring model. We use a pretrained VGG16 network to carry out this technique. In other words, it shows which features the network expects in an input image in order to maximise the score of the chosen output node (a short code sketch of this procedure is given below, after the saliency-map description).

What an ostrich looks like:

![image](https://user-images.githubusercontent.com/20341653/43385196-b81e1e4e-93fd-11e8-9750-03ff140715a0.png)

Visualization expected by the output node:

![ostrich](https://user-images.githubusercontent.com/20341653/43384947-1da9357e-93fd-11e8-9363-89ed7eccc720.png)


**Visualizing saliency maps**

For saliency maps, we calculate the derivative of the output of the softmax class node with respect to the input image. Here the category of the image is known beforehand; we only need to find out how the output score changes with respect to each input pixel. This tells us which parts of the image are most discriminative for that class.
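
As a rough illustration of the first part (class model visualisation), here is a minimal sketch of gradient ascent on the input image, assuming TensorFlow 2.x Keras. The class index (9, "ostrich" in ImageNet), the step count, the step size and the L2 weight are illustrative choices, not the exact settings used in the notebook.

```python
# Minimal sketch: class model visualisation by gradient ascent on the input image.
# Note: the paper maximises the unnormalised class score (pre-softmax); the stock
# Keras VGG16 head outputs softmax probabilities, which is used here for brevity.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import VGG16

model = VGG16(weights="imagenet", include_top=True)

class_index = 9        # ImageNet class 9 = "ostrich" (illustrative choice)
l2_reg = 0.01          # weight of the L2 penalty on the image
step_size = 1.0
num_steps = 200

# Start from a grey (zero) image and repeatedly follow the gradient of the
# class score with respect to the image itself.
img = tf.Variable(tf.zeros((1, 224, 224, 3)))

for _ in range(num_steps):
    with tf.GradientTape() as tape:
        score = model(img)[0, class_index]
        objective = score - l2_reg * tf.reduce_sum(tf.square(img))
    grad = tape.gradient(objective, img)
    grad /= tf.norm(grad) + 1e-8           # normalise for stable step sizes
    img.assign_add(step_size * grad)       # gradient *ascent* on the objective

# Rescale to 0-255 for display.
result = img.numpy()[0]
result = (result - result.min()) / (result.max() - result.min() + 1e-8) * 255.0
result = result.astype(np.uint8)
```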
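
The saliency map itself needs only a single backward pass. Below is a minimal sketch, again assuming TensorFlow 2.x Keras; `input.jpg` is a placeholder path, and the channel-wise max follows the paper's recipe. The exact pre- and post-processing in the notebook may differ.

```python
# Minimal sketch: image-specific saliency map via one backward pass.
# "input.jpg" is a placeholder path, not a file from this repository.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from tensorflow.keras.preprocessing import image as keras_image

model = VGG16(weights="imagenet", include_top=True)

img = keras_image.load_img("input.jpg", target_size=(224, 224))
x = tf.convert_to_tensor(preprocess_input(np.expand_dims(keras_image.img_to_array(img), 0)))

with tf.GradientTape() as tape:
    tape.watch(x)                                 # x is a constant tensor, so watch it
    scores = model(x)
    class_index = int(tf.argmax(scores[0]))       # category assumed known / predicted
    class_score = scores[0, class_index]

# d(class score) / d(input pixels): one value per pixel and colour channel.
grads = tape.gradient(class_score, x)[0]

# Saliency = max of the absolute gradient over the colour channels (as in the paper),
# rescaled to [0, 1] for display.
saliency = tf.reduce_max(tf.abs(grads), axis=-1).numpy()
saliency = (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-8)
```

The smoothed variants stored in the `Visualizing saliency maps` folder are obtained by filtering a map like this one; the exact filter is defined in the notebook.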

Original image:

![original](https://user-images.githubusercontent.com/20341653/43388907-04452e3e-9408-11e8-8f75-ff3c12515dda.png)

Saliency map:

![saliency_map](https://user-images.githubusercontent.com/20341653/43389128-91137f46-9408-11e8-91b1-c4f84a6a33d7.png)

Saliency map after smoothing:

![saliency_map](https://user-images.githubusercontent.com/20341653/43389128-91137f46-9408-11e8-91b1-c4f84a6a33d7.png)


--------------------------------------------------------------------------------
/Visualizing convolutional input/ostrich.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/saketd403/Visualising-Image-Classification-Models-and-Saliency-Maps/ed544d4a8aedab9c9a5a6521785c2aeabefb734f/Visualizing convolutional input/ostrich.png
--------------------------------------------------------------------------------
/Visualizing saliency maps/original.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/saketd403/Visualising-Image-Classification-Models-and-Saliency-Maps/ed544d4a8aedab9c9a5a6521785c2aeabefb734f/Visualizing saliency maps/original.png
--------------------------------------------------------------------------------
/Visualizing saliency maps/saliency_map.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/saketd403/Visualising-Image-Classification-Models-and-Saliency-Maps/ed544d4a8aedab9c9a5a6521785c2aeabefb734f/Visualizing saliency maps/saliency_map.png
--------------------------------------------------------------------------------
/Visualizing saliency maps/saliency_map_filter.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/saketd403/Visualising-Image-Classification-Models-and-Saliency-Maps/ed544d4a8aedab9c9a5a6521785c2aeabefb734f/Visualizing saliency maps/saliency_map_filter.png
--------------------------------------------------------------------------------
/Visualizing saliency maps/saliency_map_gray.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/saketd403/Visualising-Image-Classification-Models-and-Saliency-Maps/ed544d4a8aedab9c9a5a6521785c2aeabefb734f/Visualizing saliency maps/saliency_map_gray.png
--------------------------------------------------------------------------------
/Visualizing saliency maps/saliency_map_smooth.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/saketd403/Visualising-Image-Classification-Models-and-Saliency-Maps/ed544d4a8aedab9c9a5a6521785c2aeabefb734f/Visualizing saliency maps/saliency_map_smooth.png
--------------------------------------------------------------------------------
/Visualizing saliency maps/saliency_map_smooth_filter.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/saketd403/Visualising-Image-Classification-Models-and-Saliency-Maps/ed544d4a8aedab9c9a5a6521785c2aeabefb734f/Visualizing saliency maps/saliency_map_smooth_filter.png
--------------------------------------------------------------------------------