Basic CAM and Grad-CAM Implementation on MNIST

Made to go along with the Grad-CAM and Basic CNN Interpretability blog post.

CAM

Note that CAM requires a very specific architecture, namely img → conv → max pooling → conv → max pooling → global average pooling → fully connected.
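As a concrete illustration, here is a minimal PyTorch sketch of that architecture and the CAM computation. The class name `CAMNet`, the helper `compute_cam`, and all layer/channel sizes are assumptions for illustration, not the code from the accompanying notebook.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical CAM-compatible network (sizes are illustrative, not the notebook's).
class CAMNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        self.fc = nn.Linear(32, num_classes)  # one weight per feature map, per class

    def features(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)  # conv -> max pooling
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)  # conv -> max pooling
        return x                                    # (N, 32, 7, 7) for 28x28 MNIST

    def forward(self, x):
        feats = self.features(x)
        pooled = feats.mean(dim=(2, 3))             # global average pooling
        return self.fc(pooled)                      # fully connected

def compute_cam(model, x, target_class):
    """CAM for one image: weight each final feature map by the fc weight of the target class."""
    feats = model.features(x)                # (1, C, H, W)
    weights = model.fc.weight[target_class]  # (C,)
    cam = (weights[:, None, None] * feats[0]).sum(dim=0)
    cam = F.relu(cam)
    cam = cam / (cam.max() + 1e-8)           # normalize to [0, 1] for display
    return cam.detach()
```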

Official CAM Implementation

Grad-CAM and "Guided" Grad-CAM

Note that the model no longer needs the global average pooling layer; see the sketch below.
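For contrast, here is a sketch of an ordinary MNIST CNN that Grad-CAM can handle: no global average pooling, just a flattened feature map followed by fully connected layers. The class name `PlainNet` and the layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical Grad-CAM-friendly network: no global average pooling required.
class PlainNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        self.fc1 = nn.Linear(32 * 7 * 7, 128)
        self.fc2 = nn.Linear(128, num_classes)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)  # final conv features, (N, 32, 7, 7)
        x = torch.flatten(x, 1)
        x = F.relu(self.fc1(x))
        return self.fc2(x)
```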

Warning 1: This is not the same implementation as in the paper. In the paper, the $\alpha$ terms are computed from the gradient of $y^c$ *before* the softmax with respect to each value in the output of the final conv layer. I found that using $y^c$ *after* the softmax worked better. I believe this is because, when backpropagating, I set the output gradient to 0 for all classes and 1 for the target class. Without the softmax, however, the model's outputs are far from 0 and 1 for the non-target and target classes. I think (though I am not sure) that this causes most values in the final Grad-CAM output to be 0 or less, since the typical output for the target class is much greater than 1 and the outputs for the other classes are much less than 0, so the gradients are negative. After applying the ReLU to the Grad-CAM, some images came out entirely 0. With the softmax applied, the final outputs look much better. Perhaps some sort of normalization within the network would help.
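Here is a rough sketch of the variant described in Warning 1, taking the gradient of the softmax output rather than the pre-softmax score. It assumes the hypothetical `PlainNet` from the sketch above and re-runs its conv stack explicitly so the feature maps can be held onto.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, x, target_class):
    """Grad-CAM using the softmax output for the target class (the Warning 1 variant)."""
    model.eval()
    model.zero_grad()

    # Re-run the conv stack so we can keep a handle on the final feature maps.
    feats = F.max_pool2d(F.relu(model.conv1(x)), 2)
    feats = F.max_pool2d(F.relu(model.conv2(feats)), 2)  # (1, C, H, W)
    feats.retain_grad()

    flat = torch.flatten(feats, 1)
    logits = model.fc2(F.relu(model.fc1(flat)))
    probs = F.softmax(logits, dim=1)

    # Backpropagate from the softmax probability of the target class only.
    probs[0, target_class].backward()

    alphas = feats.grad[0].mean(dim=(1, 2))               # one weight per channel
    cam = F.relu((alphas[:, None, None] * feats[0]).sum(dim=0))
    cam = cam / (cam.max() + 1e-8)
    return cam.detach()
```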

Warning 2: I'm not actually implementing guided backprop / guided Grad-CAM here. I am only computing the gradients of the output predictions with respect to the inputs; I am not applying a ReLU to the gradients as they propagate backwards, which is what guided backprop does. For this simple MNIST example I found that strict guided backprop wasn't necessary.
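Correspondingly, here is a sketch of the plain input-gradient computation (no ReLU applied during backprop) and of combining it with the upsampled Grad-CAM map in the spirit of guided Grad-CAM. The function names and the use of the hypothetical `PlainNet` and `grad_cam` sketches above are assumptions.

```python
import torch
import torch.nn.functional as F

def input_gradients(model, x, target_class):
    """Plain gradients of the target-class probability w.r.t. the input pixels.
    Not guided backprop: no ReLU is applied to gradients on the way back."""
    model.eval()
    model.zero_grad()
    x = x.clone().requires_grad_(True)
    probs = F.softmax(model(x), dim=1)
    probs[0, target_class].backward()
    return x.grad[0].detach()                   # (1, 28, 28)

def pseudo_guided_grad_cam(model, x, target_class):
    """Multiply the input-gradient map by the upsampled Grad-CAM heatmap,
    mimicking guided Grad-CAM but with plain gradients instead of guided backprop."""
    saliency = input_gradients(model, x, target_class)     # (1, 28, 28)
    cam = grad_cam(model, x, target_class)                 # (7, 7)
    cam_up = F.interpolate(cam[None, None], size=x.shape[-2:],
                           mode="bilinear", align_corners=False)[0]  # (1, 28, 28)
    return saliency * cam_up
```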

If you want to see other implementations that helped me, look here: