
Visualizing What a Neural Network Thinks


We will see how to visualize saliency maps and get an idea of what a neural network considers important in an image.
Neural networks don't really "think", but they certainly have an opinion. If you feed an image to a Convolutional Neural Network, say a classifier, it will tell you what it "thinks" is in the image, but sometimes you may wonder what contributed to that particular decision, as a way to "debug" your network. For example, when designing a gender classification neural network, I noticed that the network was paying attention to the ears and had learned to recognize them. It turns out that ears can be used to detect gender, so it was a good sign that my network was looking for them.

In the rest of the article, we will see how to use saliency maps to get an idea of what a neural network considers important. The following content is mostly an extract from the book Hands-On Vision and Behavior for Self-Driving Cars, which I wrote for Packt Publishing with the help of Krishtof Korda. In the previous part of the chapter, we trained a neural network to drive a car in the Carla simulator, and the network is stored in a variable called model.

To understand what the neural network is focusing its attention on, we should use a practical example, so let's choose an image:

[Figure: Test image]

If we had to drive on this road as humans, we would pay attention to the lanes and to the wall, though admittedly the wall matters less than the last lane before it.

We already know one way to get an idea of what a CNN (short for Convolutional Neural Network) such as DAVE-2 is taking into consideration: since the output of a convolutional layer is an image, we can visualize it directly; a short code sketch of this follows below.

[Figure: Part of the activations of the first convolutional layer]

This is a good starting point, but we would like something more: we would like to understand which pixels contribute the most to the prediction. For that, we need a saliency map. Keras does not support them directly, but we can use keras-vis, as in the second sketch below.
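Here is a minimal sketch of how an activation grid like the one above can be produced. It assumes model is the trained Keras network and image is a single preprocessed frame; the lookup of the first convolutional layer by name is an illustrative assumption, not code from the book.

# A minimal sketch for visualizing first-layer activations.
# Assumes `model` (trained Keras model) and `image` (a preprocessed
# H x W x C frame) already exist; the layer lookup is illustrative.
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.models import Model

# Build a sub-model that stops at the first convolutional layer.
first_conv = next(l for l in model.layers if 'conv' in l.name)
activation_model = Model(inputs=model.input, outputs=first_conv.output)

# Run the image through it; output shape is (1, h, w, n_filters).
activations = activation_model.predict(np.expand_dims(image, axis=0))

# Plot up to 16 feature maps in a 4x4 grid.
n_maps = min(16, activations.shape[-1])
fig, axes = plt.subplots(4, 4, figsize=(8, 8))
for i, ax in enumerate(axes.flat):
    if i < n_maps:
        ax.imshow(activations[0, :, :, i], cmap='gray')
    ax.axis('off')
plt.show()

If your model was built with standalone Keras rather than tf.keras, swap the Model import accordingly.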
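And here is a minimal saliency sketch with keras-vis. It assumes a standalone-Keras model in model (keras-vis predates tf.keras) and the same image variable; the linear-activation swap and layer_idx = -1 follow keras-vis's usual recipe rather than the book's exact code.

# A minimal saliency-map sketch using keras-vis. Assumes `model` is a
# standalone-Keras model and `image` is a preprocessed input frame.
# For a regression network such as DAVE-2, filter_indices=None takes
# the gradient of the raw output with respect to the input pixels.
import matplotlib.pyplot as plt
from keras import activations
from vis.utils import utils
from vis.visualization import visualize_saliency

# keras-vis suggests swapping the final activation for a linear one so
# gradients are not squashed; apply_modifications rebuilds the model.
layer_idx = -1
model.layers[layer_idx].activation = activations.linear
model = utils.apply_modifications(model)

# Gradient of the output with respect to each input pixel.
saliency = visualize_saliency(model, layer_idx, filter_indices=None,
                              seed_input=image)

plt.imshow(saliency, cmap='jet')
plt.axis('off')
plt.show()

visualize_saliency also accepts backprop_modifier='guided', which often produces sharper, less noisy maps.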
