Explicating Neural Networks

The usual problem with neural nets is that, once you have trained them, you have no real idea why they come to the decisions they do. One classic example was a net trained to distinguish pictures of huskies from wolves, which turned out to be assigning the highest weight to the amount of snow in the background of the pictures. Enter LIME, a Python-based toolkit that probes a trained model by perturbing its inputs and fitting a simple local surrogate model, so it can show you which parts of a given input contributed to a particular prediction <https://www.theregister.co.uk/2018/11/05/hands_on_with_lime/>. The article shows example pictures where areas are colour-coded green or red to indicate whether they make a positive or negative contribution to a particular classification decision. This way, you can spot whether parts that should be ignored are being given an anomalously high weight, or vice versa.
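
To give an idea of what using it looks like, here is a minimal sketch with LIME's image explainer. The classify_batch function is a hypothetical stand-in for your trained net (any callable that maps a batch of images to class probabilities will do), and the sample image comes from skimage rather than the husky/wolf dataset:

    import matplotlib.pyplot as plt
    import numpy as np
    from lime import lime_image
    from skimage import data
    from skimage.segmentation import mark_boundaries

    def classify_batch(images):
        # Stand-in for your trained net: takes an (N, H, W, 3) batch and
        # returns (N, num_classes) class probabilities. In real use you
        # would call something like model.predict(images) here; this dummy
        # two-class score just lets the sketch run end to end.
        brightness = images.mean(axis = (1, 2, 3)) / 255.0
        return np.stack([brightness, 1.0 - brightness], axis = 1)

    image = data.chelsea()  # bundled sample image; substitute your own

    explainer = lime_image.LimeImageExplainer()
    # LIME segments the image into superpixels, blanks out random subsets
    # of them, runs each perturbed copy through the classifier, and fits a
    # weighted linear model relating superpixels to the predicted class.
    explanation = explainer.explain_instance(
        image,
        classify_batch,
        top_labels = 1,
        hide_color = 0,
        num_samples = 1000,
    )
    # overlay marks positive-contribution superpixels in green
    # and negative ones in red
    overlay, mask = explanation.get_image_and_mask(
        explanation.top_labels[0],
        positive_only = False,
        num_features = 10,
        hide_rest = False,
    )
    plt.imshow(mark_boundaries(overlay, mask))
    plt.show()

The explain_instance call is where the work happens. Note that LIME never looks inside the net at all: it just watches how the predictions change as pieces of the input are hidden, which is why it works on any model you can call.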