CS seminar series presents
Plotting feature dependence in black box models
by Jan Mrkos
Thursday, April 28 at 14:00 in room 205
Machine learning models such as neural networks and random forests are often viewed as "black boxes", because the feature space over which they learn the decision function is too complex to comprehend as a whole. Still, when needed, it is possible to visualize the effect of a small subset of features on the model response. However, this requires us to somehow simplify, aggregate, or discard the influence of the remaining features. Techniques for doing so include partial dependence plots (PDPs), probability cuts, and individual conditional expectation (ICE) plots. We propose a simple extension of these techniques that visualizes the partial effects of different features on the model's response to the data in a novel way.
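To give a flavor of the partial dependence idea mentioned in the abstract, here is a minimal sketch (not the speaker's method): the PDP of a feature is computed by clamping that feature to each value on a grid and averaging the model's response over the rest of the dataset. The toy `model` function and all names below are illustrative assumptions.

```python
import numpy as np

# Toy "black box" model: f(x0, x1) = x0^2 + 0.5 * x1.
def model(X):
    return X[:, 0] ** 2 + 0.5 * X[:, 1]

def partial_dependence(model, X, feature, grid):
    """Average model response as `feature` sweeps over `grid`,
    marginalizing over the remaining features in the dataset X."""
    pd = []
    for v in grid:
        Xv = X.copy()
        Xv[:, feature] = v            # clamp the chosen feature to v
        pd.append(model(Xv).mean())   # average over all other features
    return np.array(pd)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
grid = np.linspace(-2.0, 2.0, 5)
pdp = partial_dependence(model, X, feature=0, grid=grid)
# For this toy model, the PDP of feature 0 equals v^2 + 0.5 * mean(x1).
```

An ICE plot is the same construction without the final averaging step: one curve per data point, which reveals heterogeneity that the averaged PDP hides.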