Abstract class for defining the loss function to be minimized.
The loss function should be built by defining `build_loss`.
`name` should be defined to identify the loss function in verbose outputs.
Defaults to 'Unnamed Loss' if not overridden.
Implement this function to build the loss function expression.
Any additional arguments required to build this loss function may be passed in via `__init__`.
Ideally, the function expression should be compatible with all keras backends;
`utils.slicer` can be used to define data-format-agnostic slices.
(just define the slice in `channels_first` format; it will automatically
shuffle indices for the tensorflow backend). For example:

    # theano slice
    conv_layer[:, filter_idx, ...]

    # TF slice
    conv_layer[..., filter_idx]

    # Backend agnostic slice
    conv_layer[utils.slicer[:, filter_idx, ...]]
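The idea behind such a slicer can be sketched in plain numpy. The class and function names below (`_Slicer`, `apply_slice`) are illustrative stand-ins, not the library's actual implementation: the slice is written once in `channels_first` order, and the channel index is rotated to the trailing position for `channels_last` backends.

```python
import numpy as np


class _Slicer(object):
    """Toy stand-in for a slicer utility: captures the slice tuple
    as written, in channels_first order."""
    def __getitem__(self, item):
        return item


slicer = _Slicer()


def apply_slice(tensor, item, data_format='channels_last'):
    # `item` is written as (batch, channel, ...); for channels_last
    # backends, move the channel index (position 1) to the end.
    if data_format == 'channels_last' and isinstance(item, tuple):
        item = item[:1] + item[2:] + item[1:2]
    return tensor[item]


# channels_last tensor: (batch, rows, cols, channels)
x_last = np.zeros((2, 4, 4, 3))
sl = slicer[:, 1, ...]  # written in channels_first order
print(apply_slice(x_last, sl).shape)  # selects channel 1 -> (2, 4, 4)
```

The same `sl` tuple could be applied directly for a `channels_first` backend, so loss code never needs to branch on the data format.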
`utils.get_img_shape` is another optional utility that makes this easier.
The loss expression.
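Putting the interface together, a subclass defines `name` and returns its expression from `build_loss`. The sketch below uses plain numpy values in place of symbolic backend tensors so it is self-contained; `L2Loss` is a hypothetical example, not part of the library.

```python
import numpy as np


class Loss(object):
    """Abstract base: subclasses set `name` and implement `build_loss`."""
    name = 'Unnamed Loss'

    def build_loss(self):
        raise NotImplementedError()


class L2Loss(Loss):
    """Toy concrete loss: mean squared activation of a stored tensor.
    Extra arguments (here, the tensor) are passed in via __init__."""
    name = 'L2 Loss'

    def __init__(self, tensor):
        self.tensor = tensor

    def build_loss(self):
        # A real implementation would return a symbolic backend
        # expression; here we compute the value directly.
        return float(np.mean(np.square(self.tensor)))


loss = L2Loss(np.array([1.0, 2.0, 3.0]))
print(loss.name)          # 'L2 Loss'
print(loss.build_loss())  # mean of [1, 4, 9]
```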
A loss function that maximizes the activation of a set of filters within a particular layer.
Typically this loss is used to ask the reverse question: what kind of input image would increase the network's confidence for, say, the 'dog' class? This helps determine what the network might be internalizing as the 'dog' image space.
One might also use this to generate an input image that maximizes both 'dog' and 'human' outputs on the final Dense layer.
__init__(self, layer, filter_indices)
- layer: The keras layer whose filters need to be maximized. This can either be a convolutional layer or a dense layer.
- filter_indices: filter indices within the layer to be maximized.
For a keras.layers.Dense layer, `filter_idx` is interpreted as the output index.
If you are optimizing the final
keras.layers.Dense layer to maximize a class output, you tend to get
better results with 'linear' activation as opposed to 'softmax'. This is because 'softmax'
output can be maximized by minimizing scores for other classes.