Source: vis/losses.py#L0


Loss

Abstract class for defining the loss function to be minimized. The loss function should be built by overriding the build_loss method.

The name attribute should be defined to identify the loss function in verbose outputs. It defaults to 'Unnamed Loss' if not overridden.
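
For illustration, a minimal sketch of the subclass contract. The class name and constructor arguments here are hypothetical, not part of the library; only the name attribute and build_loss override follow the documented contract.

from keras import backend as K
from vis.losses import Loss

class LpNormLoss(Loss):
    """Hypothetical example: penalizes the L-p norm of the input image."""
    def __init__(self, img_input, p=2):
        super(LpNormLoss, self).__init__()
        self.name = 'L-{} Norm Loss'.format(p)  # identifies this loss in verbose outputs
        self.img = img_input
        self.p = p

    def build_loss(self):
        # Return a scalar backend expression to be minimized.
        return K.sum(K.pow(K.abs(self.img), self.p))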


Loss.__init__

__init__(self)

Loss.build_loss

build_loss(self)

Implement this method to build the loss expression. Any additional arguments required to build the loss may be passed in via __init__.

Ideally, the loss expression should be compatible with all Keras backends and with both channels_first and channels_last image_data_format settings. utils.slicer can be used to define data-format-agnostic slices: write the slice in channels_first format and the indices are automatically reordered for TensorFlow, which uses channels_last.

# Theano slice (channels_first)
conv_layer[:, filter_idx, ...]

# TensorFlow slice (channels_last)
conv_layer[..., filter_idx]

# Backend-agnostic slice
conv_layer[utils.slicer[:, filter_idx, ...]]

utils.get_img_shape is another optional utility that makes this easier.
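
For example, a hedged sketch; the channels_first return order is an assumption based on the keras-vis utilities, and img_tensor is a hypothetical 4D image tensor:

from vis.utils import utils

# Backend-agnostic shape: (samples, channels, rows, cols), even on TensorFlow.
samples, channels, rows, cols = utils.get_img_shape(img_tensor)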

Returns:

The loss expression.
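
Putting it together, here is a hedged sketch of a complete backend-agnostic build_loss built around utils.slicer. The class, its constructor arguments, and the negation convention are illustrative assumptions, not part of the library:

from keras import backend as K
from vis.losses import Loss
from vis.utils import utils

class FilterMeanLoss(Loss):
    """Hypothetical example: maximizes the mean activation of one conv filter."""
    def __init__(self, conv_output, filter_idx):
        super(FilterMeanLoss, self).__init__()
        self.name = 'Filter Mean Loss'
        self.conv_output = conv_output  # output tensor of a convolutional layer
        self.filter_idx = filter_idx

    def build_loss(self):
        # The slice is written in channels_first form; utils.slicer reorders the
        # indices automatically for channels_last backends such as TensorFlow.
        # Negated because the optimizer minimizes the returned expression.
        return -K.mean(self.conv_output[utils.slicer[:, self.filter_idx, ...]])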


ActivationMaximization

A loss function that maximizes the activation of a set of filters within a particular layer.

Typically this loss is used to ask the reverse question: what kind of input image would increase the network's confidence for, say, the 'dog' class? This helps determine what the network might be internalizing as the 'dog' image space.

One might also use this to generate an input image that maximizes both 'dog' and 'human' outputs on the final keras.layers.Dense layer.


ActivationMaximization.__init__

__init__(self, layer, filter_indices)

Args:

  • layer: The keras layer whose filters need to be maximized. This can either be a convolutional layer or a dense layer.
  • filter_indices: filter indices within the layer to be maximized. For a keras.layers.Dense layer, each index is interpreted as an output index.

If you are optimizing the final keras.layers.Dense layer to maximize a class output, you tend to get better results with a 'linear' activation as opposed to 'softmax'. This is because the 'softmax' output can also be maximized by minimizing the scores of the other classes.
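
A hedged usage sketch follows. The model, the 'preds' layer name, and class index 20 are hypothetical; utils.find_layer_idx and utils.apply_modifications are assumed to come from vis.utils:

from keras import activations
from vis.losses import ActivationMaximization
from vis.utils import utils

layer_idx = utils.find_layer_idx(model, 'preds')          # hypothetical final Dense layer name
model.layers[layer_idx].activation = activations.linear  # swap 'softmax' for 'linear'
model = utils.apply_modifications(model)                  # rebuild the graph with the new activation

# Maximize the hypothetical 'dog' class (index 20); pass e.g. [20, 4] to
# maximize 'dog' and 'human' outputs jointly.
loss = ActivationMaximization(model.layers[layer_idx], filter_indices=[20])
loss_expr = loss.build_loss()  # backend expression to be minimized by an optimizer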


ActivationMaximization.build_loss

build_loss(self)