Source: vis/visualization/__init__.py#L0


visualize_activation_with_losses

visualize_activation_with_losses(input_tensor, losses, wrt_tensor=None, seed_input=None, \
    input_range=(0, 255), **optimizer_params)

Generates the input_tensor that minimizes the weighted losses. This function is intended for advanced use cases where a custom loss is desired.

Args:

  • input_tensor: An input tensor of shape: (samples, channels, image_dims...) if image_data_format=channels_first or (samples, image_dims..., channels) if image_data_format=channels_last.
  • losses: List of (Loss, weight) tuples.
  • wrt_tensor: Short for 'with respect to'. The gradients of losses are computed with respect to this tensor. When None, this is assumed to be the same as input_tensor. (Default value = None)
  • seed_input: Seeds the optimization with a starting image. Initialized with a random value when set to None. (Default value = None)
  • input_range: Specifies the input range as a (min, max) tuple. This is used to rescale the final optimized input to the given range. (Default value=(0, 255))
  • optimizer_params: The **kwargs passed to the optimizer. Reasonable defaults are used for any required keys that are not provided.

Returns:

The model input that minimizes the weighted losses.
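
Below is a minimal sketch of how a custom loss list might be assembled, assuming a Keras model model and a layer index layer_idx are already defined; the filter index is illustrative, and the regularizer weights simply mirror the defaults used by visualize_activation.

    from vis.losses import ActivationMaximization
    from vis.regularizers import TotalVariation, LPNorm
    from vis.visualization import visualize_activation_with_losses

    # Hypothetical setup: maximize output 20 of the layer at layer_idx.
    layer = model.layers[layer_idx]
    losses = [
        (ActivationMaximization(layer, [20]), 1),   # activation term to maximize
        (LPNorm(model.input), 10),                  # penalize extreme pixel values
        (TotalVariation(model.input), 10),          # encourage a smooth, natural-looking image
    ]
    img = visualize_activation_with_losses(model.input, losses, input_range=(0, 255))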


get_num_filters

get_num_filters(layer)

Determines the number of filters within the given layer.

Args:

  • layer: The keras layer to use.

Returns:

Total number of filters within layer. For a keras.layers.Dense layer, this is the total number of outputs.
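
For example, this is handy when sweeping over every filter in a layer; the model and layer index below are assumed to be defined elsewhere.

    import numpy as np
    from vis.visualization import get_num_filters

    # Hypothetical: enumerate all filter indices of the layer at layer_idx.
    filter_indices = np.arange(get_num_filters(model.layers[layer_idx]))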


overlay

overlay(array1, array2, alpha=0.5)

Overlays array1 onto array2 with alpha blending.

Args:

  • array1: The first numpy array.
  • array2: The second numpy array.
  • alpha: The alpha value of array1 as overlaid onto array2. This value must lie within [0, 1]; 0 shows only array2 and 1 shows only array1. (Default value = 0.5)

Returns:

array1 overlaid onto array2 using alpha blending.
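
A typical use is blending a colorized heatmap onto the original image. In the sketch below, heatmap and img are assumed to exist (a 2D array in [0, 1] and an RGB image of the same spatial size); the matplotlib colormap step is illustrative and not part of this API.

    import numpy as np
    import matplotlib.cm as cm
    from vis.visualization import overlay

    # Colorize the 2D heatmap into an RGB uint8 image, then blend it onto the original.
    jet_heatmap = np.uint8(cm.jet(heatmap)[..., :3] * 255)
    blended = overlay(jet_heatmap, img, alpha=0.6)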


visualize_saliency_with_losses

visualize_saliency_with_losses(input_tensor, losses, seed_input, wrt_tensor=None, \
    grad_modifier="absolute", keepdims=False)

Generates an attention heatmap over the seed_input by using positive gradients of input_tensor with respect to weighted losses.

This function is intended for advanced use cases where a custom loss is desired. For common use cases, refer to visualize_class_saliency or visualize_regression_saliency.

For a full description of saliency, see the paper: [Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps](https://arxiv.org/pdf/1312.6034v2.pdf)

Args:

  • input_tensor: An input tensor of shape: (samples, channels, image_dims...) if image_data_format=channels_first or (samples, image_dims..., channels) if image_data_format=channels_last.
  • losses: List of (Loss, weight) tuples.
  • seed_input: The model input for which the activation map needs to be visualized.
  • wrt_tensor: Short for 'with respect to'. The gradients of losses are computed with respect to this tensor. When None, this is assumed to be the same as input_tensor. (Default value = None)
  • grad_modifier: gradient modifier to use. See grad_modifiers. By default the absolute value of gradients is used. To visualize positive or negative gradients, use relu and negate respectively. (Default value = 'absolute')
  • keepdims: A boolean, whether to keep the dimensions or not. If keepdims is False, the channels axis is deleted. If keepdims is True, the gradient with the same shape as input_tensor is returned. (Default value = False)

Returns:

The normalized gradients of seed_input with respect to weighted losses.
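
A minimal sketch of this advanced API, assuming model, layer_idx, and an input image img are defined elsewhere; a single ActivationMaximization term with weight -1 roughly mirrors how the convenience wrapper visualize_saliency constructs its loss, and the filter index is illustrative.

    from vis.losses import ActivationMaximization
    from vis.visualization import visualize_saliency_with_losses

    # Saliency for output 20 of the layer at layer_idx with respect to img.
    losses = [(ActivationMaximization(model.layers[layer_idx], [20]), -1)]
    grads = visualize_saliency_with_losses(model.input, losses, seed_input=img)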


visualize_activation

visualize_activation(model, layer_idx, filter_indices=None, wrt_tensor=None, seed_input=None, \
    input_range=(0, 255), backprop_modifier=None, grad_modifier=None, act_max_weight=1, \
    lp_norm_weight=10, tv_weight=10, **optimizer_params)

Generates the model input that maximizes the output of all filter_indices in the given layer_idx.

Args:

  • model: The keras.models.Model instance. The model input shape must be: (samples, channels, image_dims...) if image_data_format=channels_first or (samples, image_dims..., channels) if image_data_format=channels_last.
  • layer_idx: The layer index within model.layers whose filters need to be visualized.
  • filter_indices: Filter indices within the layer to be maximized. If None, all filters are visualized. (Default value = None) For a keras.layers.Dense layer, filter_idx is interpreted as the output index. If you are visualizing the final keras.layers.Dense layer, consider switching the 'softmax' activation for 'linear' using utils.apply_modifications for better results.
  • wrt_tensor: Short for 'with respect to'. The gradients of losses are computed with respect to this tensor. When None, this is assumed to be the same as input_tensor. (Default value = None)
  • seed_input: Seeds the optimization with a starting input. Initialized with a random value when set to None. (Default value = None)
  • input_range: Specifies the input range as a (min, max) tuple. This is used to rescale the final optimized input to the given range. (Default value=(0, 255))
  • backprop_modifier: backprop modifier to use. See backprop_modifiers. If you don't specify anything, no backprop modification is applied. (Default value = None)
  • grad_modifier: gradient modifier to use. See grad_modifiers. If you don't specify anything, gradients are unchanged. (Default value = None)
  • act_max_weight: The weight param for ActivationMaximization loss. Not used if 0 or None. (Default value = 1)
  • lp_norm_weight: The weight param for LPNorm regularization loss. Not used if 0 or None. (Default value = 10)
  • tv_weight: The weight param for TotalVariation regularization loss. Not used if 0 or None. (Default value = 10)
  • optimizer_params: The **kwargs passed to the optimizer. Reasonable defaults are used for any required keys that are not provided.

Example:

If you want to visualize the input image that maximizes output index 22, say on the final keras.layers.Dense layer, then filter_indices = [22] and layer_idx = dense_layer_idx.

If filter_indices = [22, 23], then it should generate an input image that shows features of both classes.

Returns:

The model input that maximizes the output of filter_indices in the given layer_idx.
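
An end-to-end sketch, using VGG16 purely for illustration; the softmax-to-linear swap follows the advice given for filter_indices above.

    from keras import activations
    from keras.applications import VGG16
    from vis.utils import utils
    from vis.visualization import visualize_activation

    model = VGG16(weights='imagenet', include_top=True)
    layer_idx = utils.find_layer_idx(model, 'predictions')

    # Swap softmax for linear so gradients are not squashed, then rebuild the graph.
    model.layers[layer_idx].activation = activations.linear
    model = utils.apply_modifications(model)

    # Generate the input image that maximizes output index 20 of the final Dense layer.
    img = visualize_activation(model, layer_idx, filter_indices=20)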


visualize_saliency

visualize_saliency(model, layer_idx, filter_indices, seed_input, wrt_tensor=None, \
    backprop_modifier=None, grad_modifier="absolute", keepdims=False)

Generates an attention heatmap over the seed_input for maximizing filter_indices output in the given layer_idx.

Args:

  • model: The keras.models.Model instance. The model input shape must be: (samples, channels, image_dims...) if image_data_format=channels_first or (samples, image_dims..., channels) if image_data_format=channels_last.
  • layer_idx: The layer index within model.layers whose filters need to be visualized.
  • filter_indices: Filter indices within the layer to be maximized. If None, all filters are visualized. (Default value = None) For a keras.layers.Dense layer, filter_idx is interpreted as the output index. If you are visualizing the final keras.layers.Dense layer, consider switching the 'softmax' activation for 'linear' using utils.apply_modifications for better results.
  • seed_input: The model input for which the activation map needs to be visualized.
  • wrt_tensor: Short for 'with respect to'. The gradients of losses are computed with respect to this tensor. When None, this is assumed to be the same as input_tensor. (Default value = None)
  • backprop_modifier: backprop modifier to use. See backprop_modifiers. If you don't specify anything, no backprop modification is applied. (Default value = None)
  • grad_modifier: gradient modifier to use. See grad_modifiers. By default the absolute value of gradients is used. To visualize positive or negative gradients, use relu and negate respectively. (Default value = 'absolute')
  • keepdims: A boolean, whether to keep the dimensions or not. If keepdims is False, the channels axis is deleted. If keepdims is True, the gradient with the same shape as input_tensor is returned. (Default value = False)

Example:

If you want to visualize attention over the 'bird' category, say output index 22 on the final keras.layers.Dense layer, then filter_indices = [22] and layer_idx = dense_layer_idx.

You can also set filter_indices to more than one value. For example, filter_indices = [22, 23] should (hopefully) show an attention map that corresponds to both output categories 22 and 23.

Returns:

The heatmap image indicating the seed_input regions whose change would most contribute towards maximizing the output of filter_indices.
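
A minimal sketch, continuing the illustrative VGG16 setup from visualize_activation above and assuming a preprocessed input image img; 'guided' is just one of the available backprop_modifiers.

    from vis.utils import utils
    from vis.visualization import visualize_saliency

    layer_idx = utils.find_layer_idx(model, 'predictions')

    # Highlight the pixels of img that most affect output index 22.
    grads = visualize_saliency(model, layer_idx, filter_indices=22,
                               seed_input=img, backprop_modifier='guided')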


visualize_cam_with_losses

visualize_cam_with_losses(input_tensor, losses, seed_input, penultimate_layer, grad_modifier=None)

Generates a gradient-based class activation map (CAM) by using positive gradients of input_tensor with respect to weighted losses.

For details on grad-CAM, see the paper: [Grad-CAM: Why did you say that? Visual Explanations from Deep Networks via Gradient-based Localization](https://arxiv.org/pdf/1610.02391v1.pdf).

Unlike class activation mapping, which in some instances requires minor changes to the network architecture, grad-CAM has more general applicability.

Compared to saliency maps, grad-CAM is class discriminative; i.e., the 'cat' explanation exclusively highlights cat regions and not the 'dog' region and vice-versa.

Args:

  • input_tensor: An input tensor of shape: (samples, channels, image_dims...) if image_data_format=channels_first or (samples, image_dims..., channels) if image_data_format=channels_last.
  • losses: List of (Loss, weight) tuples.
  • seed_input: The model input for which the activation map needs to be visualized.
  • penultimate_layer: The pre-layer to layer_idx whose feature maps should be used to compute gradients with respect to filter output.
  • grad_modifier: gradient modifier to use. See grad_modifiers. If you don't specify anything, gradients are unchanged. (Default value = None)

Returns:

The normalized gradients of seed_input with respect to weighted losses.
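
A minimal sketch of this advanced grad-CAM API, assuming model, layer_idx, penultimate_layer_idx, and an input image img are defined elsewhere; as with the saliency variant, a single ActivationMaximization term serves as the loss, and the penultimate layer is passed as a layer instance rather than an index.

    from vis.losses import ActivationMaximization
    from vis.visualization import visualize_cam_with_losses

    # Grad-CAM for output 20 of the layer at layer_idx, using the feature maps
    # of the layer at penultimate_layer_idx.
    losses = [(ActivationMaximization(model.layers[layer_idx], [20]), -1)]
    grads = visualize_cam_with_losses(model.input, losses, seed_input=img,
                                      penultimate_layer=model.layers[penultimate_layer_idx])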


visualize_cam

visualize_cam(model, layer_idx, filter_indices, seed_input, penultimate_layer_idx=None, \
    backprop_modifier=None, grad_modifier=None)

Generates a gradient-based class activation map (grad-CAM) that maximizes the outputs of filter_indices in layer_idx.

Args:

  • model: The keras.models.Model instance. The model input shape must be: (samples, channels, image_dims...) if image_data_format=channels_first or (samples, image_dims..., channels) if image_data_format=channels_last.
  • layer_idx: The layer index within model.layers whose filters need to be visualized.
  • filter_indices: Filter indices within the layer to be maximized. If None, all filters are visualized. (Default value = None) For a keras.layers.Dense layer, filter_idx is interpreted as the output index. If you are visualizing the final keras.layers.Dense layer, consider switching the 'softmax' activation for 'linear' using utils.apply_modifications for better results.
  • seed_input: The input image for which the activation map needs to be visualized.
  • penultimate_layer_idx: The pre-layer to layer_idx whose feature maps should be used to compute gradients wrt filter output. If not provided, it is set to the nearest penultimate Conv or Pooling layer.
  • backprop_modifier: backprop modifier to use. See backprop_modifiers. If you don't specify anything, no backprop modification is applied. (Default value = None)
  • grad_modifier: gradient modifier to use. See grad_modifiers. If you don't specify anything, gradients are unchanged. (Default value = None)

Example:

If you want to visualize attention over the 'bird' category, say output index 22 on the final keras.layers.Dense layer, then filter_indices = [22] and layer_idx = dense_layer_idx.

You can also set filter_indices to more than one value. For example, filter_indices = [22, 23] should (hopefully) show an attention map that corresponds to both output categories 22 and 23.

Returns:

The heatmap image indicating the input regions whose change would most contribute towards maximizing the output of filter_indices.
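
A minimal sketch, again assuming the illustrative VGG16 setup and a preprocessed input image img; the returned heatmap can be blended onto the original image with overlay as shown earlier.

    from vis.utils import utils
    from vis.visualization import visualize_cam

    layer_idx = utils.find_layer_idx(model, 'predictions')

    # Grad-CAM heatmap for output index 22 given input image img.
    # penultimate_layer_idx is left as None, so the nearest Conv/Pooling layer is used.
    heatmap = visualize_cam(model, layer_idx, filter_indices=22,
                            seed_input=img, backprop_modifier='guided')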