Source: vis/optimizer.py#L0
Optimizer
Optimizer.__init__
__init__(self, input_tensor, losses, input_range=(0, 255), wrt_tensor=None, norm_grads=True)
Creates an optimizer that minimizes a weighted loss function.
Args:
- input_tensor: An input tensor of shape `(samples, channels, image_dims...)` if `image_data_format=channels_first`, or `(samples, image_dims..., channels)` if `image_data_format=channels_last`.
- losses: List of (Loss, weight) tuples.
- input_range: Specifies the input range as a `(min, max)` tuple. This is used to rescale the final optimized input to the given range. (Default value = (0, 255))
- wrt_tensor: Short for "with respect to". This instructs the optimizer that the aggregate loss from `losses` should be minimized with respect to `wrt_tensor`. `wrt_tensor` can be any tensor that is part of the model graph. Default value is set to None, which means that the loss will simply be minimized with respect to `input_tensor`.
- norm_grads: True to normalize gradients. Normalization avoids very small or large gradients and ensures a smooth gradient descent process. If you want the actual gradients (for example, when visualizing attention), set this to False. (Default value = True)
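As a minimal construction sketch (assuming a compiled Keras model; `ActivationMaximization` is the activation-maximization loss shipped in `vis.losses`):

```python
from keras.applications import VGG16
from vis.losses import ActivationMaximization
from vis.optimizer import Optimizer

model = VGG16(weights='imagenet', include_top=True)

# Maximize the activation of output index 20 in the final layer.
# The weight (1.) scales this loss's share of the aggregate loss.
losses = [(ActivationMaximization(model.layers[-1], filter_indices=20), 1.)]

opt = Optimizer(model.input, losses, input_range=(0, 255))
```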
Optimizer.minimize
minimize(self, seed_input=None, max_iter=200, input_modifiers=None, grad_modifier=None, callbacks=None, verbose=True)
Performs gradient descent on the input image with respect to defined losses.
Args:
- seed_input: An N-dim numpy array of shape `(samples, channels, image_dims...)` if `image_data_format=channels_first`, or `(samples, image_dims..., channels)` if `image_data_format=channels_last`. Seeded with random noise if set to None. (Default value = None)
- max_iter: The maximum number of gradient descent iterations. (Default value = 200)
- input_modifiers: A list of InputModifier instances specifying how to make `pre` and `post` changes to the optimized input during the optimization process. `pre` is applied in list order while `post` is applied in reverse order. For example, `input_modifiers = [f, g]` means that `pre_input = g(f(inp))` and `post_input = f(g(inp))`. (See the sketch after this list.) (Default value = None)
- grad_modifier: Gradient modifier to use. See grad_modifiers. If None, gradients are used unmodified. (Default value = None)
- callbacks: A list of OptimizerCallback instances to trigger during optimization. (Default value = None)
- verbose: Logs individual losses at the end of every gradient descent iteration. Very useful to estimate loss weight factor(s). (Default value = True)
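To make the `pre`/`post` ordering concrete, here is an illustrative sketch of the application order described above (assuming InputModifier subclasses expose `pre` and `post` methods, as in `vis.input_modifiers`; `f` and `g` are hypothetical modifiers, not part of the library):

```python
# With input_modifiers = [f, g], pre hooks run in list order and
# post hooks run in reverse order on each iteration:
def apply_pre(inp, modifiers):
    for m in modifiers:            # f first, then g
        inp = m.pre(inp)
    return inp                     # == g.pre(f.pre(inp))

def apply_post(inp, modifiers):
    for m in reversed(modifiers):  # g first, then f
        inp = m.post(inp)
    return inp                     # == f.post(g.post(inp))
```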
Returns:
The tuple of (optimized input, grads with respect to wrt_tensor, wrt_value)
after gradient descent iterations.
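Putting both calls together, a hedged end-to-end sketch (continuing the constructor example above; `Jitter` is an InputModifier shipped in `vis.input_modifiers`):

```python
from vis.input_modifiers import Jitter

# Start from random noise (seed_input=None) and run gradient descent,
# jittering the input before each step to reduce high-frequency artifacts.
img, grads, wrt_value = opt.minimize(seed_input=None,
                                     max_iter=200,
                                     input_modifiers=[Jitter(16)],
                                     verbose=True)

print(img.shape)  # optimized input, rescaled to input_range
```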