Source: vis/regularizers.py#L0
normalize
`normalize(input_tensor, output_tensor)`
Normalizes `output_tensor` with respect to the `input_tensor` dimensions.
This keeps the regularizer weight factor roughly uniform across different input image sizes.
Args:
- input_tensor: A tensor of shape `(samples, channels, image_dims...)` if `image_data_format=channels_first`, or `(samples, image_dims..., channels)` if `image_data_format=channels_last`.
- output_tensor: The tensor to normalize.
Returns:
The normalized tensor.
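The behavior described above can be sketched in NumPy. This is an illustrative sketch, not the library's backend implementation; it assumes the weight factor is made uniform by dividing by the number of elements in one input sample (every axis except `samples`).

```python
import numpy as np

def normalize(input_tensor, output_tensor):
    """Sketch: scale output_tensor by the number of elements in one
    input sample, so the regularizer weight is comparable across
    input image sizes. (Assumption: the real version operates on
    backend tensors, not NumPy arrays.)"""
    # Count every element except those on the batch (samples) axis.
    num_elements = np.prod(input_tensor.shape[1:])
    return output_tensor / num_elements
```

For example, with an input of shape `(samples, 4, 4, 3)` each sample has 48 elements, so a raw loss of 96.0 would be scaled to 2.0.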
TotalVariation
TotalVariation.__init__
`__init__(self, img_input, beta=2.0)`
The total variation regularizer encourages blobby, coherent image structures, akin to natural images.
See section 3.2.2 in *Visualizing deep convolutional neural networks using natural pre-images* for details.
Args:
- img_input: An image tensor of shape `(samples, channels, image_dims...)` if `image_data_format=channels_first`, or `(samples, image_dims..., channels)` if `image_data_format=channels_last`.
- beta: Smaller values of beta give sharper but 'spikier' images; the default is a reasonable compromise. (Default value = 2.)
TotalVariation.build_loss
`build_loss(self)`
Implements the N-dim version of the total variation function, returning the total variation over all images in the batch.
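The N-dim computation can be illustrated with NumPy. This is a minimal sketch of the TV_beta formulation from the paper cited above (squared forward differences along every spatial axis, summed per pixel, raised to beta/2, then summed over the image); the actual implementation works on backend tensors and may differ in slicing details.

```python
import numpy as np

def total_variation(img_batch, beta=2.0):
    """Sketch of N-dim total variation for a channels_last batch
    of shape (samples, image_dims..., channels)."""
    ndim = img_batch.ndim
    spatial = list(range(1, ndim - 1))  # skip samples and channels axes
    # "Base" pixels: drop the last element along every spatial axis so
    # forward differences from all axes align on the same shape.
    base = tuple(slice(None, -1) if ax in spatial else slice(None)
                 for ax in range(ndim))
    sq_diff = np.zeros_like(img_batch[base], dtype=float)
    for ax in spatial:
        # Shift by one along this axis, crop the others like `base`.
        shifted = tuple(
            slice(1, None) if a == ax
            else (slice(None, -1) if a in spatial else slice(None))
            for a in range(ndim))
        sq_diff += (img_batch[shifted] - img_batch[base]) ** 2
    return float(np.sum(sq_diff ** (beta / 2.0)))
```

A constant image has zero total variation; sharp per-pixel jumps are penalized, which is what pushes optimized images toward blobby, coherent structures.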
LPNorm
LPNorm.__init__
`__init__(self, img_input, p=6.0)`
Builds an L-p norm function. This regularizer encourages pixel intensities to stay bounded, i.e., it prevents pixels from taking on very large values.
Args:
- img_input: 4D image input tensor to the model of shape `(samples, channels, rows, cols)` if `data_format='channels_first'`, or `(samples, rows, cols, channels)` if `data_format='channels_last'`.
- p: The p-th norm to use. If `p = float('inf')`, the infinity norm is used.
LPNorm.build_loss
`build_loss(self)`
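A minimal NumPy sketch of what this loss computes, under the assumption that it returns the p-norm of the image tensor (the real implementation operates on backend tensors):

```python
import numpy as np

def lp_norm(img_batch, p=6.0):
    """Sketch: L-p norm of all pixel values, penalizing large
    intensities. For p = float('inf'), the maximum absolute
    value is used instead."""
    if np.isinf(p):
        return float(np.max(np.abs(img_batch)))
    # (sum |x|^p)^(1/p)
    return float(np.sum(np.abs(img_batch) ** p) ** (1.0 / p))
```

As p grows, the norm is increasingly dominated by the largest pixel values, so minimizing it keeps intensities bounded without heavily penalizing typical pixels.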