Source: keras_text/models/sequence_encoders.py#L0


SequenceEncoderBase


SequenceEncoderBase.__init__

__init__(self, dropout_rate=0.5)

Creates a new sequence encoder instance.

Args:

  • dropout_rate: Dropout rate applied to the final encoded output. (Default value = 0.5)

SequenceEncoderBase.build_model

build_model(self, x)

Builds the model graph. Subclasses must override this method to define the encoder.

Args:

  • x: The encoded or embedded input sequence.

Returns:

The model output tensor without the classification block.


SequenceEncoderBase.requires_padding

requires_padding(self)

Returns a boolean indicating whether this model expects padded input sequences.
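
A custom encoder only needs to implement these two methods. Below is a minimal sketch, assuming `SequenceEncoderBase` is importable from `keras_text.models.sequence_encoders` (per the source path above) and that `x` arrives as an embedded `(batch, timesteps, features)` tensor; the class name and pooling choice are illustrative, not part of the library:

```python
from keras.layers import GlobalMaxPooling1D

from keras_text.models.sequence_encoders import SequenceEncoderBase


class MaxPoolEncoder(SequenceEncoderBase):
    """Hypothetical encoder that max-pools over the time axis."""

    def build_model(self, x):
        # Collapse (batch, timesteps, features) to (batch, features).
        return GlobalMaxPooling1D()(x)

    def requires_padding(self):
        # Global max pooling copes with variable-length sequences.
        return False
```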


YoonKimCNN


YoonKimCNN.__init__

__init__(self, num_filters=64, filter_sizes=[3, 4, 5], dropout_rate=0.5, **conv_kwargs)

Yoon Kim's shallow CNN model: https://arxiv.org/pdf/1408.5882.pdf

Args:

  • num_filters: The number of filters to use per filter_size. (Default value = 64)
  • filter_sizes: The filter sizes for each convolutional layer. (Default value = [3, 4, 5])
  • conv_kwargs: Additional args for building the Conv1D layers.
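
For reference, the parallel-convolution pattern from Kim (2014) looks roughly like this in plain Keras. This is a sketch, not the library's implementation; the vocabulary and sequence sizes are placeholders:

```python
from keras.layers import (Input, Embedding, Conv1D, GlobalMaxPooling1D,
                          Dropout, concatenate)
from keras.models import Model

# Hypothetical sizes for illustration.
tokens = Input(shape=(100,), dtype='int32')
x = Embedding(20000, 128)(tokens)

# One Conv1D + max-pool branch per filter size, as in the paper.
pooled = []
for size in [3, 4, 5]:
    branch = Conv1D(64, size, activation='relu')(x)
    pooled.append(GlobalMaxPooling1D()(branch))

merged = Dropout(0.5)(concatenate(pooled))
model = Model(tokens, merged)  # output lacks the classification block
```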

YoonKimCNN.build_model

build_model(self, x)

YoonKimCNN.requires_padding

requires_padding(self)

StackedRNN


StackedRNN.__init__

__init__(self, rnn_class=<class 'keras.layers.recurrent.GRU'>, hidden_dims=[50, 50], \
    bidirectional=True, dropout_rate=0.5, **rnn_kwargs)

Creates a stacked RNN.

Args:

  • rnn_class: The type of RNN to use. (Default value = GRU)
  • hidden_dims: The hidden dimensions of each stacked RNN layer. (Default value = [50, 50])
  • bidirectional: Whether to use bidirectional encoding. (Default value = True)
  • rnn_kwargs: Additional args for building the RNN layers.
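
The stacking pattern, sketched in plain Keras under assumed placeholder sizes (not the library's code):

```python
from keras.layers import Input, Embedding, GRU, Bidirectional, Dropout
from keras.models import Model

tokens = Input(shape=(100,), dtype='int32')
x = Embedding(20000, 128)(tokens)

hidden_dims = [50, 50]
for i, dim in enumerate(hidden_dims):
    # Every layer except the last returns full sequences so the next
    # RNN in the stack receives a (batch, timesteps, units) input.
    rnn = GRU(dim, return_sequences=i < len(hidden_dims) - 1)
    x = Bidirectional(rnn)(x)

x = Dropout(0.5)(x)
model = Model(tokens, x)
```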

StackedRNN.build_model

build_model(self, x)

StackedRNN.requires_padding

requires_padding(self)

AttentionRNN


AttentionRNN.__init__

__init__(self, rnn_class=<class 'keras.layers.recurrent.GRU'>, encoder_dims=50, \
    bidirectional=True, dropout_rate=0.5, **rnn_kwargs)

Creates an RNN model with attention. The attention mechanism is implemented as described in https://www.cs.cmu.edu/~hovy/papers/16HLT-hierarchical-attention-networks.pdf, but without the sentence-level attention.

Args:

  • rnn_class: The type of RNN to use. (Default value = GRU)
  • encoder_dims: The number of hidden units in the RNN. (Default value = 50)
  • bidirectional: Whether to use bidirectional encoding. (Default value = True)
  • rnn_kwargs: Additional args for building the RNN.
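
A minimal sketch of the word-level attention the paper describes (tanh projection, softmax over timesteps, weighted sum of hidden states). Sizes are placeholders and this is an illustrative reconstruction, not the library's exact implementation:

```python
import keras.backend as K
from keras.layers import (Input, Embedding, GRU, Bidirectional, Dense,
                          Activation, Lambda)
from keras.models import Model

tokens = Input(shape=(100,), dtype='int32')
x = Embedding(20000, 128)(tokens)
h = Bidirectional(GRU(50, return_sequences=True))(x)  # (batch, time, 100)

# u_t = tanh(W h_t + b), then a scalar score per timestep.
u = Dense(100, activation='tanh')(h)
scores = Lambda(lambda t: K.squeeze(t, -1))(Dense(1)(u))  # (batch, time)
alpha = Activation('softmax')(scores)                     # attention weights

# Weighted sum of hidden states: (batch, time) x (batch, time, 100).
context = Lambda(lambda a: K.batch_dot(a[0], a[1], axes=(1, 1)))([alpha, h])
model = Model(tokens, context)
```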

AttentionRNN.build_model

build_model(self, x)

AttentionRNN.get_attention_tensor

get_attention_tensor(self)

AttentionRNN.requires_padding

requires_padding(self)

AveragingEncoder


AveragingEncoder.__init__

__init__(self, dropout_rate=0)

An encoder that averages sequence inputs.

Args:

  • dropout_rate: Dropout rate applied to the final encoded output. (Default value = 0)

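Functionally this amounts to mean pooling over the time axis; a rough plain-Keras equivalent, with placeholder sizes:

```python
from keras.layers import Input, Embedding, GlobalAveragePooling1D
from keras.models import Model

tokens = Input(shape=(100,), dtype='int32')
x = Embedding(20000, 128)(tokens)

# Mean over timesteps; padded zeros are averaged in, which is why
# padding behavior matters for this encoder.
avg = GlobalAveragePooling1D()(x)
model = Model(tokens, avg)
```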

AveragingEncoder.build_model

build_model(self, x)

AveragingEncoder.requires_padding

requires_padding(self)