deepr.layers package
Submodules
deepr.layers.base module
Interface for Layers
- class deepr.layers.base.Lambda(fn, **kwargs)[source]
Bases: Layer
Lambda layer.
Example
>>> from deepr.layers import Lambda
>>> add_one = Lambda(lambda tensors, _: tensors + 1, inputs="x", outputs="y")
>>> add_one(1)
2
>>> add_one({"x": 1})
{'y': 2}
- forward(tensors, mode=None)[source]
Forward method on one Tensor or a tuple of Tensors.
- Parameters:
tensors (Union[tf.Tensor, Tuple[tf.Tensor, ...]]) –
n_in = 1: one tensor (NOT wrapped in a tuple)
n_in > 1: a tuple of tensors
mode (str, optional) – Description
- Returns:
n_out = 1: one tensor (NOT wrapped in a tuple)
n_out > 1: a tuple of tensors
- Return type:
Union[tf.Tensor, Tuple[tf.Tensor, …]]
- class deepr.layers.base.Layer(n_in=None, n_out=None, inputs=None, outputs=None, name=None)[source]
Bases: ABC
Base class for composable layers in a deep learning network.
Heavily inspired by TRAX layers, adapted for TF1.X and tf.estimator.
Layers are the basic building block of models. A Layer is a function from one or more inputs to one or more outputs.
- The inputs of a Layer are tensors, packaged as follows:
  n_in = 1: one tensor (NOT wrapped in a tuple)
  n_in > 1: a tuple of tensors
- The outputs of a Layer are tensors, packaged as follows:
  n_out = 1: one tensor (NOT wrapped in a tuple)
  n_out > 1: a tuple of tensors
The basic usage of a Layer is to build graphs as intuitively as possible. For example:
>>> from deepr.layers import Dense
>>> input_tensor = tf.ones([32, 8])
>>> dense = Dense(16)
>>> output_tensor = dense(input_tensor)
>>> output_tensor
<tf.Tensor 'dense/BiasAdd:0' shape=(32, 16) dtype=float32>
Because some layers (like Dropout) might behave differently depending on the mode (TRAIN, EVAL, PREDICT), an optional argument can be provided:
>>> from deepr.layers import Dropout
>>> tensor = tf.ones([32, 8])
>>> dropout = Dropout(0.5)
>>> dropped = dropout(tensor, tf.estimator.ModeKeys.TRAIN)
>>> not_dropped = dropout(tensor, tf.estimator.ModeKeys.EVAL)
Because in a lot of cases a Layer needs to be applied to a dictionary (yielded by a tf.data.Dataset, for example), you can also do:
>>> tf.reset_default_graph()
>>> tensors = {"x": tf.ones([32, 8])}
>>> dense = Dense(16, inputs="x", outputs="y")
>>> tensors = dense(tensors)
>>> tensors
{'y': <tf.Tensor 'dense/BiasAdd:0' shape=(32, 16) dtype=float32>}
The inputs and outputs are optional (they default to t_0, t_1, etc.), and their order must be consistent with the order of tensors in tuples.
Authors of new layer subclasses typically override one of the two methods of the base Layer class:
def forward(self, tensors, mode: str = None):
    # tensors is either a Tensor (n_in = 1) or a tuple of Tensors
def forward_as_dict(self, tensors: Dict, mode: str = None) -> Dict:
    # tensors is a dictionary whose keys contain self.inputs
The implementation of either of these two methods gives the implementation of the other for free thanks to automatic tuple to dictionary conversion.
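For instance, here is a minimal sketch of a custom subclass (assuming Layer is importable from deepr.layers like Dense and Lambda above): only forward is overridden, yet the dictionary-based call comes for free.
import tensorflow as tf
from deepr.layers import Layer

class AddOne(Layer):
    """Illustrative subclass: only `forward` is overridden."""

    def __init__(self, **kwargs):
        super().__init__(n_in=1, n_out=1, **kwargs)

    def forward(self, tensors, mode=None):
        return tensors + 1

add_one = AddOne(inputs="x", outputs="y")
add_one(tf.constant(1))         # one Tensor in, one Tensor out
add_one({"x": tf.constant(1)})  # {"y": <tf.Tensor>} via the automatic conversion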
The easiest way to define custom layers is to use the layer decorator (see documentation).
Note that layers using parameters (a Dense layer for example) should not create variables at instantiation time, nor store variables or any other graph references as attributes.
>>> tf.reset_default_graph()
>>> dense = Dense(16)  # No parameters are created
>>> dense(tf.ones([32, 8]))  # Parameters are created in the current tf.Graph
<tf.Tensor 'dense/BiasAdd:0' shape=(32, 16) dtype=float32>
In other words, calling the layer should not change its state. This is effectively enforcing functional programming. The state of the layer is only used to parametrize its runtime. This makes it simpler to define graphs with the tf.estimator API.
If you want to define a layer and use it twice (effectively reusing its variables), you need to be explicit and set the reuse=True argument at call time. Behind the scenes, it simply wraps the TF1.X variable management in a variable_scope().
>>> tf.reset_default_graph()
>>> dense = Dense(16)
>>> dense(tf.ones([32, 8]))
<tf.Tensor 'dense/BiasAdd:0' shape=(32, 16) dtype=float32>
>>> dense(tf.ones([32, 8]), reuse=True)
<tf.Tensor 'dense_1/BiasAdd:0' shape=(32, 16) dtype=float32>
While the two operations have different names 'dense/BiasAdd:0' and 'dense_1/BiasAdd:0', they share the same weights.
Good examples of how to implement parametrized layers are deepr.Dense and embedding.Embedding.
- inputs
Names of the n_in inputs keys in a dictionary. Tuple if n_in > 1, else string.
- outputs
Names of the n_out outputs keys in a dictionary. Tuple if n_out > 1, else string
- forward(tensors, mode=None)[source]
Forward method on one Tensor or a tuple of Tensors.
- Parameters:
tensors (Union[tf.Tensor, Tuple[tf.Tensor, ...]]) –
n_in = 1: one tensor (NOT wrapped in a tuple)
n_in > 1: a tuple of tensors
mode (str, optional) – Description
- Returns:
n_out = 1: one tensor (NOT wrapped in a tuple)
n_out > 1: a tuple of tensors
- Return type:
Union[tf.Tensor, Tuple[tf.Tensor, …]]
- forward_as_dict(tensors, mode=None)[source]
Forward method on a dictionary of Tensors.
The input tensors should contain all keys defined in self.inputs (but might contain more keys). It returns a new dictionary (does not mutate the input tensors dictionary in-place), whose keys are exactly self.outputs.
- deepr.layers.base.layer(fn=None, n_in=None, n_out=None, inputs=None, outputs=None)[source]
Decorator that creates a layer constructor from a function.
The decorator returns a subclass of Layer whose forward method is defined by the decorated function. For example:
>>> from deepr.layers import layer
>>> @layer(n_in=1, n_out=1)
... def AddOffset(tensors, mode, offset):
...     return tensors + offset
>>> add = AddOffset(offset=1)
>>> add(1)
2
The class created by the decorator is roughly equivalent to:
class AddOffset(Layer):

    def __init__(self, offset, n_in=1, n_out=1, inputs=None, outputs=None, name=None):
        Layer.__init__(self, n_in=n_in, n_out=n_out, inputs=inputs, outputs=outputs, name=name)
        self.offset = offset

    def forward(self, tensors, mode: str = None):
        return tensors + self.offset
You can also add a 'mode' argument to your layer like so:
>>> @layer(n_in=1, n_out=1)
... def AddOffsetInTrain(tensors, mode, offset):
...     if mode == tf.estimator.ModeKeys.TRAIN:
...         return tensors + offset
...     else:
...         return tensors
>>> add = AddOffsetInTrain(offset=1)
>>> add(1, tf.estimator.ModeKeys.TRAIN)
2
>>> add(1, tf.estimator.ModeKeys.PREDICT)
1
Note that 'tensors' and 'mode' need to be the first arguments of the function, IN THIS ORDER.
deepr.layers.bpr module
BPR Loss Layer
- class deepr.layers.bpr.BPR(**kwargs)[source]
Bases: Layer
Vanilla BPR Loss Layer.
Expected value at the beginning of training: -log(0.5) ≈ 0.69
- forward(tensors, mode=None)[source]
Forward method of the layer (details: https://arxiv.org/pdf/1205.2618.pdf)
- Parameters:
tensors (Tuple[tf.Tensor]) –
positives : shape = (batch, num_events)
negatives : shape = (batch, num_events, num_negatives)
- Returns:
BPR loss
- Return type:
tf.Tensor
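As a hedged illustration of this objective (consistent with the referenced paper and the expected initial value above, not necessarily the exact reduction used by the layer), vanilla BPR averages -log(sigmoid(positive - negative)) over the positive/negative pairs:
import tensorflow as tf

positives = tf.zeros([2, 3])     # shape = (batch, num_events)
negatives = tf.zeros([2, 3, 4])  # shape = (batch, num_events, num_negatives)

diff = tf.expand_dims(positives, axis=-1) - negatives
loss = -tf.reduce_mean(tf.log(tf.sigmoid(diff)))

with tf.Session() as sess:
    print(sess.run(loss))  # ~0.69 = -log(0.5) for zero logits, as stated above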
- class deepr.layers.bpr.MaskedBPR(**kwargs)[source]
Bases: Layer
Masked BPR Loss Layer.
Expected value at the beginning of training: -log(0.5) ≈ 0.69
- forward(tensors, mode=None)[source]
Forward method of the layer
- Parameters:
tensors (Tuple[tf.Tensor]) –
positives : shape = (batch, num_events)
negatives : shape = (batch, num_events, num_negatives)
mask : shape = (batch, num_events, num_negatives)
weights : shape = (batch, num_events)
- Returns:
BPR loss
- Return type:
tf.Tensor
deepr.layers.bpr_max module
BPR Max Loss Layer
- class deepr.layers.bpr_max.BPRMax(bpr_max_regularizer=0.0, **kwargs)[source]
Bases: Layer
Vanilla BPR Max Loss Layer
- class deepr.layers.bpr_max.MaskedBPRMax(bpr_max_regularizer=0.0, **kwargs)[source]
Bases: Layer
Masked BPR Max Loss Layer
- forward(tensors, mode=None)[source]
Forward method of the layer (details: https://arxiv.org/pdf/1706.03847.pdf)
- Parameters:
tensors (Tuple[tf.Tensor]) –
positives : shape = (batch, num_events)
negatives : shape = (batch, num_events, num_negatives)
mask : shape = (batch, num_events, num_negatives)
weights : shape = (batch, num_events)
- Returns:
BPR Max loss
- Return type:
tf.Tensor
deepr.layers.click_rank module
Rank Layer
deepr.layers.combinators module
Combinators layers
- class deepr.layers.combinators.ActiveMode(layer, mode=None, inputs=None, outputs=None)[source]
Bases: Layer
Active Mode Layer.
- class deepr.layers.combinators.DAG(*layers)[source]
Bases: Layer
Class to easily compose layers in a deep learning network.
A Deep Learning Network is a Directed Acyclic Graph (DAG) of layers. The easiest way to define a DAG is by stacking layers on top of each other. For example:
@deepr.layers.layer(n_in=1, n_out=1)
def OffsetLayer(tensors, mode, offset):
    return tensors + offset

layer = deepr.layers.DAG(
    OffsetLayer(offset=1, inputs="x"),
    OffsetLayer(offset=2, outputs="y")
)
layer(1)  # (1 + 1) + 2 = 4
layer({"x": 1})  # {"y": 4}
Because in some cases your model is more complicated (branches, etc.), you can exploit the inputs / outputs naming capability of the base Layer class. For example:
@deepr.layers.layer(n_in=2, n_out=1)
def Add(tensors, mode):
    x, y = tensors
    return x + y

layer = deepr.layers.DAG(
    OffsetLayer(offset=2, inputs="x", outputs="y"),
    OffsetLayer(offset=2, inputs="x", outputs="z"),
    Add(inputs="y, z", outputs="total"),
)
layer(1)  # (1 + 2) + (1 + 2) = 6
layer({"x": 1})  # {"total": 6}
As always, the resulting layer can operate on Tensors or dictionaries of Tensors. The inputs / outputs of the DAG layer correspond to the inputs of the first layer and the outputs of the last layer in the stack (intermediary nodes that are not returned by the last layer will not be returned).
An easy way to define arbitrary inputs / outputs nodes is to use the Select class. For example:
layer = deepr.layers.DAG(
    deepr.layers.Select("x1, x2"),
    OffsetLayer(offset=2, inputs="x1", outputs="y1"),
    OffsetLayer(offset=2, inputs="x2", outputs="y2"),
    Add(inputs="y1, y2", outputs="y3"),
    deepr.layers.Select("y1, y2, y3"),
)
layer((1, 2))  # (3, 4, 7)
layer({"x1": 1, "x2": 2})  # {"y1": 3, "y2": 4, "y3": 7}
Note that default naming still applies, so it won’t raise an error if you try stacking layers with incoherent shapes, as long as the correctly named nodes are defined.
layer = deepr.layers.DAG(
    deepr.layers.Select(n_in=2),  # Defines "t_0" and "t_1" nodes
    OffsetLayer(offset=2),        # Replaces "t_0" <- "t_0" + 2
    Add(),                        # Returns "t_0" + "t_1"
)
result = layer((tf.constant(2), tf.constant(2)))
with tf.Session() as sess:
    assert sess.run(result) == 6
- class deepr.layers.combinators.Parallel(*layers)[source]
Bases: Layer
Apply layers in parallel on consecutive inputs.
If you have 2 layers F(a, b) -> x and G(c) -> (y, z), it defines a layer H(a, b, c) -> (x, y, z). For example:
layer1 = Add(inputs="x1, x2", outputs="y1")
layer2 = OffsetLayer(offset=1, inputs="x3", outputs="y2")
layer = deepr.layers.Parallel(layer1, layer2)
layer((1, 1, 2))  # (2, 3)
layer({"x1": 1, "x2": 1, "x3": 2})  # {"y1": 2, "y2": 3}
- class deepr.layers.combinators.Rename(layer, inputs=None, outputs=None)[source]
Bases: Layer
Wrap Layer in a Node to rename inputs / outputs.
Allows you to rename the inputs / outputs nodes of a Layer instance. This can be useful if you end up with a Layer instance whose inputs and outputs names are not suitable for your needs. For example:
@deepr.layers.layer(n_in=2, n_out=1)
def Add(tensors):
    x, y = tensors
    return x + y

add = Add(inputs="a, b", outputs="c")
layer = deepr.layers.Rename(layer=add, inputs="x, y", outputs="z")
layer((1, 1))  # 2
layer({"x": 1, "y": 1})  # {"z": 2}
Note that the same behavior can be achieved using Select and DAG as follows:
layer = deepr.layers.DAG(
    deepr.layers.Select(inputs=("x", "y"), outputs=("a", "b")),
    Add(inputs=("a", "b"), outputs="c"),
    deepr.layers.Select("c", "z"),
)
- class deepr.layers.combinators.Scope(layer, name_or_scope, **kwargs)[source]
Bases: Layer
Add variable scoping to layer.
- class deepr.layers.combinators.Select(inputs=None, outputs=None, indices=None, n_in=None)[source]
Bases: Layer
Layer to extract inputs / outputs from previous layers
The Select layer is particularly useful when defining arbitrary DAGs of layers: it is a convenient way to select which nodes should be inputs and which should be outputs. For example:
layer = deepr.layers.Select(inputs=("x", "y"), outputs="z", n_in=2, indices=1)
layer((1, 2))  # 2
layer({"x": 1, "y": 2})  # {"z": 2}
See the DAG documentation for more details.
deepr.layers.core module
Core Layers
- class deepr.layers.core.Add[source]
Bases: Layer
Add two tensors of any compatible shapes.
- forward(tensors, mode: Optional[str] = None)
Forward method on one Tensor or a tuple of Tensors.
- Parameters:
tensors (Union[tf.Tensor, Tuple[tf.Tensor, ...]]) –
n_in = 1: one tensor (NOT wrapped in a tuple)
n_in > 1: a tuple of tensors
mode (str, optional) – Description
- Returns:
n_out = 1: one tensor (NOT wrapped in a tuple)
n_out > 1: a tuple of tensors
- Return type:
Union[tf.Tensor, Tuple[tf.Tensor, …]]
- class deepr.layers.core.AddWithWeight(start, end=None, steps=None)[source]
Bases: Layer
Compute loss + beta * KL, decay beta linearly during training.
- forward(tensors, mode: Optional[str] = None)
Forward method on one Tensor or a tuple of Tensors.
- Parameters:
tensors (Union[tf.Tensor, Tuple[tf.Tensor, ...]]) –
n_in = 1: one tensor (NOT wrapped in a tuple)
n_in > 1: a tuple of tensors
mode (str, optional) – Description
- Returns:
n_out = 1: one tensor (NOT wrapped in a tuple)
n_out > 1: a tuple of tensors
- Return type:
Union[tf.Tensor, Tuple[tf.Tensor, …]]
- class deepr.layers.core.Concat(axis=-1)[source]
Bases: Layer
Concatenate tensors on axis
- forward(tensors, mode: Optional[str] = None)
Forward method on one Tensor or a tuple of Tensors.
- Parameters:
tensors (Union[tf.Tensor, Tuple[tf.Tensor, ...]]) –
n_in = 1: one tensor (NOT wrapped in a tuple)
n_in > 1: a tuple of tensors
mode (str, optional) – Description
- Returns:
n_out = 1: one tensor (NOT wrapped in a tuple)
n_out > 1: a tuple of tensors
- Return type:
Union[tf.Tensor, Tuple[tf.Tensor, …]]
- class deepr.layers.core.Conv1d(filters, kernel_size, use_bias=True, activation=None, inputs=None, outputs=None, name=None, **kwargs)[source]
Bases: Layer
Conv1d Layer
- forward(tensors, mode=None)[source]
Forward method on one Tensor or a tuple of Tensors.
- Parameters:
tensors (Union[tf.Tensor, Tuple[tf.Tensor, ...]]) –
n_in = 1: one tensor (NOT wrapped in a tuple)
n_in > 1: a tuple of tensors
mode (str, optional) – Description
- Returns:
n_out = 1: one tensor (NOT wrapped in a tuple)
n_out > 1: a tuple of tensors
- Return type:
Union[tf.Tensor, Tuple[tf.Tensor, …]]
- class deepr.layers.core.Dense(units, inputs=None, outputs=None, name=None, **kwargs)[source]
Bases: Layer
Dense Layer
- forward(tensors, mode=None)[source]
Forward method on one Tensor or a tuple of Tensors.
- Parameters:
tensors (Union[tf.Tensor, Tuple[tf.Tensor, ...]]) –
n_in = 1: one tensor (NOT wrapped in a tuple)
n_in > 1: a tuple of tensors
mode (str, optional) – Description
- Returns:
n_out = 1: one tensor (NOT wrapped in a tuple)
n_out > 1: a tuple of tensors
- Return type:
Union[tf.Tensor, Tuple[tf.Tensor, …]]
- class deepr.layers.core.DenseIndex(units, kernel_name, bias_name=None, activation=None, reuse=None, kernel_reuse=None, bias_reuse=None, trainable=True, kernel_trainable=None, bias_trainable=None, initializer=None, kernel_initializer=None, bias_initializer=None, **kwargs)[source]
Bases: Layer
Dense Index layer.
Given a matrix A and a bias b, a classical dense layer computes d = activation(Ax + b), which is a vector of dimension units.
The DenseIndex layer computes only some entries of that vector. In other words, given
indices : shape = [batch, num_indices]
x : shape = [batch, input_dim]
- then, DenseIndex()(x, indices) returns
h : shape = [batch, num_indices] with h[b, i] = d[b, indices[b, i]]
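The relationship between d and h can be illustrated with a small numpy sketch (purely illustrative; the layer itself computes only the selected entries instead of materializing the full dense output):
import numpy as np

rng = np.random.default_rng(0)
batch, input_dim, units, num_indices = 2, 4, 8, 3
x = rng.normal(size=(batch, input_dim))
A = rng.normal(size=(input_dim, units))  # written here as x @ A rather than Ax
b = np.zeros(units)
indices = rng.integers(0, units, size=(batch, num_indices))

d = x @ A + b                               # full dense output, shape = [batch, units]
h = np.take_along_axis(d, indices, axis=1)  # shape = [batch, num_indices]
# h[i, j] == d[i, indices[i, j]] for every batch element i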
- forward(tensors, mode=None)[source]
Forward method on one Tensor or a tuple of Tensors.
- Parameters:
tensors (Union[tf.Tensor, Tuple[tf.Tensor, ...]]) –
n_in = 1: one tensor (NOT wrapped in a tuple)
n_in > 1: a tuple of tensors
mode (str, optional) – Description
- Returns:
n_out = 1: one tensor (NOT wrapped in a tuple)
n_out > 1: a tuple of tensors
- Return type:
Union[tf.Tensor, Tuple[tf.Tensor, …]]
- class deepr.layers.core.DotProduct(n_in=2, **kwargs)[source]
Bases: Layer
Dot Product on the last dimension of the input vectors.
It will add missing dimensions before the last dimension so that the inputs broadcast. For example, if
t1 : shape = [batch, num_target, 100]
t2 : shape = [batch, 100]
it will return
- t : shape = [batch, num_target], where
t[i, j] = sum_k(t1[i, j, k] * t2[i, k])
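A hedged sketch of this contraction with tf.einsum (an equivalent formulation, not necessarily the layer's implementation):
import tensorflow as tf

t1 = tf.ones([8, 5, 100])  # [batch, num_target, 100]
t2 = tf.ones([8, 100])     # [batch, 100]

# t[i, j] = sum_k t1[i, j, k] * t2[i, k]
t = tf.einsum("bjk,bk->bj", t1, t2)  # shape = [batch, num_target]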
- class deepr.layers.core.ExpandDims(axis=-1)[source]
Bases: Layer
- forward(tensors, mode: Optional[str] = None)
Forward method on one Tensor or a tuple of Tensors.
- Parameters:
tensors (Union[tf.Tensor, Tuple[tf.Tensor, ...]]) –
n_in = 1: one tensor (NOT wrapped in a tuple)
n_in > 1: a tuple of tensors
mode (str, optional) – Description
- Returns:
n_out = 1: one tensor (NOT wrapped in a tuple)
n_out > 1: a tuple of tensors
- Return type:
Union[tf.Tensor, Tuple[tf.Tensor, …]]
- class deepr.layers.core.Identity(inputs=None, name=None)[source]
Bases: Layer
Identity Layer
- forward(tensors, mode=None)[source]
Forward method on one Tensor or a tuple of Tensors.
- Parameters:
tensors (Union[tf.Tensor, Tuple[tf.Tensor, ...]]) –
n_in = 1: one tensor (NOT wrapped in a tuple)
n_in > 1: a tuple of tensors
mode (str, optional) – Description
- Returns:
n_out = 1: one tensor (NOT wrapped in a tuple)
n_out > 1: a tuple of tensors
- Return type:
Union[tf.Tensor, Tuple[tf.Tensor, …]]
- class deepr.layers.core.LogicalAnd[source]
Bases: Layer
Perform logical_and on two tensors of compatible shapes.
- forward(tensors, mode: Optional[str] = None)
Forward method on one Tensor or a tuple of Tensors.
- Parameters:
tensors (Union[tf.Tensor, Tuple[tf.Tensor, ...]]) –
n_in = 1: one tensor (NOT wrapped in a tuple)
n_in > 1: a tuple of tensors
mode (str, optional) – Description
- Returns:
n_out = 1: one tensor (NOT wrapped in a tuple)
n_out > 1: a tuple of tensors
- Return type:
Union[tf.Tensor, Tuple[tf.Tensor, …]]
- class deepr.layers.core.LogicalOr[source]
Bases: Layer
Perform logical_or on two tensors of compatible shapes.
- forward(tensors, mode: Optional[str] = None)
Forward method on one Tensor or a tuple of Tensors.
- Parameters:
tensors (Union[tf.Tensor, Tuple[tf.Tensor, ...]]) –
n_in = 1: one tensor (NOT wrapped in a tuple)
n_in > 1: a tuple of tensors
mode (str, optional) – Description
- Returns:
n_out = 1: one tensor (NOT wrapped in a tuple)
n_out > 1: a tuple of tensors
- Return type:
Union[tf.Tensor, Tuple[tf.Tensor, …]]
- class deepr.layers.core.Normalize(norm=2, axis=None)[source]
Bases: Layer
- forward(tensors, mode: Optional[str] = None)
Forward method on one Tensor or a tuple of Tensors.
- Parameters:
tensors (Union[tf.Tensor, Tuple[tf.Tensor, ...]]) –
n_in = 1: one tensor (NOT wrapped in a tuple)
n_in > 1: a tuple of tensors
mode (str, optional) – Description
- Returns:
n_out = 1: one tensor (NOT wrapped in a tuple)
n_out > 1: a tuple of tensors
- Return type:
Union[tf.Tensor, Tuple[tf.Tensor, …]]
- class deepr.layers.core.Scale(multiplier)[source]
Bases: Layer
Scale tensor by multiplier.
- forward(tensors, mode: Optional[str] = None)
Forward method on one Tensor or a tuple of Tensors.
- Parameters:
tensors (Union[tf.Tensor, Tuple[tf.Tensor, ...]]) –
n_in = 1: one tensor (NOT wrapped in a tuple)
n_in > 1: a tuple of tensors
mode (str, optional) – Description
- Returns:
n_out = 1: one tensor (NOT wrapped in a tuple)
n_out > 1: a tuple of tensors
- Return type:
Union[tf.Tensor, Tuple[tf.Tensor, …]]
- class deepr.layers.core.Softmax(n_in=2, n_out=1, **kwargs)[source]
Bases: Layer
Apply softmax to the last dimension of a tensor, filtering out masked values
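One common way to implement such a masked softmax (a sketch only; the layer's exact filtering may differ) is to push masked logits to a large negative value before applying tf.nn.softmax:
import tensorflow as tf

logits = tf.constant([[1.0, 2.0, 3.0]])
mask = tf.constant([[True, True, False]])

# Masked entries get a very negative logit, so their softmax weight is ~0.
masked_logits = tf.where(mask, logits, tf.fill(tf.shape(logits), -1e9))
weights = tf.nn.softmax(masked_logits, axis=-1)

with tf.Session() as sess:
    print(sess.run(weights))  # last entry ~0, first two sum to ~1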
- class deepr.layers.core.ToFloat[source]
Bases: Layer
Cast tensor to float32
- forward(tensors, mode: Optional[str] = None)
Forward method on one Tensor or a tuple of Tensors.
- Parameters:
tensors (Union[tf.Tensor, Tuple[tf.Tensor, ...]]) –
n_in = 1: one tensor (NOT wrapped in a tuple)
n_in > 1: a tuple of tensors
mode (str, optional) – Description
- Returns:
n_out = 1: one tensor (NOT wrapped in a tuple)
n_out > 1: a tuple of tensors
- Return type:
Union[tf.Tensor, Tuple[tf.Tensor, …]]
deepr.layers.dropout module
Dropout Layers
deepr.layers.embedding module
Partitioned Embedding Layer
- class deepr.layers.embedding.CombineEmbeddings(mode, output_dim, project=True)[source]
Bases: Layer
Combine Embeddings Layers
- forward(tensors, mode: Optional[str] = None)
Forward method on one Tensor or a tuple of Tensors.
- Parameters:
tensors (Union[tf.Tensor, Tuple[tf.Tensor, ...]]) –
n_in = 1: one tensor (NOT wrapped in a tuple)
n_in > 1: a tuple of tensors
mode (str, optional) – Description
- Returns:
n_out = 1: one tensor (NOT wrapped in a tuple)
n_out > 1: a tuple of tensors
- Return type:
Union[tf.Tensor, Tuple[tf.Tensor, …]]
deepr.layers.lookup module
Lookup Utilities and Layer
- class deepr.layers.lookup.Lookup(table_initializer_fn, **kwargs)[source]
Bases: Layer
Lookup Layer.
- table_initializer_fn
Function that creates a table
- Type:
Callable[[], tf.contrib.lookup.HashTable]
- class deepr.layers.lookup.LookupFromFile(table_name, path, key_dtype=None, reuse=False, **kwargs)[source]
Bases: Lookup
Lookup From File Layer.
Creates a table at runtime from a mapping file. The table will map each key to its corresponding line index as a tf.int64.
- key_dtype
Keys type
- Type:
tf.DType
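A hedged usage sketch (assuming LookupFromFile is exported at the package level like the other layers, and a hypothetical vocab.txt with one key per line):
import tensorflow as tf
import deepr

lookup = deepr.layers.LookupFromFile(
    table_name="vocab", path="vocab.txt", inputs="ids", outputs="indices"
)
tensors = lookup({"ids": tf.constant(["a", "c"])})

with tf.Session() as sess:
    sess.run(tf.tables_initializer())
    print(sess.run(tensors["indices"]))  # line indices of "a" and "c" as tf.int64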
- class deepr.layers.lookup.LookupFromMapping(table_name, mapping, default_value=None, key_dtype=None, value_dtype=None, reuse=False, **kwargs)[source]
Bases: Lookup
Lookup From Mapping Layer.
- default_value
Default value for missing keys
- Type:
Any
- key_dtype
Keys type
- Type:
tf.DType
- mapping
Mapping keys -> index
- Type:
Dict[Any, Any]
- value_dtype
Values type
- Type:
tf.DType
- class deepr.layers.lookup.LookupIndexToString(table_name, path=None, vocab_size=None, default_value='UNK', reuse=False, **kwargs)[source]
Bases: Lookup
Lookup Index To String.
Creates a table at runtime from a mapping file. The table maps each line index (tf.int64) to the string on the corresponding line, returning default_value for missing indices.
- default_value
Default Value for missing keys
- Type:
Any
deepr.layers.lstm module
LSTM layers.
- class deepr.layers.lstm.LSTM(num_units, bidirectional=False, **kwargs)[source]
Bases: Layer
LSTM layer.
- forward(tensors, mode: Optional[str] = None)
Forward method on one Tensor or a tuple of Tensors.
- Parameters:
tensors (Union[tf.Tensor, Tuple[tf.Tensor, ...]]) –
n_in = 1: one tensor (NOT wrapped in a tuple)
n_in > 1: a tuple of tensors
mode (str, optional) – Description
- Returns:
n_out = 1: one tensor (NOT wrapped in a tuple)
n_out > 1: a tuple of tensors
- Return type:
Union[tf.Tensor, Tuple[tf.Tensor, …]]
deepr.layers.mask module
Masking Layers
- class deepr.layers.mask.BooleanReduceMode(value)[source]
Bases: Enum
Boolean Reduce Mode
- AND = 'and'
- OR = 'or'
deepr.layers.multi module
Negative Multinomial Log Likelihood.
deepr.layers.multi_css module
Multinomial Log Likelihood with Complementarity Sum Sampling.
- class deepr.layers.multi_css.MultiLogLikelihoodCSS(vocab_size, **kwargs)[source]
Bases: Layer
Multinomial Log Likelihood with Complementarity Sum Sampling.
http://proceedings.mlr.press/v54/botev17a/botev17a.pdf
- forward(tensors, mode=None)[source]
Multinomial Log Likelihood with Complementarity Sum Sampling.
- Parameters:
tensors (Tuple[tf.Tensor, tf.Tensor, tf.Tensor, tf.Tensor]) –
positive_logits: (batch, num_positives)
negative_logits: (batch, num_positives or 1, num_negatives)
positive_mask: same shape as positive logits
negative_mask: same shape as negative logits
- Returns:
Multinomial Log-Likelihood with Complementarity Sampling
- Return type:
tf.Tensor
deepr.layers.nce_loss module
Negative Sampling Loss Layer
- class deepr.layers.nce_loss.MaskedNegativeSampling(**kwargs)[source]
Bases: Layer
Masked Negative Sampling Loss Layer.
Expected value at the beginning of training: -2 * log(0.5) ≈ 1.38
- forward(tensors, mode=None)[source]
Forward method of the layer
- Parameters:
tensors (Tuple[tf.Tensor]) –
positives : shape = (batch, num_events)
negatives : shape = (batch, num_events, num_negatives)
mask : shape = (batch, num_events, num_negatives)
weights : shape = (batch, num_events)
- Returns:
Negative Sampling loss
- Return type:
tf.Tensor
- class deepr.layers.nce_loss.NegativeSampling(**kwargs)[source]
Bases: Layer
Vanilla Negative Sampling Loss Layer.
Expected value at the beginning of training: -2 * log(0.5) ≈ 1.38
- forward(tensors, mode=None)[source]
Forward method of the layer (details: https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf)
- Parameters:
tensors (Tuple[tf.Tensor]) –
positives : shape = (batch, num_events)
negatives : shape = (batch, num_events, num_negatives)
- Returns:
Negative Sampling loss
- Return type:
tf.Tensor
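As a hedged illustration consistent with the referenced paper and the expected initial value above (the layer's exact reduction may differ), the loss combines -log(sigmoid(positive)) with the average of -log(sigmoid(-negative)) over the sampled negatives:
import tensorflow as tf

positives = tf.zeros([2, 3])     # shape = (batch, num_events)
negatives = tf.zeros([2, 3, 4])  # shape = (batch, num_events, num_negatives)

positive_term = -tf.log(tf.sigmoid(positives))
negative_term = -tf.reduce_mean(tf.log(tf.sigmoid(-negatives)), axis=-1)
loss = tf.reduce_mean(positive_term + negative_term)

with tf.Session() as sess:
    print(sess.run(loss))  # ~1.38 = -2 * log(0.5) for zero logits, as stated above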
deepr.layers.reduce module
Reduce Layers
deepr.layers.size module
Size Layers
deepr.layers.slice module
Slicing Layers
deepr.layers.sparse module
Sparse Layers
deepr.layers.string module
String Layers
deepr.layers.top_one module
Top1 Loss Layer
- class deepr.layers.top_one.MaskedTopOne(bpr_max_regularizer=0.0, **kwargs)[source]
Bases: Layer
Masked Top1 Loss Layer
- forward(tensors, mode=None)[source]
Forward method of the layer (details: https://arxiv.org/pdf/1706.03847.pdf)
- Parameters:
tensors (Tuple[tf.Tensor]) –
positives : shape = (batch, num_events)
negatives : shape = (batch, num_events, num_negatives)
mask : shape = (batch, num_events, num_negatives)
weights : shape = (batch, num_events)
- Returns:
Top1 loss
- Return type:
tf.Tensor
deepr.layers.top_one_max module
TopOne Max Loss Layer
- class deepr.layers.top_one_max.MaskedTopOneMax(bpr_max_regularizer=0.0, **kwargs)[source]
Bases: Layer
Masked TopOne Max Loss Layer
- forward(tensors, mode=None)[source]
Forward method of the layer (details: https://arxiv.org/pdf/1706.03847.pdf)
- Parameters:
tensors (Tuple[tf.Tensor]) –
positives : shape = (batch, num_events)
negatives : shape = (batch, num_events, num_negatives)
mask : shape = (batch, num_events, num_negatives)
weights : shape = (batch, num_events)
- Returns:
TopOne Max loss
- Return type:
tf.Tensor
deepr.layers.transformer module
Transformer Model.
- class deepr.layers.transformer.AttentionMask(use_look_ahead_mask)[source]
Bases: Layer
Compute Attention Mask.
- Parameters:
tensors (tf.Tensor) – Shape = [batch_size, sequence_length]
use_look_ahead_mask (bool) – Add look ahead mask if True
- Returns:
Shape = [batch_size, sequence_length, sequence_length]
- Return type:
tf.Tensor
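A hedged sketch of the look-ahead (causal) component of such a mask, built with tf.linalg.band_part (illustrative only, not necessarily the layer's implementation):
import tensorflow as tf

sequence_length = 4
# Lower-triangular matrix: position i may only attend to positions <= i.
look_ahead = tf.linalg.band_part(tf.ones([sequence_length, sequence_length]), -1, 0)

with tf.Session() as sess:
    print(sess.run(look_ahead))
    # [[1. 0. 0. 0.]
    #  [1. 1. 0. 0.]
    #  [1. 1. 1. 0.]
    #  [1. 1. 1. 1.]]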
- forward(tensors, mode: Optional[str] = None)
Forward method on one Tensor or a tuple of Tensors.
- Parameters:
tensors (Union[tf.Tensor, Tuple[tf.Tensor, ...]]) –
n_in = 1: one tensor (NOT wrapped in a tuple)
n_in > 1: a tuple of tensors
mode (str, optional) – Description
- Returns:
n_out = 1: one tensor (NOT wrapped in a tuple)
n_out > 1: a tuple of tensors
- Return type:
Union[tf.Tensor, Tuple[tf.Tensor, …]]
- deepr.layers.transformer.FeedForward(inputs, outputs, units_inner, units_readout, dim, dropout_rate)[source]
FeedForward Layer.
- class deepr.layers.transformer.Normalization(epsilon=1e-08)[source]
Bases: Layer
Normalization Layer.
- forward(tensors, mode: Optional[str] = None)
Forward method on one Tensor or a tuple of Tensors.
- Parameters:
tensors (Union[tf.Tensor, Tuple[tf.Tensor, ...]]) –
n_in = 1: one tensor (NOT wrapped in a tuple)
n_in > 1: a tuple of tensors
mode (str, optional) – Description
- Returns:
n_out = 1: one tensor (NOT wrapped in a tuple)
n_out > 1: a tuple of tensors
- Return type:
Union[tf.Tensor, Tuple[tf.Tensor, …]]
- class deepr.layers.transformer.PositionalEncoding(max_sequence_length=10000, trainable=False)[source]
Bases: Layer
Add Positional Embeddings.
- Parameters:
tensors (tf.Tensor) – Input tensor, [batch_size, sequence_length, emb_dim]
use_positional_encoding (bool) – If True, apply the positional encoding; if False, skip it
max_sequence_length (int) – The input tensor's sequence length is expected not to exceed max_sequence_length
trainable (bool) – Whether the positional encodings are trainable
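For reference, the classic fixed sinusoidal encoding from "Attention Is All You Need" can be sketched as below (an illustration only; when trainable=True the layer may instead learn the position embeddings):
import numpy as np

def sinusoidal_positional_encoding(max_sequence_length, emb_dim):
    """PE[pos, 2i] = sin(pos / 10000^(2i/emb_dim)), PE[pos, 2i+1] = cos(pos / 10000^(2i/emb_dim))."""
    positions = np.arange(max_sequence_length)[:, None]  # [sequence_length, 1]
    dims = np.arange(emb_dim)[None, :]                   # [1, emb_dim]
    angles = positions / np.power(10000.0, (2 * (dims // 2)) / emb_dim)
    encoding = np.zeros((max_sequence_length, emb_dim))
    encoding[:, 0::2] = np.sin(angles[:, 0::2])
    encoding[:, 1::2] = np.cos(angles[:, 1::2])
    return encoding  # added to the input embeddings, shape = [sequence_length, emb_dim]

print(sinusoidal_positional_encoding(10000, 64).shape)  # (10000, 64)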
- forward(tensors, mode: Optional[str] = None)
Forward method on one Tensor or a tuple of Tensors.
- Parameters:
tensors (Union[tf.Tensor, Tuple[tf.Tensor, ...]]) –
n_in = 1: one tensor (NOT wrapped in a tuple)
n_in > 1: a tuple of tensors
mode (str, optional) – Description
- Returns:
n_out = 1: one tensor (NOT wrapped in a tuple)
n_out > 1: a tuple of tensors
- Return type:
Union[tf.Tensor, Tuple[tf.Tensor, …]]
- class deepr.layers.transformer.SelfMultiheadAttention(num_heads, dim_head, residual_connection, **kwargs)[source]
Bases: Layer
Self MultiHead Attention Layer.
- forward(tensors, mode=None)[source]
Compute MultiHead Attention.
- Parameters:
tensors (Tuple[tf.Tensor, tf.Tensor]) –
x : shape = [batch_size, sequence_length, dim]
mask : shape = [batch_size, sequence_length, sequence_length]
- Returns:
[batch_size, sequence_length, dim]
- Return type:
tf.Tensor
- scaled_dot_attention(query, key, value, mask=None)[source]
Compute Scaled Dot Attention.
- Parameters:
query (tf.Tensor) – Shape = [batch, num_heads, sequence_length, dim_head]
key (tf.Tensor) – Shape = [batch, num_heads, sequence_length, dim_head]
value (tf.Tensor) – Shape = [batch, num_heads, sequence_length, dim_head]
mask (tf.Tensor, optional) – Shape = [batch, sequence_length, sequence_length]
- Returns:
Shape = [batch, num_heads, sequence_length, dim_head]
- Return type:
tf.Tensor
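Scaled dot attention is the standard softmax(Q K^T / sqrt(dim_head)) V; a hedged per-head sketch (mask handling and head merging omitted, and the layer's exact masking may differ):
import tensorflow as tf

batch, num_heads, sequence_length, dim_head = 2, 4, 5, 16
query = tf.ones([batch, num_heads, sequence_length, dim_head])
key = tf.ones([batch, num_heads, sequence_length, dim_head])
value = tf.ones([batch, num_heads, sequence_length, dim_head])

scores = tf.matmul(query, key, transpose_b=True)    # [batch, num_heads, seq, seq]
scores = scores / tf.sqrt(tf.cast(dim_head, tf.float32))
weights = tf.nn.softmax(scores, axis=-1)
attention = tf.matmul(weights, value)               # [batch, num_heads, seq, dim_head]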
- deepr.layers.transformer.Transformer(dim, num_heads=4, encoding_blocks=2, dim_head=128, residual_connection=True, use_layer_normalization=True, event_dropout_rate=0.0, use_feedforward=True, ff_dropout_rate=0.0, ff_normalization=False, scale=False, use_positional_encoding=True, trainable_positional_encoding=True, use_look_ahead_mask=True, inputs=('inputEmbeddings', 'inputMask'), outputs='userEmbeddings')[source]
Transformer Model.
- Return type:
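A hedged usage sketch (assuming Transformer is exported at the package level, and that inputMask is a boolean padding mask of shape [batch_size, sequence_length]; dtypes and the output shape are assumptions, not taken from the signature above):
import tensorflow as tf
import deepr

transformer = deepr.layers.Transformer(dim=64, num_heads=4, encoding_blocks=2)
tensors = {
    "inputEmbeddings": tf.ones([8, 20, 64]),       # [batch_size, sequence_length, dim]
    "inputMask": tf.ones([8, 20], dtype=tf.bool),  # [batch_size, sequence_length]
}
user_embeddings = transformer(tensors)["userEmbeddings"]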
deepr.layers.triplet_precision module
Triplet Precision Layer.
- class deepr.layers.triplet_precision.TripletPrecision(**kwargs)[source]
Bases: Layer
Triplet Precision Layer.
- forward(tensors, mode=None)[source]
Computes Triplet Precision
- Parameters:
tensors (Tuple[tf.Tensor]) –
positives : shape = (batch, num_events)
negatives : shape = (batch, num_events, num_negatives)
mask : shape = (batch, num_events, num_negatives)
weights : shape = (batch, num_events)
- Returns:
Triplet precision
- Return type:
tf.Tensor