Layer API

We define some basic layers that mimic the workflows already present in PyTorch. These layers operate on LatticeTensors and follow their torch.nn counterparts as closely as possible.

Utility Layers

ncdl.nn.LatticeWrap(*args, **kwargs)

ncdl.nn.LatticeUnwrap(*args, **kwargs)

ncdl.nn.LatticePad(*args, **kwargs)
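
These utility layers are not documented above, so the following is only a sketch of their presumed roles: LatticeWrap appears to lift a plain torch.Tensor into a LatticeTensor, and LatticeUnwrap appears to reverse it. The zero-argument constructors and the lossless round trip are assumptions here; verify them against the ncdl source.

    import torch
    import ncdl.nn as ncnn

    wrap = ncnn.LatticeWrap()      # torch.Tensor -> LatticeTensor (assumed)
    unwrap = ncnn.LatticeUnwrap()  # LatticeTensor -> torch.Tensor (assumed)

    x = torch.randn(1, 3, 32, 32)  # a standard NCHW batch
    lt = wrap(x)                   # LatticeTensor on the Cartesian lattice
    y = unwrap(lt)                 # back to a plain torch.Tensor
    assert torch.equal(x, y)       # assumed lossless round trip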

Convolution Layer

ncdl.nn.LatticeConvolution(*args, **kwargs)

Implements the "convolution" (technically, cross-correlation) operation for LatticeTensors. This interface is meant to be as similar as possible to nn.Conv2d; however, dilation and strided convolution are not supported. Both are possible in principle, but the implementation is intricate (the downsampling and convolution operations would need to be fused). For now, apply ncdl.nn.functional.downsample as a separate step, as in the sketch below.
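
A hedged sketch of the intended workflow, by analogy with nn.Conv2d. The Lattice factory string, calling a Lattice with one tensor per coset, and the LatticeConvolution argument order are assumptions, as is the calling convention of downsample; check all of them against the package.

    import torch
    import ncdl
    from ncdl.nn import LatticeConvolution
    import ncdl.nn.functional as ncf

    qc = ncdl.Lattice("qc")  # quincunx lattice, two cosets (name assumed)

    # Build a LatticeTensor from one tensor per coset (assumed interface).
    lt = qc(torch.randn(1, 3, 16, 16), torch.randn(1, 3, 16, 16))

    conv = LatticeConvolution(qc, 3, 16, kernel_size=3)  # assumed signature
    out = conv(lt)

    # No strided convolution: downsample explicitly instead, e.g. with an
    # integer subsampling matrix (assumed calling convention).
    out = ncf.downsample(out, torch.IntTensor([[1, 1], [1, -1]]))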

Pooling Layers

ncdl.nn.LatticeMaxPooling(*args, **kwargs)

Resampling Layers

ncdl.nn.LatticeDownsample(*args, **kwargs)

ncdl.nn.LatticeUpsample(*args, **kwargs)
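
These layers are undocumented above; a plausible reading, stated here purely as an assumption, is that each is bound to an input lattice and an integer subsampling matrix, mirroring ncdl.nn.functional.downsample from the convolution note.

    import torch
    import ncdl
    from ncdl.nn import LatticeDownsample, LatticeUpsample

    qc = ncdl.Lattice("qc")                 # assumed factory, as above
    s = torch.IntTensor([[1, 1], [1, -1]])  # quincunx subsampling matrix

    down = LatticeDownsample(qc, s)  # assumed: lattice + integer matrix
    up = LatticeUpsample(qc, s)      # assumed: same convention, other direction

    lt = qc(torch.randn(1, 3, 16, 16), torch.randn(1, 3, 16, 16))
    coarser = down(lt)
    finer = up(lt)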

Activation Layers

ncdl.nn.ReLU(*args, **kwargs)

Applies the rectified linear unit function element-wise: \(\text{ReLU}(x) = \max(0, x)\).
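
As with all the activation layers below, the presumed behaviour is that of the torch.nn counterpart, applied element-wise to every coset of a LatticeTensor; a sketch under the same assumed Lattice interface as above:

    import torch
    import ncdl
    import ncdl.nn as ncnn

    qc = ncdl.Lattice("qc")  # assumed factory name
    lt = qc(torch.randn(1, 3, 16, 16), torch.randn(1, 3, 16, 16))

    act = ncnn.ReLU()
    out = act(lt)  # ReLU applied element-wise on every coset (assumption)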

ncdl.nn.ReLU6(*args, **kwargs)

Applies the element-wise function \(\text{ReLU6}(x) = \min(\max(0, x), 6)\).

ncdl.nn.RReLU(*args, **kwargs)

Applies the randomized leaky rectified linear unit function element-wise, as described in the paper Empirical Evaluation of Rectified Activations in Convolutional Network: \(\text{RReLU}(x) = x\) for \(x \ge 0\) and \(a x\) otherwise, where \(a\) is sampled uniformly from \([\text{lower}, \text{upper}]\) during training.

ncdl.nn.PReLU(*args, **kwargs)

Applies the element-wise function \(\text{PReLU}(x) = \max(0, x) + a \cdot \min(0, x)\), where \(a\) is a learnable parameter.

ncdl.nn.LeakyReLU(*args, **kwargs)

Applies the element-wise function \(\text{LeakyReLU}(x) = \max(0, x) + \text{negative\_slope} \cdot \min(0, x)\).

ncdl.nn.Hardtanh(*args, **kwargs)

Applies the HardTanh function element-wise.

ncdl.nn.Tanh(*args, **kwargs)

Applies the Hyperbolic Tangent (Tanh) function element-wise.

ncdl.nn.Tanhshrink(*args, **kwargs)

Applies the element-wise function \(\text{Tanhshrink}(x) = x - \tanh(x)\).

ncdl.nn.Threshold(*args, **kwargs)

Thresholds each element of the input LatticeTensor.

ncdl.nn.Softmax(*args, **kwargs)

Applies the Softmax function to an n-dimensional input Tensor, rescaling it so that the elements of the n-dimensional output Tensor lie in the range [0, 1] and sum to 1.

ncdl.nn.Softmin(*args, **kwargs)

Applies the Softmin function to an n-dimensional input Tensor, rescaling it so that the elements of the n-dimensional output Tensor lie in the range [0, 1] and sum to 1.

ncdl.nn.Softsign(*args, **kwargs)

Applies the element-wise function \(\text{SoftSign}(x) = \frac{x}{1 + |x|}\).

ncdl.nn.Softplus(*args, **kwargs)

Applies the Softplus function \(\text{Softplus}(x) = \frac{1}{\beta} \log(1 + \exp(\beta x))\) element-wise.

ncdl.nn.SELU(*args, **kwargs)

Applies the SELU function element-wise: \(\text{SELU}(x) = \text{scale} \cdot (\max(0, x) + \min(0, \alpha \cdot (\exp(x) - 1)))\), with \(\alpha \approx 1.6733\) and \(\text{scale} \approx 1.0507\).

ncdl.nn.Sigmoid(*args, **kwargs)

Applies the element-wise function \(\text{Sigmoid}(x) = \frac{1}{1 + \exp(-x)}\).

ncdl.nn.SiLU(*args, **kwargs)

Applies the Sigmoid Linear Unit (SiLU) function, element-wise.

ncdl.nn.Softshrink(*args, **kwargs)

Applies the soft shrinkage function element-wise: \(\text{SoftShrink}(x) = x - \lambda\) if \(x > \lambda\), \(x + \lambda\) if \(x < -\lambda\), and \(0\) otherwise.

ncdl.nn.Mish(*args, **kwargs)

Applies the Mish function, element-wise.

ncdl.nn.Hardswish(*args, **kwargs)

Applies the Hardswish function, element-wise, as described in the paper: Searching for MobileNetV3.

ncdl.nn.Hardshrink(*args, **kwargs)

Applies the Hard Shrinkage (Hardshrink) function element-wise.

ncdl.nn.Hardsigmoid(*args, **kwargs)

Applies the Hardsigmoid function element-wise.

Normalization Layers

ncdl.nn.LatticeBatchNorm(*args, **kwargs)

ncdl.nn.LatticeGroupNorm(*args, **kwargs)

Applies Group Normalization over a mini-batch of inputs, as described in the paper Group Normalization:

\(y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} \cdot \gamma + \beta\)

The input channels are separated into num_groups groups, each containing num_channels / num_groups channels; num_channels must be divisible by num_groups. The mean and standard deviation are calculated separately over each group. \(\gamma\) and \(\beta\) are learnable per-channel affine transform parameter vectors of size num_channels if affine is True. The standard deviation is calculated via the biased estimator, equivalent to torch.var(input, unbiased=False). This layer uses statistics computed from the input data in both training and evaluation modes.

Parameters:
- num_groups (int): number of groups to separate the channels into
- num_channels (int): number of channels expected in the input
- eps (float): value added to the denominator for numerical stability. Default: 1e-5
- affine (bool): when True, this module has learnable per-channel affine parameters initialized to ones (for weights) and zeros (for biases). Default: True
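
A sketch, assuming the constructor mirrors torch.nn.GroupNorm and that normalization statistics are pooled across all cosets of the LatticeTensor; both points are assumptions to verify against the source.

    import torch
    import ncdl
    from ncdl.nn import LatticeGroupNorm

    gn = LatticeGroupNorm(num_groups=4, num_channels=16)  # 16 % 4 == 0

    qc = ncdl.Lattice("qc")  # assumed factory, as above
    lt = qc(torch.randn(8, 16, 32, 32), torch.randn(8, 16, 32, 32))
    out = gn(lt)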

ncdl.nn.LatticeInstanceNorm(*args, **kwargs)