ncdl.nn.LatticeGroupNorm
- class ncdl.nn.LatticeGroupNorm(*args: Any, **kwargs: Any)
Applies Group Normalization over a mini-batch of inputs as described in the paper Group Normalization:

\[ y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta \]
The input channels are separated into num_groups groups, each containing num_channels / num_groups channels. num_channels must be divisible by num_groups. The mean and standard-deviation are calculated separately over each group. \(\gamma\) and \(\beta\) are learnable per-channel affine transform parameter vectors of size num_channels if affine is True. The standard-deviation is calculated via the biased estimator, equivalent to torch.var(input, unbiased=False).

This layer uses statistics computed from input data in both training and evaluation modes.

:param lattice: the lattice on which the input is defined
:type lattice: Lattice
:param num_groups: number of groups to separate the channels into
:type num_groups: int
:param num_channels: number of channels expected in input
:type num_channels: int
:param eps: a value added to the denominator for numerical stability. Default: 1e-5
:param affine: a boolean value that, when set to True, gives this module learnable per-channel affine parameters initialized to ones (for weights) and zeros (for biases). Default: True.
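Since the biased variance estimator is used, the output can be reproduced by hand. A minimal sketch against the standard Cartesian torch.nn.GroupNorm (for illustration only; not part of this API): each group of num_channels / num_groups channels is normalized together with all spatial locations::

>>> import torch
>>> import torch.nn as nn
>>> x = torch.randn(4, 6, 8, 8)
>>> m = nn.GroupNorm(3, 6, affine=False)
>>> # Flatten each group (C/G channels plus all spatial positions) and normalize
>>> xg = x.reshape(4, 3, -1)
>>> mean = xg.mean(dim=-1, keepdim=True)
>>> var = xg.var(dim=-1, unbiased=False, keepdim=True)  # biased estimator
>>> y = ((xg - mean) / torch.sqrt(var + m.eps)).reshape_as(x)
>>> torch.allclose(m(x), y, atol=1e-5)
True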
.- Shape:
Input: \((N, C, *)\) where \(C=\text{num\_channels}\)
Output: \((N, C, *)\) (same shape as input)
- Examples::

>>> input = torch.randn(20, 6, 10, 10)
>>> # Separate 6 channels into 3 groups
>>> m = nn.GroupNorm(3, 6)
>>> # Separate 6 channels into 6 groups (equivalent to InstanceNorm)
>>> m = nn.GroupNorm(6, 6)
>>> # Put all 6 channels into a single group (equivalent to LayerNorm)
>>> m = nn.GroupNorm(1, 6)
>>> # Activating the module
>>> output = m(input)
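For lattice tensor inputs, construction is analogous but takes the lattice as the first argument. A minimal sketch, assuming Lattice("qc") constructs a quincunx lattice and that calling a lattice with per-coset tensors builds a LatticeTensor (check both helpers against your ncdl version)::

>>> import torch
>>> from ncdl import Lattice
>>> from ncdl.nn import LatticeGroupNorm
>>> # Assumption: Lattice("qc") builds a quincunx lattice with two cosets,
>>> # and calling it with per-coset tensors yields a LatticeTensor
>>> qc = Lattice("qc")
>>> lt = qc(torch.randn(2, 6, 10, 10), torch.randn(2, 6, 10, 10))
>>> # Separate the 6 channels into 3 groups over the lattice
>>> gn = LatticeGroupNorm(qc, num_groups=3, num_channels=6)
>>> output = gn(lt)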
- __init__(lattice: Lattice, num_groups: int, num_channels: int, eps: float = 1e-05, affine: bool = True, device=None, dtype=None) → None
Methods

- __init__(lattice, num_groups, num_channels)
- extra_repr()
- forward(input)
- reset_parameters()

Attributes

- num_groups
- num_channels
- eps
- affine