torchdyn.nn

Layers and utilities for torchdyn models

Module contents

class torchdyn.nn.Augmenter(augment_idx=1, augment_dims=5, augment_func=None, order='first')[source]

Augmentation class. Can handle several types of augmentation strategies for Neural DEs.

Parameters
  • augment_dims (int) – number of augmented dimensions to initialize

  • augment_idx (int) – index of dimension to augment

  • augment_func (nn.Module) – nn.Module applied to the input data of dimension d to determine the augmented initial condition of dimension d + a. a is defined implicitly by augment_func, e.g. augment_func=nn.Linear(2, 5) augments a 2-dimensional input with 3 additional dimensions.

  • order (str) – whether to place the augmentation before the data [augmentation, x] or after it [x, augmentation] along dimension augment_idx. Options: ('first', 'last')

Initializes internal Module state, shared by both nn.Module and ScriptModule.
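
A minimal usage sketch, assuming the default zero-augmentation (no augment_func): the module concatenates augment_dims zero channels to the input along dimension augment_idx.

    import torch
    from torchdyn.nn import Augmenter

    # zero-augmentation: prepend 3 zero channels along dim 1 (order='first')
    aug = Augmenter(augment_idx=1, augment_dims=3)
    x = torch.randn(128, 2)      # batch of 2-dimensional states
    x_aug = aug(x)               # expected shape: torch.Size([128, 5])

In an augmented Neural ODE this module is typically placed before the NeuralODE block, so the solver integrates the enlarged state.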

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class torchdyn.nn.Chebychev(deg, adaptive=False)[source]

Eigenbasis expansion using Chebyshev polynomials.

Parameters
  • deg (int) – degree of the eigenbasis expansion

  • adaptive (bool) – does nothing (for now)

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(n_range, s)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class torchdyn.nn.DataControl[source]

Data-control module. Allows for data-control inputs at arbitrary points of the DEFunc.

Initializes internal Module state, shared by both nn.Module and ScriptModule.
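
A minimal sketch of the common usage, assuming the standard torchdyn pattern in which the vector field containing DataControl is wrapped by the NeuralODE wrapper (imported here from torchdyn.core, an assumed path) so that the wrapper forwards the input data to the module; concatenating data and state doubles the input width of the first linear layer for a 2-dimensional problem.

    import torch.nn as nn
    from torchdyn.core import NeuralODE   # assumed import path for the NeuralODE wrapper
    from torchdyn.nn import DataControl

    # data-controlled vector field for 2-d states: DataControl concatenates the
    # (wrapper-provided) input data to the current state, so the first Linear
    # receives 2 + 2 = 4 features
    f = nn.Sequential(DataControl(),
                      nn.Linear(4, 64),
                      nn.Tanh(),
                      nn.Linear(64, 2))
    model = NeuralODE(f)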

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class torchdyn.nn.DepthCat(idx_cat=1)[source]

Depth-variable concatenation module. Allows for easy concatenation of the depth variable t at each call of the numerical solver, at specified layers of the DEFunc.

Parameters

idx_cat (int) – index of the data dimension along which to concatenate t.

Initializes internal Module state, shared by both nn.Module and ScriptModule.
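
A minimal sketch: inside a vector field, DepthCat appends the scalar depth t (set by the surrounding DEFunc during integration) as one extra feature, so the next layer takes state_dim + 1 inputs. The hidden width below is illustrative.

    import torch.nn as nn
    from torchdyn.nn import DepthCat

    # depth-variant vector field for 2-d states: DepthCat appends t along dim 1,
    # so the first Linear takes 2 + 1 = 3 input features
    f = nn.Sequential(DepthCat(idx_cat=1),
                      nn.Linear(3, 64),
                      nn.Tanh(),
                      nn.Linear(64, 2))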

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class torchdyn.nn.Fourier(deg, adaptive=False)[source]

Eigenbasis expansion using Fourier functions.

Parameters
  • deg (int) – degree of the eigenbasis expansion

  • adaptive (bool) – does nothing (for now)

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(n_range, s)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class torchdyn.nn.GalConv2d(in_channels, out_channels, kernel_size=3, stride=1, padding=0, bias=True, expfunc=Fourier(), dilation=True, shift=True)[source]

2D convolutional Galerkin layer for depth-variant neural differential equations. Introduced in https://arxiv.org/abs/2002.08071

Parameters
  • in_channels (int) – number of channels in the input image

  • out_channels (int) – number of channels produced by the convolution

  • kernel_size (int) – size of the convolving kernel

  • stride (int) – stride of the convolution. Default: 1

  • padding (int) – zero-padding added to both sides of the input. Default: 0

  • bias (bool) – include bias parameter vector in the layer computation

  • expfunc (nn.Module) – eigenfunction expansion used for the depth-dependent weights, one of Fourier, Polynomial, Chebychev, VanillaRBF, MultiquadRBF, GaussianRBF. Default: Fourier()

  • dilation (bool) – whether to optimize the dilation parameter, allowing the GalLayer to dilate the eigenfunction period

  • shift (bool) – whether to optimize the shift parameter, allowing the GalLayer to shift the eigenfunction period

Initializes internal Module state, shared by both nn.Module and ScriptModule.
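
A minimal construction sketch showing the layer's main arguments; the channel sizes and the Fourier(5) expansion below are illustrative assumptions.

    from torchdyn.nn import GalConv2d, Fourier

    # Galerkin 2D convolution: kernel weights are evaluated from a Fourier
    # eigenbasis expansion in the depth variable, with trainable dilation/shift
    conv = GalConv2d(in_channels=3, out_channels=8, kernel_size=3, padding=1,
                     expfunc=Fourier(5), dilation=True, shift=True)

Inside a Neural DE vector field, the layer is typically preceded by DepthCat so the depth variable reaches it; see the GalLinear sketch below.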

forward(input)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class torchdyn.nn.GalLinear(in_features, out_features, bias=True, expfunc=Fourier(), dilation=True, shift=True)[source]

Linear Galerkin layer for depth-variant neural differential equations. Introduced in https://arxiv.org/abs/2002.08071

Parameters
  • in_features (int) – input dimensions

  • out_features (int) – output dimensions

  • bias (bool) – include bias parameter vector in the layer computation

  • expfunc (nn.Module) – eigenfunction expansion used for the depth-dependent weights, one of Fourier, Polynomial, Chebychev, VanillaRBF, MultiquadRBF, GaussianRBF. Default: Fourier()

  • dilation (bool) – whether to optimize the dilation parameter, allowing the GalLayer to dilate the eigenfunction period

  • shift (bool) – whether to optimize the shift parameter, allowing the GalLayer to shift the eigenfunction period

Initializes internal Module state, shared by both nn.Module and ScriptModule.
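
A minimal sketch of a depth-variant vector field, assuming the usual torchdyn pairing of DepthCat with Galerkin layers so the depth variable is appended to the features each layer reads; the hidden width and Fourier degree are illustrative.

    import torch.nn as nn
    from torchdyn.nn import DepthCat, GalLinear, Fourier

    # 2-d depth-variant vector field: each GalLinear evaluates its weights from
    # a Fourier eigenbasis expansion in the depth variable appended by DepthCat
    f = nn.Sequential(DepthCat(1),
                      GalLinear(2, 64, expfunc=Fourier(5)),
                      nn.Tanh(),
                      DepthCat(1),
                      GalLinear(64, 2, expfunc=Fourier(5)))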

forward(input)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class torchdyn.nn.GaussianRBF(deg, adaptive=False, eps_scales=2, centers=0)[source]

Eigenbasis expansion using Gaussian radial basis functions, $\phi(r) = e^{-(\varepsilon r)^2}$ with $r := \lVert x - x_0 \rVert_2$.

Parameters
  • deg (int) – degree of the eigenbasis expansion

  • adaptive (bool) – whether to adjust centers and eps_scales during training

  • eps_scales (int) – scaling $\varepsilon$ in the RBF formula

  • centers (int) – centers of the radial basis functions (one per degree); the same center $x_0$ is used across all degrees

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(n_range, s)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
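
A sketch of swapping the eigenbasis: any of the expansion modules on this page can be passed as expfunc to a Galerkin layer (an assumption consistent with the expfunc parameter above); here a Gaussian RBF basis with trainable centers and scales.

    import torch.nn as nn
    from torchdyn.nn import DepthCat, GalLinear, GaussianRBF

    # Gaussian RBF eigenbasis with trainable centers and eps_scales (adaptive=True)
    rbf = GaussianRBF(deg=8, adaptive=True, eps_scales=2, centers=0)
    layer = nn.Sequential(DepthCat(1), GalLinear(2, 32, expfunc=rbf))
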
class torchdyn.nn.MultiquadRBF(deg, adaptive=False, eps_scales=2, centers=0)[source]

Eigenbasis expansion using multiquadratic radial basis functions.

Parameters
  • deg (int) – degree of the eigenbasis expansion

  • adaptive (bool) – whether to adjust centers and eps_scales during training

  • eps_scales (int) – scaling $\varepsilon$ in the RBF formula

  • centers (int) – centers of the radial basis functions (one per degree); the same center $x_0$ is used across all degrees

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(n_range, s)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class torchdyn.nn.Polynomial(deg, adaptive=False)[source]

Eigenbasis expansion using polynomials.

Parameters
  • deg (int) – degree of the eigenbasis expansion

  • adaptive (bool) – does nothing (for now)

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(n_range, s)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class torchdyn.nn.VanillaRBF(deg, adaptive=False, eps_scales=2, centers=0)[source]

Eigenbasis expansion using vanilla radial basis functions.

Parameters
  • deg (int) – degree of the eigenbasis expansion

  • adaptive (bool) – whether to adjust centers and eps_scales during training

  • eps_scales (int) – scaling $\varepsilon$ in the RBF formula

  • centers (int) – centers of the radial basis functions (one per degree); the same center $x_0$ is used across all degrees

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(n_range, s)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool