Shortcuts

torchdyn.core

Heart of torchdyn. NeuralODE, ODEProblem and MultipleShootingLayer, as well as their utilities, are defined here.

Module contents

class torchdyn.core.DEFunc(vector_field, order=1)[source]

Special vector field wrapper for Neural ODEs.

Handles auxiliary tasks: time (“depth”) concatenation, higher-order dynamics and forward propagated integral losses.

Parameters
  • vector_field (Callable) – callable defining the dynamics / vector field / dxdt / forcing function

  • order (int, optional) – order of the differential equation. Defaults to 1.

Notes

Currently handles the following:

  1. assigns the time tensor to each submodule requiring it (e.g. GalLinear);

  2. in case of integral losses with reverse-mode differentiation, propagates the loss in the first dimension of x and automatically splits the Tensor into x[:, 0] and x[:, 1:] for vector field computation;

  3. in case of higher-order dynamics, adjusts the vector field forward to recursively compute the various orders.

forward(t, x, args={})[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

Return type

Tensor

class torchdyn.core.MultipleShootingLayer(vector_field, solver, sensitivity='autograd', maxiter=4, fine_steps=4, solver_adjoint=None, atol_adjoint=1e-06, rtol_adjoint=1e-06, seminorm=False, integral_loss=None)[source]

Multiple Shooting Layer as defined in https://arxiv.org/abs/2106.03885.

Uses parallel-in-time ODE solvers to solve an ODE parametrized by neural network vector_field.

Parameters
  • vector_field (Callable) – the vector field, called as vector_field(t, x); a Callable accepting only x is automatically wrapped for consistency

  • solver (Union[str, nn.Module]) – parallel-in-time solver, [‘zero’, ‘direct’]

  • sensitivity (str, optional) – Sensitivity method [‘autograd’, ‘adjoint’, ‘interpolated_adjoint’]. Defaults to ‘autograd’.

  • maxiter (int) – number of iterations of the root finding routine defined to parallel solve the ODE.

  • fine_steps (int) – number of fine-solver steps to perform in each subinterval of the parallel solution.

  • solver_adjoint (Union[str, nn.Module, None], optional) – Standard sequential ODE solver for the adjoint system.

  • atol_adjoint (float, optional) – Defaults to 1e-6.

  • rtol_adjoint (float, optional) – Defaults to 1e-6.

  • integral_loss (Union[Callable, None], optional) – Currently not implemented

  • seminorm (bool, optional) – Whether to use seminorms for adaptive stepping in backsolve adjoints. Defaults to False.

Notes

The number of shooting parameters (first dimension in B0) is implicitly defined by passing t_span during forward calls. For example, a t_span=torch.linspace(0, 1, 10) will define 9 intervals and 10 shooting parameters.

For the moment only a thin wrapper around MultipleShootingProblem. At this level will be convenience routines for special initializations of shooting parameters B0, as well as usual convenience checks for integral losses.

class torchdyn.core.MultipleShootingProblem(vector_field, solver, sensitivity='autograd', maxiter=4, fine_steps=4, solver_adjoint=None, atol_adjoint=1e-06, rtol_adjoint=1e-06, seminorm=False, integral_loss=None)[source]

An ODE problem solved with parallel-in-time methods.

Parameters
  • vector_field (Callable) – the vector field, called as vector_field(t, x); a Callable accepting only x is automatically wrapped for consistency

  • solver (str) – parallel-in-time solver, [‘zero’, ‘direct’].

  • sensitivity (str, optional) – Sensitivity method [‘autograd’, ‘adjoint’, ‘interpolated_adjoint’]. Defaults to ‘autograd’.

  • solver_adjoint (Union[str, nn.Module, None], optional) – Standard sequential ODE solver for the adjoint system. Defaults to None.

  • atol_adjoint (float, optional) – Absolute tolerance of the adjoint solver. Defaults to 1e-6.

  • rtol_adjoint (float, optional) – Relative tolerance of the adjoint solver. Defaults to 1e-6.

  • seminorm (bool, optional) – Whether to use seminorms for adaptive stepping in backsolve adjoints. Defaults to False.

  • integral_loss (Union[Callable, None], optional) – Currently not implemented. Defaults to None.

_autograd_func()[source]

Creates the autograd functions for the backward pass.

forward(x, t_span, B0=None)[source]

For safety, redirects to the intended method odeint.

odeint(x, t_span, B0=None)[source]

Returns the tuple (t_eval, solution).

torchdyn.core.NeuralDE

alias of NeuralODE

class torchdyn.core.NeuralODE(vector_field, solver='tsit5', order=1, atol=0.001, rtol=0.001, sensitivity='autograd', solver_adjoint=None, atol_adjoint=0.0001, rtol_adjoint=0.0001, interpolator=None, integral_loss=None, seminorm=False, return_t_eval=True, optimizable_params=())[source]

Generic Neural Ordinary Differential Equation.

Parameters
  • vector_field (Callable) – the vector field, called as vector_field(t, x); a Callable accepting only x is automatically wrapped for consistency

  • solver (Union[str, nn.Module]) – ODE solver, given as a string name or as an nn.Module.

  • order (int, optional) – Order of the ODE. Defaults to 1.

  • atol (float, optional) – Absolute tolerance of the solver. Defaults to 1e-3.

  • rtol (float, optional) – Relative tolerance of the solver. Defaults to 1e-3.

  • sensitivity (str, optional) – Sensitivity method [‘autograd’, ‘adjoint’, ‘interpolated_adjoint’]. Defaults to ‘autograd’.

  • solver_adjoint (Union[str, nn.Module, None], optional) – ODE solver for the adjoint. Defaults to None.

  • atol_adjoint (float, optional) – Absolute tolerance of the adjoint solver. Defaults to 1e-4.

  • rtol_adjoint (float, optional) – Relative tolerance of the adjoint solver. Defaults to 1e-4.

  • integral_loss (Union[Callable, None], optional) – Integral loss to optimize for. Defaults to None.

  • seminorm (bool, optional) – Whether to use seminorms for adaptive stepping in backsolve adjoints. Defaults to False.

  • return_t_eval (bool) – Whether to return (t_eval, sol) or only sol. Useful for chaining NeuralODEs in nn.Sequential.

  • optimizable_params (Union[Iterable, Generator]) – parameters to calculate sensitivities for. Defaults to ().

Notes

In torchdyn-style, forward calls to a Neural ODE return both a tensor t_eval of time points at which the solution is evaluated as well as the solution itself. This behavior can be controlled by setting return_t_eval to False. Calling trajectory also returns the solution only.

The Neural ODE class automates certain delicate steps that must be carried out depending on the solver and model used; the _prep_integration method performs these steps. Neural ODEs wrap ODEProblem.

_prep_integration(x, t_span)[source]

Performs generic checks before integration; assigns data control inputs and augments the state for CNFs.

Return type

Tensor

forward(x, t_span=None, save_at=(), args={})[source]

For safety, redirects to the intended method odeint.

class torchdyn.core.NeuralSDE(drift_func, diffusion_func, noise_type='diagonal', sde_type='ito', order=1, sensitivity='autograd', s_span=tensor([0., 1.]), solver='srk', atol=0.0001, rtol=0.0001, ds=0.001, intloss=None)[source]

Generic Neural Stochastic Differential Equation. Follows the same design as the NeuralODE class.

Parameters
  • drift_func (Callable) – drift function

  • diffusion_func (Callable) – diffusion function

  • noise_type (str, optional) – Defaults to ‘diagonal’.

  • sde_type (str, optional) – Defaults to ‘ito’.

  • order (int, optional) – Defaults to 1.

  • sensitivity (str, optional) – Defaults to ‘autograd’.

  • s_span (Tensor, optional) – Defaults to torch.linspace(0, 1, 2).

  • solver (str, optional) – Defaults to ‘srk’.

  • atol (float, optional) – Defaults to 1e-4.

  • rtol (float, optional) – Defaults to 1e-4.

  • ds (float, optional) – Defaults to 1e-3.

  • intloss (Callable, optional) – Defaults to None.

Raises

NotImplementedError – higher-order Neural SDEs are not yet implemented; raised when order is set to a value greater than 1.

Notes

The current implementation is rougher around the edges compared to NeuralODE, and is not guaranteed to have the same features.

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class torchdyn.core.ODEProblem(vector_field, solver, interpolator=None, order=1, atol=0.0001, rtol=0.0001, sensitivity='autograd', solver_adjoint=None, atol_adjoint=1e-06, rtol_adjoint=1e-06, seminorm=False, integral_loss=None, optimizable_params=())[source]

An ODE Problem coupling a given vector field with a solver and a sensitivity algorithm to compute gradients w.r.t. different quantities.

Parameters
  • vector_field (Callable) – the vector field, called as vector_field(t, x); a Callable accepting only x is automatically wrapped for consistency

  • solver (Union[str, nn.Module]) – ODE solver, given as a string name or as an nn.Module.

  • order (int, optional) – Order of the ODE. Defaults to 1.

  • atol (float, optional) – Absolute tolerance of the solver. Defaults to 1e-4.

  • rtol (float, optional) – Relative tolerance of the solver. Defaults to 1e-4.

  • sensitivity (str, optional) – Sensitivity method [‘autograd’, ‘adjoint’, ‘interpolated_adjoint’]. Defaults to ‘autograd’.

  • solver_adjoint (Union[str, nn.Module, None], optional) – ODE solver for the adjoint. Defaults to None.

  • atol_adjoint (float, optional) – Defaults to 1e-6.

  • rtol_adjoint (float, optional) – Defaults to 1e-6.

  • seminorm (bool, optional) – Indicates whether a seminorm should be used for error estimation during adjoint backsolves. Defaults to False.

  • integral_loss (Union[Callable, None]) – Integral loss to optimize for. Defaults to None.

  • optimizable_params (Union[Iterable, Generator]) – parameters to calculate sensitivities for. Defaults to ().

Notes

Integral losses can be passed as generic functions or as nn.Modules.

_autograd_func()[source]

Creates the autograd functions for the backward pass.

forward(x, t_span, save_at=(), args={})[source]

For safety, redirects to the intended method odeint.

odeint(x, t_span, save_at=(), args={})[source]

Returns the tuple (t_eval, solution).

class torchdyn.core.SDEProblem[source]

Extension of ODEProblem to SDEs.
