# TorchDyn Quickstart

**TorchDyn is the toolkit for continuous models in PyTorch. Play with state-of-the-art architectures or use its powerful libraries to create your own.**

Central to the `torchdyn` approach are continuous neural networks, where *width*, *depth* (or both) are taken to their infinite limit. On the optimization front, we consider continuous “data-stream” regimes and gradient flow methods, where the dataset represents a time-evolving signal processed by the neural network to adapt its parameters.

By providing a centralized, easy-to-access collection of model templates, tutorial and application notebooks, we hope to speed up research in this area and ultimately contribute to turning neural differential equations into an effective tool for control, system identification and common machine learning tasks.

```
[1]:
```

```
import sys ; sys.path.append('../')
from torchdyn.models import *
from torchdyn.datasets import *
from torchdyn import *
```

## Generate data from a static toy dataset

We’ll be generating data from toy datasets. In torchdyn, we provide a wide range of datasets often used to benchmark and understand Neural ODEs. Here we will use the classic moons dataset and train a Neural ODE for binary classification.

```
[2]:
```

```
d = ToyDataset()
X, yn = d.generate(n_samples=512, noise=1e-1, dataset_type='moons')
```

```
[3]:
```

```
import matplotlib.pyplot as plt
colors = ['orange', 'blue']
fig = plt.figure(figsize=(3,3))
ax = fig.add_subplot(111)
for i in range(len(X)):
    ax.scatter(X[i,0], X[i,1], s=1, color=colors[yn[i].int()])
```

Generated data can be easily loaded into a dataloader with standard `PyTorch` calls:

```
[4]:
```

```
import torch
import torch.utils.data as data
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
X_train = torch.Tensor(X).to(device)
y_train = yn.long().to(device)
train = data.TensorDataset(X_train, y_train)
trainloader = data.DataLoader(train, batch_size=len(X), shuffle=True)
```
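
Note that `batch_size=len(X)`: each epoch then consists of a single full-batch gradient step.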

We use PyTorch Lightning to handle training loops, logging and general bookkeeping. This gives `torchdyn` and Neural Differential Equations access to modern best practices for training and experiment reproducibility.

In particular, we combine modular `torchdyn` models with `LightningModules` via a `Learner` class:

```
[5]:
```

```
import torch.nn as nn
import pytorch_lightning as pl

class Learner(pl.LightningModule):
    def __init__(self, model: nn.Module):
        super().__init__()
        self.model = model

    def forward(self, x):
        return self.model(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        y_hat = self.model(x)
        loss = nn.CrossEntropyLoss()(y_hat, y)
        logs = {'train_loss': loss}
        return {'loss': loss, 'log': logs}

    def configure_optimizers(self):
        return torch.optim.Adam(self.model.parameters(), lr=0.01)

    def train_dataloader(self):
        return trainloader
```

## Define a Neural ODE

Analogously to most forward neural models, we want to realize a map

\[x \mapsto \hat y\]

where \(\hat y\) becomes the best approximation of a true output \(y\) given an input \(x\). In torchdyn you can define very simple Neural ODE models of the form

\[\hat y = h(s_1), \qquad \dot h(s) = f(h(s), \theta), \qquad h(s_0) = x\]

by just specifying a neural network \(f\) and giving some simple settings.

**Note:** This Neural ODE model is of the *depth-invariant* type, as neither \(f\) explicitly depends on \(s\) nor are the parameters \(\theta\) depth-varying. This model, together with its *depth-variant* counterpart (with \(s\) concatenated into the vector field input), was first proposed and implemented by [Chen T. Q. et al, 2018].

### Define the vector field (DEFunc)

The first step is to define any PyTorch `torch.nn.Module`. This takes the role of the Neural ODE vector field \(f(h,\theta)\):

```
[6]:
```

```
f = nn.Sequential(
    nn.Linear(2, 16),
    nn.Tanh(),
    nn.Linear(16, 2)
)
```

In this case we chose \(f\) to be a simple MLP with one hidden layer and \(\tanh\) activation.
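
For contrast with the *depth-variant* note above, here is a minimal sketch of a depth-variant vector field. It assumes the `DepthCat` layer is exposed by the `torchdyn.models` wildcard import in your installed version; `DepthCat` concatenates the current depth \(s\) to the state before the first layer:

```
# Sketch only: a depth-variant vector field. DepthCat(1) appends the
# current depth s along feature dimension 1, so the first Linear layer
# takes 2 (state) + 1 (depth) = 3 inputs. Availability of DepthCat via
# torchdyn.models is an assumption about the installed version.
f_depth_variant = nn.Sequential(
    DepthCat(1),
    nn.Linear(2 + 1, 16),
    nn.Tanh(),
    nn.Linear(16, 2)
)
```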

### Define the NeuralDE

The final step to define a Neural ODE is to instantiate torchdyn’s `NeuralDE` class, passing `f` itself along with some customization arguments.

In this case we specify that:

* backward gradients are computed with the `'adjoint'` method;
* the `'dopri5'` (Dormand-Prince) ODE solver from `torchdiffeq` is used.

```
[7]:
```

```
model = NeuralDE(f, sensitivity='adjoint', solver='dopri5').to(device)
```
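
Other combinations are possible. As a sketch, one could swap the adjoint for plain backpropagation through the solver steps and use a fixed-step solver; the option names below assume a torchdyn 0.2-style `NeuralDE` API:

```
# Sketch: backpropagate through the solver ('autograd' sensitivity)
# and use a fixed-step Runge-Kutta solver instead of adaptive dopri5.
model_autograd = NeuralDE(f, sensitivity='autograd', solver='rk4').to(device)
```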

## Train the Model

```
[8]:
```

```
learn = Learner(model)
trainer = pl.Trainer(min_epochs=200, max_epochs=300)
trainer.fit(learn)
```

```
GPU available: True, used: False
TPU available: False, using: 0 TPU cores
| Name | Type | Params
-----------------------------------
0 | model | NeuralDE | 82
```


```
[8]:
```

```
1
```

With the `trajectory` method of `NeuralDE` objects you can quickly evaluate the entire trajectory of each data point in `X_train` over an interval `s_span`:
```
[9]:
```

```
s_span = torch.linspace(0,1,100)
trajectory = model.trajectory(X_train, s_span).detach().cpu()
```
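
The returned `trajectory` has shape `(len(s_span), len(X_train), 2)`: a snapshot of every data point’s state at each of the 100 depths in `s_span`, which is exactly what the plotting code below indexes into.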

### Plot the Training Results

We can first plot the trajectories of the data points in the depth domain \(s\):

```
[10]:
```

```
color = ['orange', 'blue']
fig = plt.figure(figsize=(10,2))
ax0 = fig.add_subplot(121)
ax1 = fig.add_subplot(122)
for i in range(len(X)):
    ax0.plot(s_span, trajectory[:,i,0], color=color[int(yn[i])], alpha=.1)
    ax1.plot(s_span, trajectory[:,i,1], color=color[int(yn[i])], alpha=.1)
ax0.set_xlabel(r"$s$ [Depth]") ; ax0.set_ylabel(r"$h_0(s)$")
ax1.set_xlabel(r"$s$ [Depth]") ; ax1.set_ylabel(r"$h_1(s)$")
ax0.set_title("Dimension 0") ; ax1.set_title("Dimension 1")
```

```
[10]:
```

```
Text(0.5, 1.0, 'Dimension 1')
```

Then, the trajectories in the *state space*.
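
The plotting cell for this figure is not included above; a minimal sketch, reusing `trajectory`, `yn` and `color` from the cells above:

```
# Sketch: plot each data point's flow through the 2D state space.
fig = plt.figure(figsize=(3, 3))
ax = fig.add_subplot(111)
for i in range(len(X)):
    ax.plot(trajectory[:, i, 0], trajectory[:, i, 1], color=color[int(yn[i])], alpha=.1)
ax.set_xlabel(r"$h_0$")
ax.set_ylabel(r"$h_1$")
ax.set_title("Flows in the state space")
```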

As you can see, the Neural ODE steers the data points into regions of null loss with a continuous flow in the depth domain. Finally, we can also plot the learned vector field \(f\):

```
[11]:
```

```
# evaluate the learned vector field on a grid spanning the state space
n_pts = 50
x = torch.linspace(trajectory[:,:,0].min(), trajectory[:,:,0].max(), n_pts)
y = torch.linspace(trajectory[:,:,1].min(), trajectory[:,:,1].max(), n_pts)
X, Y = torch.meshgrid(x, y)
z = torch.cat([X.reshape(-1,1), Y.reshape(-1,1)], 1)
f = model.defunc(0, z.to(device)).cpu().detach()
fx, fy = f[:,0].reshape(n_pts, n_pts), f[:,1].reshape(n_pts, n_pts)

# plot the vector field and its intensity
fig = plt.figure(figsize=(4, 4))
ax = fig.add_subplot(111)
ax.streamplot(X.numpy().T, Y.numpy().T, fx.numpy().T, fy.numpy().T, color='black')
ax.contourf(X.T, Y.T, torch.sqrt(fx.T**2 + fy.T**2), cmap='RdYlBu')
```

```
[11]:
```

```
<matplotlib.contour.QuadContourSet at 0x1f3458fbe08>
```

**Sweet! You trained your first Neural ODE! Now go on and learn more advanced models in the next tutorials.**