
Contributing to torchdyn and DiffEqML

torchdyn is designed to be a community effort: we welcome all contributions of tutorials, model variants, numerical methods, and applications related to continuous and implicit deep learning. We do not impose specific style requirements, though we subscribe to many of Jeremy Howard’s ideas.

We use poetry to manage requirements, virtual Python environment creation, and packaging. To install poetry, refer to its docs. To set up your dev environment, run poetry install. For example, poetry run pytest will then run all torchdyn tests inside your newly created environment.

poetry does not currently offer a way to select torch wheels based on the desired CUDA version and OS, and will install a version without GPU support. For CUDA-enabled torch wheels, run poetry run poe force_cuda11, or add your desired version to pyproject.toml.

If you wish to run Jupyter notebooks within your newly created poetry environment, use poetry run ipython kernel install --user --name=torchdyn and switch the notebook kernel.
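Putting the steps above together, a typical dev-environment setup looks like the following shell session (a sketch assuming poetry is already installed; the force_cuda11 task is the poe task referenced above, defined in pyproject.toml):

```shell
# Install all torchdyn dependencies into a fresh poetry-managed virtual environment
poetry install

# Optional: replace the default torch wheels with CUDA 11 wheels
poetry run poe force_cuda11

# Run the full torchdyn test suite inside the new environment
poetry run pytest

# Optional: expose the environment as a Jupyter kernel named "torchdyn"
poetry run ipython kernel install --user --name=torchdyn
```

After the last step, select the "torchdyn" kernel from the notebook's kernel menu.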

Choosing what to work on: there is always ongoing work on new features, tests, and tutorials, and contributing to any of these is extremely valuable to us. If you wish to work on additional features not currently in progress, feel free to reach out on Slack or via email; we’ll be happy to discuss the details.

On the scope of torchdyn and missing features

The scope of the library is currently quite large as it spans deep learning, numerical methods and differential equations. While we have attempted to design a general API for state-of-the-art approaches in the field, not everything has made its way into the library so far, and it is thus possible that certain methods or classes might have to be tuned for specific applications.

We have used torchdyn extensively for our own research and publications, including:

* Dissecting Neural ODEs [NeurIPS20, oral]
* Hypersolvers [NeurIPS20]
* Graph Neural ODEs [AAAI workshop]
* Differentiable Multiple Shooting Methods
* Neural Hybrid Automata
* Optimal Energy Shaping
* Learning Stochastic Optimal Policies via Gradient Descent [L-CSS]
