Daulbaev / IRDM
☆11 · Updated 4 years ago
Alternatives and similar repositories for IRDM:
Users interested in IRDM are comparing it to the libraries listed below.
- Models and code for the ICLR 2020 workshop paper "Towards Understanding Normalization in Neural ODEs" ☆16 · Updated 4 years ago
- Supplementary code for the paper "Meta-Solver for Neural Ordinary Differential Equations" (https://arxiv.org/abs/2103.08561) ☆24 · Updated 3 years ago
- Source code for "Large-Scale Wasserstein Gradient Flows" (NeurIPS 2021) ☆32 · Updated 2 years ago
- [NeurIPS'19] Deep Equilibrium Models, JAX implementation ☆39 · Updated 4 years ago
- Orbital MCMC ☆10 · Updated 3 years ago
- Riemannian Convex Potential Maps ☆67 · Updated last year
- Gradient-free optimization method for multidimensional arrays and discretized multivariate functions based on the tensor train (TT) f… ☆32 · Updated last month
- Quadrature-based features for kernel approximation ☆16 · Updated 6 years ago
- Monotone operator equilibrium networks ☆51 · Updated 4 years ago
- ☆18 · Updated 3 years ago
- [AAAI 2020 Oral] Low-variance Black-box Gradient Estimates for the Plackett-Luce Distribution ☆37 · Updated 4 years ago
- Code for "Neural Conservation Laws: A Divergence-Free Perspective" ☆37 · Updated 2 years ago
- Neural likelihood-free methods in PyTorch ☆39 · Updated 5 years ago
- Code for "Exponential Family Estimation via Adversarial Dynamics Embedding" (NeurIPS 2019) ☆13 · Updated 5 years ago
- Convex potential flows ☆83 · Updated 3 years ago
- Repository for the paper on the Adaptive Checkpoint Adjoint (ACA) method for gradient estimation in neural ODEs ☆54 · Updated 3 years ago
- Official code for UnICORNN (ICML 2021) ☆27 · Updated 3 years ago
- Riemannian Optimization Using JAX ☆48 · Updated last year
- Code for "'Hey, that's not an ODE': Faster ODE Adjoints via Seminorms" (ICML 2021) ☆86 · Updated 2 years ago
- ☆20 · Updated 4 months ago
- ☆49 · Updated 4 years ago
- PyTorch implementation of the Continuously Indexed Flows paper, with many baseline normalising flows ☆31 · Updated 3 years ago
- Implementation of Action Matching for the Schrödinger equation ☆24 · Updated last year
- Implicit Deep Adaptive Design (iDAD): Policy-Based Experimental Design without Likelihoods ☆18 · Updated 3 years ago
- Computing gradients and Hessians of feed-forward networks with GPU acceleration ☆18 · Updated last year
- Open-source code for sparse continuous distributions and the corresponding Fenchel-Young losses ☆16 · Updated last year
- Source code for the PhD thesis "Backpropagation Beyond the Gradient" ☆20 · Updated 2 years ago
- ☆53 · Updated 7 months ago
- Refining continuous-in-depth neural networks ☆39 · Updated 3 years ago
- Code for the article "What if Neural Networks had SVDs?", presented as a spotlight paper at NeurIPS 2020 ☆73 · Updated 7 months ago