sibyllema / Conservation_laws
Code for the paper: "Abide by the Law and Follow the Flow: Conservation Laws for Gradient Flows".
☆12 · Updated last year
Alternatives and similar repositories for Conservation_laws:
Users interested in Conservation_laws are comparing it to the repositories listed below.
- Euclidean Wasserstein-2 optimal transportation · ☆47 · Updated last year
- Implementation of the Gromov-Wasserstein distance to the setting of Unbalanced Optimal Transport · ☆44 · Updated 2 years ago
- Implementation of Action Matching for the Schrödinger equation · ☆24 · Updated last year
- ☆17 · Updated last year
- Implementation of Action Matching · ☆41 · Updated last year
- A set of tests for evaluating large-scale algorithms for Wasserstein-2 transport maps computation (NeurIPS 2021) · ☆40 · Updated 2 years ago
- Kernel Stein Discrepancy Descent: a method to sample from unnormalized densities · ☆22 · Updated last year
- ☆17 · Updated 3 years ago
- ☆23 · Updated 2 years ago
- ☆28 · Updated 3 weeks ago
- [ICML 2024] Official implementation for "Beyond ELBOs: A Large-Scale Evaluation of Variational Methods for Sampling" · ☆30 · Updated 4 months ago
- This repository contains code for applying Riemannian geometry in machine learning · ☆77 · Updated 3 years ago
- Proximal Optimal Transport Modeling of Population Dynamics (AISTATS 2022) · ☆18 · Updated last year
- Code for the paper: "Independent mechanism analysis, a new concept?" · ☆24 · Updated last year
- Supporting code for the paper "Bayesian Model Selection, the Marginal Likelihood, and Generalization" · ☆35 · Updated 2 years ago
- PyTorch code for "Improving Self-Supervised Learning by Characterizing Idealized Representations" · ☆41 · Updated 2 years ago
- Deep Generalized Schrödinger Bridge, NeurIPS 2022 Oral · ☆47 · Updated 2 years ago
- ☆19 · Updated 2 months ago
- Official implementation of Deep Momentum Schrödinger Bridge · ☆25 · Updated last year
- ☆12 · Updated 9 months ago
- PyTorch implementation for our ICLR 2024 paper "Diffusion Generative Flow Samplers: Improving learning signals through partial trajectory…" · ☆24 · Updated last year
- Code for "Invariance Learning in Deep Neural Networks with Differentiable Laplace Approximations" · ☆23 · Updated 2 years ago
- Deep Networks Grok All the Time and Here is Why · ☆34 · Updated 11 months ago
- Posterior Refinement Improves Sample Efficiency in Bayesian Neural Networks · ☆10 · Updated 2 years ago
- ☆17 · Updated last year
- ☆32 · Updated 10 months ago
- ☆38 · Updated 4 years ago
- Laplace Redux -- Effortless Bayesian Deep Learning · ☆43 · Updated 2 years ago
- ☆28 · Updated 8 months ago
- ☆12 · Updated 2 years ago