ermongroup / fast_feedforward_computation
Official code for "Accelerating Feedforward Computation via Parallel Nonlinear Equation Solving", ICML 2021
☆27 · Updated 3 years ago
Alternatives and similar repositories for fast_feedforward_computation
Users interested in fast_feedforward_computation are comparing it to the repositories listed below.
- [NeurIPS'20] Code for the paper "Compositional Visual Generation and Inference with Energy Based Models" ☆46 · Updated 2 years ago
- ☆12 · Updated 2 years ago
- Code for the ICLR 2021 paper "Anytime Sampling for Autoregressive Models via Ordered Autoencoding" ☆26 · Updated 2 years ago
- Open-source code for the paper "On the Learning and Learnability of Quasimetrics" ☆31 · Updated 2 years ago
- Blog post ☆17 · Updated last year
- [NeurIPS 2021] Code for "Unsupervised Learning of Compositional Energy Concepts" ☆62 · Updated 2 years ago
- The official repository for our paper "The Dual Form of Neural Networks Revisited: Connecting Test Time Predictions to Training Patterns …" ☆16 · Updated 3 months ago
- [ICML'21] Improved Contrastive Divergence Training of Energy Based Models ☆65 · Updated 3 years ago
- An adaptive training algorithm for residual networks ☆17 · Updated 5 years ago
- ☆62 · Updated 2 years ago
- ☆33 · Updated 2 years ago
- ☆30 · Updated 4 years ago
- ☆24 · Updated 4 years ago
- Curse-of-memory phenomenon of RNNs in sequence modelling ☆19 · Updated 4 months ago
- ☆22 · Updated 3 years ago
- ☆12 · Updated 6 months ago
- ☆16 · Updated 2 years ago
- ☆20 · Updated 5 years ago
- A set of tests for evaluating large-scale algorithms for Wasserstein-1 transport computation (NeurIPS'22) ☆21 · Updated last year
- Code for "Implicit Normalizing Flows" (ICLR 2021 spotlight) ☆36 · Updated 4 years ago
- Code associated with our paper "Learning Group Structure and Disentangled Representations of Dynamical Environments" ☆15 · Updated 2 years ago
- The official PyTorch implementation of "VAEBM: A Symbiosis between Variational Autoencoders and Energy-based Models" (ICLR 2021 spotlight…) ☆57 · Updated 2 years ago
- Deep Networks Grok All the Time and Here is Why ☆37 · Updated last year
- Leveraging Recursive Gumbel-Max Trick for Approximate Inference in Combinatorial Spaces (NeurIPS 2021) ☆13 · Updated 3 years ago
- Code for the paper "Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns" ☆17 · Updated last year
- Meta Optimal Transport ☆103 · Updated 2 years ago
- Why Do We Need Weight Decay in Modern Deep Learning? [NeurIPS 2024] ☆67 · Updated 11 months ago
- An implementation of the (Induced) Set Attention Block from the Set Transformer paper ☆61 · Updated 2 years ago
- ☆38 · Updated last year
- ☆50 · Updated 4 years ago