seraphlabs-ca / MIM
Code for the "MIM: Mutual Information Machine" paper.
☆16 · Updated 2 years ago
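For context, MIM trains an encoder q(z|x) and a decoder p(x|z) to model a symmetric joint over x and z by minimizing a cross-entropy objective; a commonly used upper bound is -1/2 E[log p(x|z) + log p(z) + log q(z|x) + log q(x)] over the sample distribution. The snippet below is a minimal, hypothetical PyTorch sketch of that bound, not code from this repository: names such as `MIMSketch` and `log_q_x` are invented for illustration, samples are drawn from the encoding direction only, and an estimate of the marginal log q(x) is assumed to be supplied by the caller.

```python
# Hypothetical sketch of the symmetric MIM upper bound (not the repo's code):
#   L = -1/2 * E[ log p(x|z) + log p(z) + log q(z|x) + log q(x) ]
# estimated here with samples from the encoding direction only.
import torch
import torch.nn as nn
import torch.distributions as D

class MIMSketch(nn.Module):
    def __init__(self, x_dim=784, z_dim=16, h_dim=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.Tanh(),
                                 nn.Linear(h_dim, 2 * z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.Tanh(),
                                 nn.Linear(h_dim, 2 * x_dim))

    def loss(self, x, log_q_x=0.0):
        # Encoding distribution q(z|x), reparameterized sample z.
        mu_z, log_std_z = self.enc(x).chunk(2, dim=-1)
        q_z_given_x = D.Normal(mu_z, log_std_z.exp())
        z = q_z_given_x.rsample()
        # Decoding distribution p(x|z) and a standard-normal prior p(z).
        mu_x, log_std_x = self.dec(z).chunk(2, dim=-1)
        p_x_given_z = D.Normal(mu_x, log_std_x.exp())
        prior = D.Normal(torch.zeros_like(z), torch.ones_like(z))
        # Symmetric bound; log_q_x approximates log q(x), which under the
        # data anchor is often treated as a constant and omitted.
        log_p = p_x_given_z.log_prob(x).sum(-1) + prior.log_prob(z).sum(-1)
        log_q = q_z_given_x.log_prob(z).sum(-1) + log_q_x
        return -0.5 * (log_p + log_q).mean()
```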
Alternatives and similar repositories for MIM
Users interested in MIM are comparing it to the libraries listed below.
- ☆24 · Updated last year
- Implementation of the LOSSGRAD optimization algorithm · ☆15 · Updated 6 years ago
- Run PyTorch graphs inside a Theano graph (and a PyTorch wrapper for AIS for generative models) · ☆18 · Updated 7 years ago
- Open source implementation of SeaRNN (ICLR 2018, https://openreview.net/forum?id=HkUR_y-RZ) · ☆48 · Updated 6 years ago
- MTAdam: Automatic Balancing of Multiple Training Loss Terms · ☆36 · Updated 4 years ago
- Variational Walkback, NIPS'17 · ☆28 · Updated 7 years ago
- High-performance PyTorch modules · ☆18 · Updated 2 years ago
- ☆45 · Updated 5 years ago
- The Variational Homoencoder: Learning to learn high-capacity generative models from few examples · ☆34 · Updated last year
- Lifelong Variational Autoencoder · ☆14 · Updated 7 years ago
- A discrete sequential VAE · ☆40 · Updated 5 years ago
- ☆12 · Updated 5 years ago
- Unofficial PyTorch implementation of https://arxiv.org/abs/2112.05682 for linear memory cost in attention (a sketch of the chunking idea follows this list) · ☆12 · Updated 3 years ago
- AdaCat · ☆49 · Updated 2 years ago
- A Python library for highly configurable transformers, easing model architecture search and experimentation · ☆49 · Updated 3 years ago
- This repo contains code to reproduce some of the results presented in the paper "SentenceMIM: A Latent Variable Language Model" · ☆28 · Updated 3 years ago
- Variational autoencoder in Theano · ☆12 · Updated 7 years ago
- ☆24 · Updated last month
- ☆26 · Updated 6 years ago
- An implementation of DIP-VAE from the paper "Variational Inference of Disentangled Latent Concepts from Unlabelled Observations" by Kumar… · ☆26 · Updated 7 years ago
- ☆28 · Updated 3 years ago
- Estimating Gradients for Discrete Random Variables by Sampling without Replacement · ☆40 · Updated 5 years ago
- ☆34 · Updated 6 years ago
- NeurIPS 2019 paper implementation · ☆12 · Updated 2 years ago
- Implementation of REBAR in PyTorch · ☆17 · Updated 6 years ago
- ☆13 · Updated 5 years ago
- Humans understand novel sentences by composing meanings and roles of core language components. In contrast, neural network models for nat… · ☆27 · Updated 5 years ago
- Fine-Tuning Pre-trained Transformers into Decaying Fast Weights · ☆19 · Updated 2 years ago
- ☆14 · Updated 6 years ago
- ☆21 · Updated 2 years ago
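The linear-memory attention entry above refers to the chunking trick from arXiv:2112.05682: process keys and values in fixed-size chunks while carrying a running maximum and softmax normalizer, so the full n×n score matrix is never materialized. The sketch below is a hypothetical single-head PyTorch illustration of that idea, not code from the linked repository; `chunked_attention` and `chunk_size` are made-up names.

```python
# Minimal sketch of chunked attention with an online softmax (assumption:
# single head, no masking). Memory scales with chunk_size, not n^2.
import torch

def chunked_attention(q, k, v, chunk_size=1024):
    # q, k, v: (n, d)
    scale = q.shape[-1] ** -0.5
    out = torch.zeros_like(q)                              # running weighted value sum
    denom = q.new_zeros(q.shape[0], 1)                     # running softmax normalizer
    running_max = q.new_full((q.shape[0], 1), float("-inf"))
    for start in range(0, k.shape[0], chunk_size):
        k_c = k[start:start + chunk_size]
        v_c = v[start:start + chunk_size]
        scores = (q @ k_c.T) * scale                       # (n, chunk)
        chunk_max = scores.max(dim=-1, keepdim=True).values
        new_max = torch.maximum(running_max, chunk_max)
        correction = torch.exp(running_max - new_max)      # rescale old accumulators
        weights = torch.exp(scores - new_max)
        out = out * correction + weights @ v_c
        denom = denom * correction + weights.sum(dim=-1, keepdim=True)
        running_max = new_max
    return out / denom
```

As a quick check, `chunked_attention(q, k, v)` should match `torch.softmax(q @ k.T * q.shape[-1] ** -0.5, dim=-1) @ v` up to floating-point error, while only ever holding an (n, chunk_size) score block in memory.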