kzl / universal-computation
Official codebase for Pretrained Transformers as Universal Computation Engines.
☆245 · Updated 2 years ago
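A minimal sketch of the frozen-pretrained-transformer idea behind the paper: a language-pretrained GPT-2 is kept frozen and only small input/output layers are trained on a downstream, non-language task. The class below is illustrative and assumes a HuggingFace `transformers` GPT-2 backbone; it is not the repository's exact training setup.

```python
import torch
import torch.nn as nn
from transformers import GPT2Model

class FrozenPretrainedTransformer(nn.Module):
    """Frozen GPT-2 backbone with trainable input/output projections (illustrative)."""

    def __init__(self, input_dim: int, num_classes: int):
        super().__init__()
        self.backbone = GPT2Model.from_pretrained("gpt2")
        # Freeze the pretrained self-attention and feed-forward blocks.
        for param in self.backbone.parameters():
            param.requires_grad = False
        hidden = self.backbone.config.n_embd
        # Only these projections (plus, in the paper, layer norms and
        # positional embeddings) are updated on the downstream task.
        self.input_proj = nn.Linear(input_dim, hidden)
        self.output_head = nn.Linear(hidden, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, input_dim) tokens from a non-language modality,
        # e.g. flattened bit strings or image patches.
        h = self.backbone(inputs_embeds=self.input_proj(x)).last_hidden_state
        return self.output_head(h[:, -1])  # predict from the final position
```

In the paper, layer-norm and positional-embedding parameters are also fine-tuned; the sketch freezes the whole backbone for brevity.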
Related projects
Alternatives and complementary repositories for universal-computation
- PyTorch implementation of Compressive Transformers, from DeepMind ☆157 · Updated 3 years ago
- Fully featured implementation of Routing Transformer ☆284 · Updated 3 years ago
- Trains Transformer model variants. Data isn't shuffled between batches. ☆141 · Updated 2 years ago
- An attempt at the implementation of Glom, Geoffrey Hinton's new idea that integrates concepts from neural fields, top-down-bottom-up proc… ☆191 · Updated 3 years ago
- Code for Multi-Head Attention: Collaborate Instead of Concatenate ☆150 · Updated last year
- Implementation of Feedback Transformer in PyTorch ☆104 · Updated 3 years ago
- VQVAEs, GumbelSoftmaxes and friends ☆535 · Updated 3 years ago
- Official code repository of the paper Linear Transformers Are Secretly Fast Weight Programmers. ☆100 · Updated 3 years ago
- The official repository for our paper "The Devil is in the Detail: Simple Tricks Improve Systematic Generalization of Transformers". We s… ☆66 · Updated last year
- ☆365 · Updated last year
- Understanding Training Dynamics of Deep ReLU Networks ☆279 · Updated 3 weeks ago
- Implicit MLE: Backpropagating Through Discrete Exponential Family Distributions ☆258 · Updated last year
- MERLOT: Multimodal Neural Script Knowledge Models ☆223 · Updated 2 years ago
- [ICML 2021 Oral] We show pure attention suffers rank collapse, and how different mechanisms combat it. ☆162 · Updated 3 years ago
- Implementation of a Transformer that Ponders, using the scheme from the PonderNet paper ☆79 · Updated 3 years ago
- [CVPR 2021] VirTex: Learning Visual Representations from Textual Annotations ☆556 · Updated 10 months ago
- Implementation of Mega, the Single-head Attention with Multi-headed EMA architecture that currently holds SOTA on Long Range Arena ☆203 · Updated last year
- ☆97 · Updated 2 years ago
- Code for the paper "Distribution Augmentation for Generative Modeling", ICML 2020. ☆121 · Updated last year
- Official PyTorch Implementation of Long-Short Transformer (NeurIPS 2021). ☆222 · Updated 2 years ago
- Benchmark for Lifelong learning research ☆118 · Updated 3 years ago
- Sinkhorn Transformer - Practical implementation of Sparse Sinkhorn Attention ☆253 · Updated 3 years ago
- An implementation of masked language modeling for PyTorch, made as concise and simple as possible ☆177 · Updated last year
- Self-supervised learning through the eyes of a child ☆139 · Updated 3 years ago
- Implementation of Memformer, a Memory-augmented Transformer, in PyTorch ☆106 · Updated 4 years ago
- Understanding the Difficulty of Training Transformers ☆328 · Updated 2 years ago
- Repository for the paper "Very Deep VAEs Generalize Autoregressive Models and Can Outperform Them on Images" ☆436 · Updated last year
- A PyTorch implementation of Perceiver, Perceiver IO and Perceiver AR with PyTorch Lightning scripts for distributed training ☆437 · Updated 10 months ago
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in PyTorch ☆95 · Updated last year
- Implementation of Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in PyTorch ☆116 · Updated 3 years ago