kzl / universal-computation
Official codebase for Pretrained Transformers as Universal Computation Engines.
☆247 · Updated 3 years ago
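For context, the linked paper ("Pretrained Transformers as Universal Computation Engines") freezes a pretrained language model and fine-tunes only a small set of parameters on non-language tasks. Below is a minimal sketch of that setup, assuming PyTorch and HuggingFace's `transformers` package; the class and its layer choices are an illustration of the idea, not the official code:

```python
import torch.nn as nn
from transformers import GPT2Model  # HuggingFace transformers

class FrozenPretrainedTransformer(nn.Module):
    """Hypothetical sketch: freeze the GPT-2 body, train only new
    input/output layers plus the layer norms and positional embeddings."""

    def __init__(self, input_dim: int, num_classes: int):
        super().__init__()
        self.gpt2 = GPT2Model.from_pretrained("gpt2")
        for name, param in self.gpt2.named_parameters():
            # GPT-2 layer norms are named ln_1 / ln_2 / ln_f; wpe is the
            # positional embedding. Everything else stays frozen.
            param.requires_grad = ("ln" in name) or ("wpe" in name)
        d = self.gpt2.config.n_embd
        self.embed_in = nn.Linear(input_dim, d)  # trained from scratch
        self.head = nn.Linear(d, num_classes)    # trained from scratch

    def forward(self, x):  # x: (batch, seq_len, input_dim)
        h = self.gpt2(inputs_embeds=self.embed_in(x)).last_hidden_state
        return self.head(h[:, -1])  # classify from the final position
```

Only a small fraction of the parameters receive gradients in this setup; the frozen self-attention and feedforward blocks supply the computation, which is the paper's central claim.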
Alternatives and similar repositories for universal-computation
Users interested in universal-computation are comparing it to the libraries listed below.
- Trains Transformer model variants. Data isn't shuffled between batches. ☆144 · Updated 2 years ago
- Pytorch implementation of Compressive Transformers, from Deepmind ☆158 · Updated 3 years ago
- An attempt at the implementation of Glom, Geoffrey Hinton's new idea that integrates concepts from neural fields, top-down-bottom-up proc… ☆194 · Updated 4 years ago
- Implementation of Feedback Transformer in Pytorch ☆107 · Updated 4 years ago
- Fully featured implementation of Routing Transformer ☆294 · Updated 3 years ago
- Official code repository of the paper Linear Transformers Are Secretly Fast Weight Programmers. ☆105 · Updated 3 years ago
- Official PyTorch Implementation of Long-Short Transformer (NeurIPS 2021). ☆225 · Updated 3 years ago
- ☆376 · Updated last year
- Code for Multi-Head Attention: Collaborate Instead of Concatenate ☆151 · Updated last year
- The official repository for our paper "The Devil is in the Detail: Simple Tricks Improve Systematic Generalization of Transformers". We s… ☆67 · Updated 2 years ago
- VQVAEs, GumbelSoftmaxes and friends ☆566 · Updated 3 years ago
- Implementation of Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in Pytorch ☆119 · Updated 3 years ago
- Implicit MLE: Backpropagating Through Discrete Exponential Family Distributions ☆258 · Updated last year
- Implementation of a Transformer that Ponders, using the scheme from the PonderNet paper ☆81 · Updated 3 years ago
- Unofficial implementation of Perceiver IO ☆121 · Updated 2 years ago
- Is the attention layer even necessary? (https://arxiv.org/abs/2105.02723) ☆485 · Updated 4 years ago
- Implementation of Memformer, a Memory-augmented Transformer, in Pytorch ☆118 · Updated 4 years ago
- A library for evaluating representations. ☆76 · Updated 3 years ago
- Understanding the Difficulty of Training Transformers ☆329 · Updated 3 years ago
- Code for the paper, "Distribution Augmentation for Generative Modeling", ICML 2020. ☆124 · Updated 2 years ago
- ☆208 · Updated 2 years ago
- Sinkhorn Transformer - Practical implementation of Sparse Sinkhorn Attention ☆264 · Updated 3 years ago
- Implementation of Hierarchical Transformer Memory (HTM) for Pytorch ☆74 · Updated 3 years ago
- MERLOT: Multimodal Neural Script Knowledge Models ☆224 · Updated 3 years ago
- [ICML 2021 Oral] We show pure attention suffers rank collapse, and how different mechanisms combat it. ☆165 · Updated 4 years ago
- Understanding Training Dynamics of Deep ReLU Networks ☆293 · Updated 3 weeks ago
- Official PyTorch implementation of the paper "Self-Supervised Relational Reasoning for Representation Learning", NeurIPS 2020 Spotlight. ☆143 · Updated last year
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in Pytorch ☆100 · Updated 2 years ago
- GPT, but made only out of MLPs ☆89 · Updated 4 years ago
- [ICML 2020] code for "PowerNorm: Rethinking Batch Normalization in Transformers" https://arxiv.org/abs/2003.07845 ☆120 · Updated 3 years ago