dashstander / block-recurrent-transformer
PyTorch implementation of "Block-Recurrent Transformers" (Hutchins & Schlag et al., 2022)
☆85 · Updated 3 years ago
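For orientation, the sketch below illustrates the core idea behind the repo's subject paper: process a long sequence block by block, letting each block cross-attend to a small set of recurrent state vectors that are gated and carried forward. This is a minimal sketch of the mechanism, not this repository's API; the module names, gate design, and dimensions are all illustrative assumptions.

```python
# Minimal sketch of the block-recurrent idea from Hutchins et al. (2022).
# Everything here (names, sizes, gating) is an illustrative assumption,
# not the API of dashstander/block-recurrent-transformer.
import torch
import torch.nn as nn


class BlockRecurrentCell(nn.Module):
    def __init__(self, dim: int = 64, num_states: int = 8, heads: int = 4):
        super().__init__()
        # Tokens attend within the block and to the recurrent state.
        self.token_self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.token_cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # The state attends to the block's tokens to update itself.
        self.state_cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Sigmoid gate interpolating old state and its proposed update.
        self.state_gate = nn.Linear(dim, dim)
        self.init_state = nn.Parameter(torch.randn(num_states, dim))

    def forward(self, block: torch.Tensor, state: torch.Tensor):
        # block: (batch, block_len, dim); state: (batch, num_states, dim)
        tokens, _ = self.token_self_attn(block, block, block)
        from_state, _ = self.token_cross_attn(block, state, state)
        out = block + tokens + from_state          # residual combination
        state_update, _ = self.state_cross_attn(state, block, block)
        gate = torch.sigmoid(self.state_gate(state))
        new_state = gate * state + (1.0 - gate) * state_update
        return out, new_state


# Usage: slide over a long sequence one block at a time, carrying the state.
cell = BlockRecurrentCell(dim=64, num_states=8, heads=4)
x = torch.randn(2, 512, 64)                        # (batch, seq_len, dim)
state = cell.init_state.unsqueeze(0).expand(2, -1, -1)
outputs = []
for blk in x.split(128, dim=1):                    # block length 128
    out, state = cell(blk, state)
    outputs.append(out)
y = torch.cat(outputs, dim=1)                      # same shape as x
```

Because attention is confined to fixed-size blocks and a small state, cost grows linearly with sequence length rather than quadratically, which is what makes this family of models attractive for long-context modeling.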
Alternatives and similar repositories for block-recurrent-transformer
Users interested in block-recurrent-transformer are comparing it to the libraries listed below.
- A quick walk-through of the innards of LSTMs and a naive implementation of the Mogrifier LSTM paper in PyTorch ☆78 · Updated 5 years ago
- Learning to Encode Position for Transformer with Continuous Dynamical Model ☆60 · Updated 5 years ago
- [EMNLP'19] Summary for Transformer Understanding ☆53 · Updated 5 years ago
- A PyTorch & Keras implementation and demo of Fastformer. ☆189 · Updated 3 years ago
- Implementation of Memformer, a Memory-augmented Transformer, in PyTorch ☆123 · Updated 4 years ago
- Implementation of ETSformer, state-of-the-art time-series Transformer, in PyTorch ☆156 · Updated 2 years ago
- Code repository for the ACL 2022 paper "ODE Transformer: An Ordinary Differential Equation-Inspired Model for Sequence Generati… ☆35 · Updated 3 years ago
- Implementations of various RNNs ☆51 · Updated 2 years ago
- Implementation of H-Transformer-1D, Hierarchical Attention for Sequence Learning ☆165 · Updated last year
- Implementation of Memory-Compressed Attention, from the paper "Generating Wikipedia by Summarizing Long Sequences" ☆69 · Updated 2 years ago
- ☆84 · Updated 5 years ago
- ☆67 · Updated last year
- Transformer-based Conditional Variational Autoencoder for Controllable Story Generation ☆158 · Updated 3 years ago
- Repository for the Multimodal AutoML Benchmark ☆65 · Updated 3 years ago
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in PyTorch ☆101 · Updated 2 years ago
- Code for Explicit Sparse Transformer ☆61 · Updated 2 years ago
- ☆51 · Updated 2 years ago
- Conditionally Adaptive Multi-Task Learning: Improving Transfer Learning in NLP Using Fewer Parameters & Less Data ☆57 · Updated 4 years ago
- Implementation of the paper "Self-Attention with Relative Position Representations" ☆139 · Updated 4 years ago
- Code for the PAPA paper ☆27 · Updated 2 years ago
- An implementation of Transformer with Expire-Span, a circuit for learning which memories to retain ☆34 · Updated 4 years ago
- ☆33 · Updated 4 years ago
- A Transformer-based single-model, multi-scale VAE ☆57 · Updated 4 years ago
- Efficient Transformers with Dynamic Token Pooling ☆64 · Updated 2 years ago
- ☆72 · Updated 4 years ago
- Sequence Modeling with Structured State Spaces ☆66 · Updated 3 years ago
- A *tuned* minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training ☆117 · Updated 4 years ago
- Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method (NeurIPS 2021) ☆63 · Updated 3 years ago
- [ICLR 2024] EMO: Earth Mover Distance Optimization for Auto-Regressive Language Modeling (https://arxiv.org/abs/2310.04691) ☆125 · Updated last year
- Implementing SYNTHESIZER: Rethinking Self-Attention in Transformer Models using PyTorch ☆70 · Updated 5 years ago