dashstander / block-recurrent-transformer
PyTorch implementation of "Block-Recurrent Transformers" (Hutchins & Schlag et al., 2022)
☆84 · Updated 3 years ago
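The paper's core idea is a transformer layer that processes a long sequence block by block while carrying a set of recurrent state vectors across blocks: within each block, tokens self-attend and cross-attend to the state, and the state is updated by attending back to the block's tokens. Below is a minimal, illustrative PyTorch sketch of that recurrence; all module and variable names are hypothetical, and it substitutes a plain residual state update for the paper's gated (LSTM-style) update, so it is not this repository's actual API.

```python
# Minimal sketch of the block-recurrent idea (hypothetical names, not the
# repo's API): block tokens attend to a persistent state, and the state
# attends back to the block to carry information forward.
import torch
import torch.nn as nn

class BlockRecurrentCell(nn.Module):
    """One recurrent step over a single block of tokens."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.token_norm = nn.LayerNorm(dim)
        self.state_norm = nn.LayerNorm(dim)
        self.token_self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.token_cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.state_cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, block_tokens, state):
        # block_tokens: (batch, block_len, dim); state: (batch, num_state, dim)
        x = self.token_norm(block_tokens)
        # "Vertical" path: self-attention within the block (causal mask
        # omitted for brevity), then cross-attention from tokens to state.
        x = block_tokens + self.token_self_attn(x, x, x, need_weights=False)[0]
        x = x + self.token_cross_attn(x, state, state, need_weights=False)[0]
        # "Horizontal" path: the recurrent state reads from the block's
        # tokens. The paper gates this update; a plain residual update is
        # used here to keep the sketch short.
        s = self.state_norm(state)
        new_state = state + self.state_cross_attn(s, x, x, need_weights=False)[0]
        return x, new_state

# Usage: walk a long sequence block by block, carrying the state across blocks.
dim, block_len, num_state = 64, 32, 16
cell = BlockRecurrentCell(dim)
state = torch.zeros(1, num_state, dim)
sequence = torch.randn(1, 4 * block_len, dim)
for block in sequence.split(block_len, dim=1):
    block_out, state = cell(block, state)
```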
Alternatives and similar repositories for block-recurrent-transformer
Users interested in block-recurrent-transformer are comparing it to the libraries listed below.
- Learning to Encode Position for Transformer with Continuous Dynamical Model ☆60 · Updated 4 years ago
- A quick walk-through of the innards of LSTMs and a naive implementation of the Mogrifier LSTM paper in PyTorch ☆78 · Updated 4 years ago
- A PyTorch & Keras implementation and demo of Fastformer. ☆189 · Updated 2 years ago
- [EMNLP'19] Summary for Transformer Understanding ☆53 · Updated 5 years ago
- Implementation of Memformer, a Memory-augmented Transformer, in PyTorch ☆119 · Updated 4 years ago
- ☆65 · Updated 10 months ago
- Official code for the NAACL 2022 paper "Fuse It More Deeply! A Variational Transformer with Layer-Wise Latent Variable Inference for Text… ☆35 · Updated 2 years ago
- ☆51 · Updated 2 years ago
- Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method (NeurIPS 2021) ☆62 · Updated 3 years ago
- ☆33 · Updated 4 years ago
- ☆83 · Updated 5 years ago
- Implementation of Memory-Compressed Attention, from the paper "Generating Wikipedia By Summarizing Long Sequences" ☆71 · Updated 2 years ago
- Code for Explicit Sparse Transformer ☆62 · Updated last year
- Implementation of H-Transformer-1D, Hierarchical Attention for Sequence Learning ☆162 · Updated last year
- Code repository for the ACL 2022 paper "ODE Transformer: An Ordinary Differential Equation-Inspired Model for Sequence Generati… ☆34 · Updated 2 years ago
- Transformer-based Conditional Variational Autoencoder for Controllable Story Generation ☆155 · Updated 3 years ago
- A Transformer-based, single-model, multi-scale VAE ☆57 · Updated 4 years ago
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in PyTorch ☆101 · Updated 2 years ago
- Some RNN implementations ☆50 · Updated 2 years ago
- Code for the PAPA paper ☆27 · Updated 2 years ago
- Conditionally Adaptive Multi-Task Learning: Improving Transfer Learning in NLP Using Fewer Parameters & Less Data ☆57 · Updated 3 years ago
- Language modeling via stochastic processes. Oral @ ICLR 2022. ☆138 · Updated 2 years ago
- ☆89 · Updated 2 years ago
- Efficient Transformers with Dynamic Token Pooling ☆62 · Updated 2 years ago
- Official code repository of the paper "Linear Transformers Are Secretly Fast Weight Programmers" ☆105 · Updated 4 years ago
- Repository for Multimodal AutoML Benchmark ☆66 · Updated 3 years ago
- [ICLR 2024] EMO: Earth Mover Distance Optimization for Auto-Regressive Language Modeling (https://arxiv.org/abs/2310.04691) ☆123 · Updated last year
- Standalone Product Key Memory module in PyTorch, for augmenting Transformer models ☆82 · Updated 11 months ago
- An implementation of Transformer with Expire-Span, a circuit for learning which memories to retain ☆34 · Updated 4 years ago
- Implementing SYNTHESIZER: Rethinking Self-Attention in Transformer Models using PyTorch ☆70 · Updated 5 years ago