dashstander / block-recurrent-transformer
PyTorch implementation of "Block-Recurrent Transformers" (Hutchins et al., 2022)
☆84 · Updated 3 years ago
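At a glance, the block-recurrent idea is a transformer layer that carries a small set of recurrent state vectors across fixed-size blocks of the input: tokens in a block attend to the current state, and the state attends back to the block to produce the next state. The PyTorch snippet below is only a minimal illustrative sketch of that flow; the names (`BlockRecurrentCell`, `state_len`, and so on) are hypothetical and are not this repository's actual API.

```python
# Illustrative sketch of a block-recurrent layer. Hypothetical names; not this repo's API.
import torch
import torch.nn as nn


class BlockRecurrentCell(nn.Module):
    """Toy block-recurrent layer: tokens attend to a recurrent state, and the
    state attends back to the block's tokens to produce the next state."""

    def __init__(self, dim, num_heads=8, state_len=32):
        super().__init__()
        # Learned initial recurrent state, shared across sequences.
        self.init_state = nn.Parameter(torch.randn(state_len, dim) * 0.02)
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.state_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_x = nn.LayerNorm(dim)
        self.norm_s = nn.LayerNorm(dim)

    def forward(self, x, state=None):
        # x: (batch, block_len, dim); state: (batch, state_len, dim) or None
        if state is None:
            state = self.init_state.unsqueeze(0).expand(x.size(0), -1, -1)
        h = self.norm_x(x)
        # "Vertical" path: tokens attend to themselves and to the recurrent state.
        h = h + self.self_attn(h, h, h, need_weights=False)[0]
        h = h + self.cross_attn(h, state, state, need_weights=False)[0]
        # "Horizontal" path: the state attends to the block to form the next state.
        s = self.norm_s(state)
        next_state = state + self.state_attn(s, h, h, need_weights=False)[0]
        return h, next_state


# Process a long sequence block by block, threading the state through.
cell = BlockRecurrentCell(dim=64)
tokens = torch.randn(2, 4 * 128, 64)      # (batch, seq_len, dim), already embedded
state = None
for block in tokens.split(128, dim=1):    # fixed-size blocks of 128 tokens
    out, state = cell(block, state)
```

Causal masking, gating of the state update, and positional encodings are omitted here for brevity; the repositories listed below handle those details differently.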
Alternatives and similar repositories for block-recurrent-transformer
Users interested in block-recurrent-transformer are comparing it to the libraries listed below.
- Learning to Encode Position for Transformer with Continuous Dynamical Model · ☆60 · Updated 4 years ago
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in Pytorch · ☆100 · Updated 2 years ago
- [EMNLP'19] Summary for Transformer Understanding · ☆53 · Updated 5 years ago
- A quick walk-through of the innards of LSTMs and a naive implementation of the Mogrifier LSTM paper in PyTorch · ☆77 · Updated 4 years ago
- Implementation of Memformer, a Memory-augmented Transformer, in Pytorch · ☆117 · Updated 4 years ago
- ☆51 · Updated 2 years ago
- Implementation of ETSformer, state of the art time-series Transformer, in Pytorch · ☆153 · Updated last year
- Implementations of some RNNs · ☆50 · Updated 2 years ago
- Implementation of Memory-Compressed Attention, from the paper "Generating Wikipedia By Summarizing Long Sequences" · ☆71 · Updated 2 years ago
- Implementation of TableFormer, Robust Transformer Modeling for Table-Text Encoding, in Pytorch · ☆38 · Updated 3 years ago
- Implementation of H-Transformer-1D, Hierarchical Attention for Sequence Learning · ☆161 · Updated last year
- [NeurIPS 2023 spotlight] Official implementation of HGRN in our NeurIPS 2023 paper - Hierarchically Gated Recurrent Neural Network for Se… · ☆64 · Updated last year
- This is a code repository for the ACL 2022 paper "ODE Transformer: An Ordinary Differential Equation-Inspired Model for Sequence Generati… · ☆31 · Updated 2 years ago
- Implement the paper "Self-Attention with Relative Position Representations" · ☆128 · Updated 4 years ago
- code for Explicit Sparse Transformer · ☆62 · Updated last year
- Sequence Modeling with Structured State Spaces · ☆64 · Updated 2 years ago
- Implementation of Block Recurrent Transformer - Pytorch · ☆217 · Updated 8 months ago
- Official code for the NAACL 2022 paper "Fuse It More Deeply! A Variational Transformer with Layer-Wise Latent Variable Inference for Text… · ☆35 · Updated 2 years ago
- [NeurIPS 2022] Your Transformer May Not be as Powerful as You Expect (official implementation) · ☆34 · Updated last year
- Code for "Finetuning Pretrained Transformers into Variational Autoencoders" · ☆39 · Updated 2 years ago
- Language modeling via stochastic processes. Oral @ ICLR 2022. · ☆138 · Updated 2 years ago
- ☆83 · Updated 5 years ago
- A PyTorch implementation of the paper "Synthesizer: Rethinking Self-Attention in Transformer Models" · ☆73 · Updated 2 years ago
- FairSeq repo with Apollo optimizer · ☆114 · Updated last year
- Efficient Transformers with Dynamic Token Pooling · ☆61 · Updated last year
- Axial Positional Embedding for Pytorch · ☆79 · Updated 2 months ago
- ☆64 · Updated 8 months ago
- Conditionally Adaptive Multi-Task Learning: Improving Transfer Learning in NLP Using Fewer Parameters & Less Data · ☆56 · Updated 3 years ago
- A single-model, multi-scale Transformer-based VAE · ☆55 · Updated 3 years ago
- Implementations of various linear RNN layers using pytorch and triton · ☆51 · Updated last year