PiotrNawrot / dynamic-pooling
Efficient Transformers with Dynamic Token Pooling
☆65 · Updated 2 years ago
Alternatives and similar repositories for dynamic-pooling
Users interested in dynamic-pooling are comparing it to the libraries listed below.
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers", NeurIPS 2023 ☆138 · Updated last year
- Evaluation pipeline for the BabyLM Challenge 2023 ☆77 · Updated 2 years ago
- The original Backpack Language Model implementation, a fork of FlashAttention ☆69 · Updated 2 years ago
- The official repository for our paper "The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization" ☆34 · Updated 6 months ago
- FairSeq repo with Apollo optimizer ☆114 · Updated last year
- A Toolkit for Distributional Control of Generative Models ☆74 · Updated 2 weeks ago
- DiffusER: Discrete Diffusion via Edit-based Reconstruction (Reid, Hellendoorn & Neubig, 2022) ☆54 · Updated 4 months ago
- ☆57 · Updated last year
- ☆31 · Updated 2 years ago
- ☆52 · Updated 2 years ago
- ☆130 · Updated 3 years ago
- Randomized Positional Encodings Boost Length Generalization of Transformers ☆83 · Updated last year
- ☆67 · Updated last year
- Transformers at any scale ☆42 · Updated last year
- Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models ☆143 · Updated 3 years ago
- ☆20 · Updated 3 years ago
- [NeurIPS 2023 spotlight] Official implementation of HGRN in our NeurIPS 2023 paper - Hierarchically Gated Recurrent Neural Network for Sequence Modeling ☆66 · Updated last year
- Official Repository of Pretraining Without Attention (BiGS), BiGS is the first model to achieve BERT-level transfer learning on the GLUE … ☆115 · Updated last year
- This is the implementation of the paper AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning (https://arxiv.org/abs/2205.1…) ☆136 · Updated 2 years ago
- Repo for ICML23 "Why do Nearest Neighbor Language Models Work?" ☆59 · Updated 2 years ago
- A Kernel-Based View of Language Model Fine-Tuning (https://arxiv.org/abs/2210.05643) ☆78 · Updated 2 years ago
- Official code and model checkpoints for our EMNLP 2022 paper "RankGen - Improving Text Generation with Large Ranking Models" (https://arx…) ☆138 · Updated 2 years ago
- ☆98 · Updated 2 years ago
- The accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories". Mor Geva, Roei Schuster, Jonathan Berant, and Omer Levy ☆99 · Updated 4 years ago
- ☆22 · Updated 3 years ago
- ☆45 · Updated 2 years ago
- The official code of the EMNLP 2022 paper "SCROLLS: Standardized CompaRison Over Long Language Sequences" ☆69 · Updated last year
- Recurrent Memory Transformer ☆154 · Updated 2 years ago
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in Pytorch ☆101 · Updated 2 years ago
- ☆72 · Updated 2 years ago