PiotrNawrot / dynamic-pooling
Efficient Transformers with Dynamic Token Pooling
☆66 · Updated 2 years ago
Alternatives and similar repositories for dynamic-pooling
Users interested in dynamic-pooling are comparing it to the repositories listed below.
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers", NeurIPS 2023 ☆138 · Updated last year
- FairSeq repo with Apollo optimizer ☆114 · Updated 2 years ago
- The original Backpack Language Model implementation, a fork of FlashAttention ☆69 · Updated 2 years ago
- Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models ☆143 · Updated 3 years ago
- Evaluation pipeline for the BabyLM Challenge 2023. ☆77 · Updated 2 years ago
- Randomized Positional Encodings Boost Length Generalization of Transformers ☆82 · Updated last year
- ☆130 · Updated 3 years ago
- ☆67 · Updated last year
- ☆20 · Updated 3 years ago
- This is the implementation of the paper AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning (https://arxiv.org/abs/2205.1… ☆136 · Updated 2 years ago
- The official repository for our paper "The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization". ☆34 · Updated 6 months ago
- ☆98 · Updated 2 years ago
- ☆31 · Updated 2 years ago
- No Parameters Left Behind: Sensitivity Guided Adaptive Learning Rate for Training Large Transformer Models (ICLR 2022) ☆29 · Updated 3 years ago
- The official code of EMNLP 2022, "SCROLLS: Standardized CompaRison Over Long Language Sequences". ☆69 · Updated last year
- Transformers at any scale ☆42 · Updated last year
- Staged Training for Transformer Language Models ☆33 · Updated 3 years ago
- ☆22 · Updated 3 years ago
- DEMix Layers for Modular Language Modeling ☆54 · Updated 4 years ago
- ☆11 · Updated last year
- A library for parameter-efficient and composable transfer learning for NLP with sparse fine-tunings. ☆75 · Updated last year
- Code for "Tracing Knowledge in Language Models Back to the Training Data" ☆39 · Updated 3 years ago
- ☆51 · Updated last year
- Official Repository of Pretraining Without Attention (BiGS); BiGS is the first model to achieve BERT-level transfer learning on the GLUE … ☆116 · Updated last year
- Repo for the ICML 2023 paper "Why do Nearest Neighbor Language Models Work?" ☆59 · Updated 2 years ago
- ☆57 · Updated last year
- [NeurIPS 2023 spotlight] Official implementation of HGRN in our NeurIPS 2023 paper - Hierarchically Gated Recurrent Neural Network for Se… ☆66 · Updated last year
- ☆85 · Updated last year
- ☆45 · Updated 2 years ago
- ☆76 · Updated last year