PiotrNawrot / dynamic-pooling
Efficient Transformers with Dynamic Token Pooling
☆63 · Updated 2 years ago
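For context, dynamic token pooling shortens the sequence inside the transformer by predicting segment boundaries and pooling each segment of hidden states into a single vector before the middle layers. Below is a minimal PyTorch sketch of just the pooling step, assuming a boundary mask has already been predicted; the function name and shapes are illustrative, not the repository's actual API.

```python
import torch

def pool_segments(hidden: torch.Tensor, starts: torch.Tensor) -> torch.Tensor:
    """Mean-pool contiguous token segments (illustrative, not the repo's API).

    hidden: (seq_len, d_model) token representations.
    starts: (seq_len,) bool mask, True where a new segment begins
            (starts[0] must be True).
    Returns (num_segments, d_model) pooled representations.
    """
    seg_ids = torch.cumsum(starts.long(), dim=0) - 1        # 0-based segment id per token
    num_segments = int(seg_ids[-1].item()) + 1
    summed = torch.zeros(num_segments, hidden.size(1), dtype=hidden.dtype)
    summed.index_add_(0, seg_ids, hidden)                   # sum tokens within each segment
    counts = torch.bincount(seg_ids, minlength=num_segments).unsqueeze(1)
    return summed / counts                                  # segment means

# Example: 6 tokens pooled into 3 segments of lengths 2, 3, 1.
h = torch.randn(6, 8)
starts = torch.tensor([True, False, True, False, False, True])
print(pool_segments(h, starts).shape)  # torch.Size([3, 8])
```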
Alternatives and similar repositories for dynamic-pooling
Users interested in dynamic-pooling are comparing it to the repositories listed below.
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers", NeurIPS 2023 ☆136 · Updated last year
- The original Backpack Language Model implementation, a fork of FlashAttention ☆69 · Updated 2 years ago
- Evaluation pipeline for the BabyLM Challenge 2023. ☆77 · Updated last year
- Randomized Positional Encodings Boost Length Generalization of Transformers ☆82 · Updated last year
- ☆66 · Updated last year
- FairSeq repo with Apollo optimizer ☆114 · Updated last year
- Recurrent Memory Transformer ☆150 · Updated 2 years ago
- Implementation of the paper AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning (https://arxiv.org/abs/2205.1…) ☆132 · Updated 2 years ago
- ☆129 · Updated 3 years ago
- [NeurIPS 2023 spotlight] Official implementation of HGRN in our NeurIPS 2023 paper - Hierarchically Gated Recurrent Neural Network for Se… ☆66 · Updated last year
- ☆45 · Updated last year
- A Toolkit for Distributional Control of Generative Models ☆73 · Updated 3 weeks ago
- ☆19 · Updated 2 years ago
- ☆97 · Updated 2 years ago
- Transformers at any scale ☆41 · Updated last year
- DiffusER: Discrete Diffusion via Edit-based Reconstruction (Reid, Hellendoorn & Neubig, 2022) ☆54 · Updated 3 weeks ago
- ☆75 · Updated last year
- The official repository for our paper "The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization". ☆33 · Updated 2 months ago
- Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models ☆142 · Updated 2 years ago
- ☆51 · Updated 2 years ago
- ☆11 · Updated last year
- ☆31 · Updated 2 years ago
- 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX. ☆82 · Updated 3 years ago
- Official repository of Pretraining Without Attention (BiGS). BiGS is the first model to achieve BERT-level transfer learning on the GLUE … ☆114 · Updated last year
- Official code and model checkpoints for our EMNLP 2022 paper "RankGen - Improving Text Generation with Large Ranking Models" (https://arx…) ☆137 · Updated 2 years ago
- Implementation of Memformer, a Memory-augmented Transformer, in Pytorch ☆120 · Updated 4 years ago
- No Parameters Left Behind: Sensitivity Guided Adaptive Learning Rate for Training Large Transformer Models (ICLR 2022) ☆30 · Updated 3 years ago
- Skill-It! A Data-Driven Skills Framework for Understanding and Training Language Models ☆46 · Updated last year
- ☆49 · Updated last year
- ☆56 · Updated last year