benzakenelad / BitFit
Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models
☆142 · Updated 2 years ago
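BitFit fine-tunes only the bias terms of a pretrained transformer while keeping all other weights frozen. Below is a minimal sketch of that bias-only idea, not the repository's own API; the Hugging Face model name and loading call are illustrative assumptions.

```python
# Sketch of bias-only fine-tuning (the idea behind BitFit), assuming a
# Hugging Face masked language model; not this repository's own code.
import torch
from transformers import AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

for name, param in model.named_parameters():
    # Keep only parameters whose name contains "bias" trainable;
    # freeze every weight matrix, embedding, and layer-norm scale.
    param.requires_grad = "bias" in name

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable params: {trainable} / {total} ({100 * trainable / total:.2f}%)")
```

Any standard optimizer and training loop can then be used unchanged; only the bias vectors (a fraction of a percent of the model's parameters) receive gradient updates.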
Alternatives and similar repositories for BitFit
Users that are interested in BitFit are comparing it to the libraries listed below
- ☆128 · Updated 2 years ago
- The original Backpack Language Model implementation, a fork of FlashAttention ☆67 · Updated 2 years ago
- This is the official repository for "Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts" (EMNLP 2022) ☆101 · Updated 2 years ago
- ☆85 · Updated 2 years ago
- This is the implementation of the paper AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning (https://arxiv.org/abs/2205.1… ☆132 · Updated last year
- Contrastive decoding ☆201 · Updated 2 years ago
- ☆67 · Updated 3 years ago
- [ICML 2023] Code for our paper “Compositional Exemplars for In-context Learning”. ☆100 · Updated 2 years ago
- Code for "SemDeDup", a simple method for identifying and removing semantic duplicates from a dataset (data pairs which are semantically s… ☆136 · Updated last year
- Code for paper "UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning", ACL 2022 ☆59 · Updated 3 years ago
- ☆131 · Updated 10 months ago
- One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning ☆39 · Updated last year
- Princeton NLP's pre-training library based on fairseq with DeepSpeed kernel integration 🚃 ☆114 · Updated 2 years ago
- DEMix Layers for Modular Language Modeling ☆53 · Updated 3 years ago
- ☆50 · Updated last year
- ☆34 · Updated 2 years ago
- Implementation of ICML 23 Paper: Specializing Smaller Language Models towards Multi-Step Reasoning. ☆131 · Updated last year
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆51 · Updated 2 years ago
- Code release for Dataless Knowledge Fusion by Merging Weights of Language Models (https://openreview.net/forum?id=FCnohuR6AnM) ☆89 · Updated last year
- Repo for the paper "Large Language Models Struggle to Learn Long-Tail Knowledge" ☆77 · Updated 2 years ago
- [ICML 2024] Selecting High-Quality Data for Training Language Models ☆175 · Updated 11 months ago
- ☆78 · Updated 2 years ago
- The accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories". Mor Geva, Roei Schuster, Jonathan Berant, and Omer Le… ☆91 · Updated 3 years ago
- ☆54 · Updated 2 years ago
- Code for ACL 2023 paper: Pre-Training to Learn in Context ☆108 · Updated 10 months ago
- The official code of EMNLP 2022, "SCROLLS: Standardized CompaRison Over Long Language Sequences". ☆69 · Updated last year
- [ACL 2022] Structured Pruning Learns Compact and Accurate Models https://arxiv.org/abs/2204.00408 ☆195 · Updated 2 years ago
- ☆62 · Updated 2 years ago
- An original implementation of "MetaICL: Learning to Learn In Context" by Sewon Min, Mike Lewis, Luke Zettlemoyer and Hannaneh Hajishirzi ☆263 · Updated 2 years ago
- ☆95 · Updated last year