benzakenelad / BitFit
Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models
☆142 · Updated 2 years ago
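BitFit fine-tunes only the bias terms of a pretrained transformer while keeping all other weights frozen. A minimal sketch of that idea with PyTorch and Hugging Face Transformers is shown below; the model name, learning rate, and optimizer are illustrative assumptions, not settings taken from this repository.

```python
# BitFit-style sketch: train only bias parameters of a pretrained masked LM,
# freezing everything else. Model name and hyperparameters are assumptions.
import torch
from transformers import AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Freeze every parameter, then re-enable gradients for bias terms only.
for name, param in model.named_parameters():
    param.requires_grad = "bias" in name

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)

total = sum(p.numel() for p in model.parameters())
tuned = sum(p.numel() for p in trainable)
print(f"Tuning {tuned:,} of {total:,} parameters ({100 * tuned / total:.2f}%)")
```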
Alternatives and similar repositories for BitFit
Users interested in BitFit are comparing it to the libraries listed below.
- ☆129 · Updated 2 years ago
- This is the implementation of the paper AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning (https://arxiv.org/abs/2205.1… ☆132 · Updated last year
- The original Backpack Language Model implementation, a fork of FlashAttention ☆69 · Updated 2 years ago
- This is the official repository for "Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts" (EMNLP 2022) ☆102 · Updated 2 years ago
- Contrastive decoding ☆202 · Updated 2 years ago
- Retrieval as Attention ☆83 · Updated 2 years ago
- ☆87 · Updated 2 years ago
- ☆139 · Updated 11 months ago
- ☆52 · Updated last year
- An original implementation of "MetaICL: Learning to Learn In Context" by Sewon Min, Mike Lewis, Luke Zettlemoyer and Hannaneh Hajishirzi ☆269 · Updated 2 years ago
- This PyTorch package implements MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation (NAACL 2022). ☆108 · Updated 3 years ago
- Princeton NLP's pre-training library based on fairseq with DeepSpeed kernel integration 🚃 ☆114 · Updated 2 years ago
- ☆156 · Updated 3 years ago
- ☆95 · Updated last year
- [NAACL 2022] "Learning to Win Lottery Tickets in BERT Transfer via Task-agnostic Mask Training", Yuanxin Liu, Fandong Meng, Zheng Lin, Pe… ☆15 · Updated 2 years ago
- ☆78 · Updated 2 years ago
- [ICML 2023] Code for our paper "Compositional Exemplars for In-context Learning". ☆102 · Updated 2 years ago
- Code release for Dataless Knowledge Fusion by Merging Weights of Language Models (https://openreview.net/forum?id=FCnohuR6AnM) ☆89 · Updated last year
- DEMix Layers for Modular Language Modeling ☆53 · Updated 3 years ago
- Code for the ACL 2023 paper: Pre-Training to Learn in Context ☆107 · Updated 11 months ago
- MEND: Fast Model Editing at Scale ☆247 · Updated last year
- Code for the ACL 2022 paper "StableMoE: Stable Routing Strategy for Mixture of Experts" ☆47 · Updated 3 years ago
- Code for "SemDeDup", a simple method for identifying and removing semantic duplicates from a dataset (data pairs which are semantically s… ☆138 · Updated last year
- [ACL 2022] Structured Pruning Learns Compact and Accurate Models https://arxiv.org/abs/2204.00408 ☆196 · Updated 2 years ago
- One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning ☆40 · Updated 2 years ago
- A library for parameter-efficient and composable transfer learning for NLP with sparse fine-tunings. ☆74 · Updated 11 months ago
- Repo for the paper "Large Language Models Struggle to Learn Long-Tail Knowledge" ☆77 · Updated 2 years ago
- On Transferability of Prompt Tuning for Natural Language Processing ☆99 · Updated last year
- The accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories". Mor Geva, Roei Schuster, Jonathan Berant, and Omer Le… ☆94 · Updated 3 years ago
- Code for the paper "UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning", ACL 2022 ☆61 · Updated 3 years ago