Alternatives and similar repositories for compacter (☆130, updated Aug 18, 2022)
Users interested in compacter are comparing it to the libraries listed below.
- ☆156, updated Aug 24, 2021
- Zero-shot Learning by Generating Task-specific Adapters (☆14, updated Apr 2, 2021)
- Code for the paper "UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning", ACL 2022 (☆63, updated Mar 23, 2022)
- Code for T-Few from "Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning" (☆456, updated Sep 6, 2023)
- [ICCV23] Official implementation of eP-ALM: Efficient Perceptual Augmentation of Language Models (☆27, updated Oct 27, 2023)
- [ICML 2023] Exploring the Benefits of Training Expert Language Models over Instruction Tuning (☆98, updated Apr 26, 2023)
- CIFAR-10-Warehouse: Towards Broad and More Realistic Testbeds in Model Generalization Analysis (☆18, updated Jul 15, 2024)
- Implementation of the paper "Towards a Unified View of Parameter-Efficient Transfer Learning" (ICLR 2022) (☆543, updated Mar 24, 2022)
- On the Effectiveness of Parameter-Efficient Fine-Tuning (☆38, updated Nov 4, 2023)
- Codebase for the SIMAT dataset and evaluation (☆38, updated Feb 16, 2022)
- ☆11, updated Jun 23, 2022
- VaLM: Visually-augmented Language Modeling, ICLR 2023 (☆56, updated Mar 6, 2023)
- Parameter Efficient Transfer Learning with Diff Pruning (☆75, updated Feb 3, 2021)
- Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models (☆143, updated Sep 4, 2022)
- Lite Self-Training (☆30, updated Jul 25, 2023)
- ☆25, updated Mar 4, 2022
- ACL 2021: HiTransformer (☆13, updated May 29, 2021)
- A Unified Library for Parameter-Efficient and Modular Transfer Learning (☆2,801, updated Oct 12, 2025)
- A plug-and-play library for parameter-efficient tuning (Delta Tuning) (☆1,039, updated Sep 19, 2024)
- Staged Training for Transformer Language Models (☆33, updated Mar 31, 2022)
- Code associated with the paper "Entropy-based Attention Regularization Frees Unintended Bias Mitigation from Lists" (☆50, updated May 31, 2022)
- Code for the NAACL 2022 Findings long paper "AdapterBias: Parameter-efficient Token-dependent Representation Shift for Adapters in NL…" (☆18, updated May 4, 2022)
- Codebase for the EMNLP 2021 Findings paper "Cartography Active Learning" (☆14, updated Jun 3, 2025)
- [EMNLP 2021] Efficient Contrastive Learning via Novel Data Augmentation and Curriculum Learning (☆17, updated Jun 28, 2025)
- If CLIP Could Talk: Understanding Vision-Language Model Representations Through Their Preferred Concept Descriptions (☆17, updated Apr 4, 2024)
- AutoPEFT: Automatic Configuration Search for Parameter-Efficient Fine-Tuning (Zhou et al., TACL 2024) (☆51, updated Mar 17, 2024)
- ☆22, updated May 3, 2022
- [NeurIPS 2023] Official PyTorch code for LOVM: Language-Only Vision Model Selection (☆21, updated Feb 3, 2024)
- ☆77, updated Apr 29, 2024
- Pile Deduplication Code (☆18, updated May 15, 2023)
- The accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories" by Mor Geva, Roei Schuster, Jonathan Berant, and Omer Le… (☆99, updated Sep 5, 2021)
- ☆73, updated Jun 3, 2022
- EMNLP 2021: Frustratingly Simple Pretraining Alternatives to Masked Language Modeling (☆34, updated Nov 21, 2021)
- An example of how to use spaCy for extremely large files without running into memory issues (☆36, updated Sep 17, 2022)
- Extended Intramodal and Intermodal Semantic Similarity Judgments for MS-COCO (☆54, updated Sep 3, 2020)
- ☆54, updated May 8, 2023
- (ACL-IJCNLP 2021) Convolutions and Self-Attention: Re-interpreting Relative Positions in Pre-trained Language Models (☆21, updated Jul 13, 2022)
- Code and files for the paper "Are Emergent Abilities in Large Language Models just In-Context Learning?" (☆33, updated Jan 9, 2025)
- Code for the Shortformer model, from the ACL 2021 paper by Ofir Press, Noah A. Smith, and Mike Lewis (☆147, updated Jul 26, 2021)