A Unified Library for Parameter-Efficient and Modular Transfer Learning
★ 2,810 · Mar 21, 2026 · Updated 3 weeks ago
Alternatives and similar repositories for adapters
Users interested in adapters are comparing it to the libraries listed below.
- Implementation of paper "Towards a Unified View of Parameter-Efficient Transfer Learning" (ICLR 2022) ★ 544 · Mar 24, 2022 · Updated 4 years ago
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ★ 20,929 · Updated this week
- A plug-and-play library for parameter-efficient-tuning (Delta Tuning) ★ 1,041 · Sep 19, 2024 · Updated last year
- ARCHIVED. Please use https://docs.adapterhub.ml/huggingface_hub.html || A central repository collecting pre-trained adapter modules ★ 69 · May 26, 2024 · Updated last year
- State-of-the-Art Text Embeddings ★ 18,534 · Updated this week
- Prefix-Tuning: Optimizing Continuous Prompts for Generation ★ 961 · Apr 26, 2024 · Updated last year
- ★ 506 · Oct 25, 2023 · Updated 2 years ago
- Facebook AI Research Sequence-to-Sequence Toolkit written in Python. ★ 32,201 · Sep 30, 2025 · Updated 6 months ago
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters ★ 5,928 · Mar 14, 2024 · Updated 2 years ago
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ★ 22,086 · Jan 23, 2026 · Updated 2 months ago
- Data augmentation for NLP ★ 4,656 · Jun 24, 2024 · Updated last year
- Toolkit for creating, sharing and using natural language prompts. ★ 3,007 · Oct 23, 2023 · Updated 2 years ago
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ★ 9,596 · Apr 2, 2026 · Updated last week
- Train transformer language models with reinforcement learning. ★ 17,967 · Apr 7, 2026 · Updated last week
- This repository contains the code for "Exploiting Cloze Questions for Few-Shot Text Classification and Natural Language Inference" ★ 1,627 · Jun 12, 2023 · Updated 2 years ago
- A modular RL library to fine-tune language models to human preferences ★ 2,384 · Mar 1, 2024 · Updated 2 years ago
- An Open-Source Framework for Prompt-Learning. ★ 4,850 · Jul 16, 2024 · Updated last year
- Library for Knowledge Intensive Language Tasks ★ 972 · Mar 31, 2022 · Updated 4 years ago
- jiant is an NLP toolkit ★ 1,676 · Jul 6, 2023 · Updated 2 years ago
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ★ 4,743 · Jan 8, 2024 · Updated 2 years ago
- Robust recipes to align language models with human and AI preferences ★ 5,558 · Updated this week
- Accessible large language models via k-bit quantization for PyTorch. ★ 8,107 · Updated this week
- Must-read papers on prompt-based tuning for pre-trained language models. ★ 4,298 · Jul 17, 2023 · Updated 2 years ago
- Transformers for Information Retrieval, Text Classification, NER, QA, Language Modelling, Language Generation, T5, Multi-Modal, and Conve… ★ 4,239 · Aug 25, 2025 · Updated 7 months ago
- Efficient few-shot learning with Sentence Transformers ★ 2,710 · Apr 2, 2026 · Updated last week
- BertViz: Visualize Attention in Transformer Models ★ 7,988 · Jan 8, 2026 · Updated 3 months ago
- [EMNLP 2021] SimCSE: Simple Contrastive Learning of Sentence Embeddings https://arxiv.org/abs/2104.08821 ★ 3,649 · Oct 16, 2024 · Updated last year
- Plug and Play Language Model implementation. Allows steering the topic and attributes of GPT-2 models. ★ 1,154 · Feb 20, 2024 · Updated 2 years ago
- Code for T-Few from "Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning" ★ 456 · Sep 6, 2023 · Updated 2 years ago
- A framework for few-shot evaluation of language models. ★ 12,138 · Updated this week
- Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer" ★ 6,503 · Jan 14, 2026 · Updated 3 months ago
- ★ 131 · Aug 18, 2022 · Updated 3 years ago
- Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" ★ 13,411 · Dec 17, 2024 · Updated last year
- Code for our EMNLP 2023 Paper: "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models" ★ 1,231 · Mar 10, 2024 · Updated 2 years ago
- TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP https://textattack.readthedocs… ★ 3,400 · Jul 10, 2025 · Updated 9 months ago
- Pyserini is a Python toolkit for reproducible information retrieval research with sparse and dense representations. ★ 2,046 · Updated this week
- [ACL 2021] LM-BFF: Better Few-shot Fine-tuning of Language Models https://arxiv.org/abs/2012.15723 ★ 730 · Aug 29, 2022 · Updated 3 years ago
- Longformer: The Long-Document Transformer ★ 2,190 · Feb 8, 2023 · Updated 3 years ago
- Collection of Tools and Papers related to Adapters / Parameter-Efficient Transfer Learning / Fine-Tuning ★ 203 · May 4, 2024 · Updated last year
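Several of the repositories above (loralib, 🤗 PEFT, LLM-Adapters) implement variants of low-rank adaptation, the technique behind many of these parameter-efficient fine-tuning tools. As a rough sketch of the shared idea, not any particular library's API: the base weight W is frozen, and a trainable low-rank update scaled by alpha/r is added, so the effective weight is W + (alpha/r)·B·A. A minimal NumPy illustration (all names and sizes hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: rank r is much smaller than d_in/d_out.
d_in, d_out, r, alpha = 16, 8, 4, 8

W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))               # trainable up-projection, zero-initialized

def forward(x):
    # Base output plus the scaled low-rank correction (alpha / r).
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
# With B zero-initialized, the adapted layer reproduces the base layer exactly,
# so fine-tuning starts from the pretrained model's behavior.
assert np.allclose(forward(x), W @ x)
```

Only A and B (r·(d_in + d_out) values) would be trained, versus d_in·d_out for full fine-tuning, which is why these libraries can adapt large models with a small fraction of the parameters.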