A Unified Library for Parameter-Efficient and Modular Transfer Learning
⭐ 2,812 · Apr 26, 2026 · Updated last week
Alternatives and similar repositories for adapters
Users interested in adapters are comparing it to the libraries listed below.
- Implementation of the paper "Towards a Unified View of Parameter-Efficient Transfer Learning" (ICLR 2022) · ⭐ 544 · Mar 24, 2022 · Updated 4 years ago
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning · ⭐ 21,052 · Updated this week
- A plug-and-play library for parameter-efficient-tuning (Delta Tuning) · ⭐ 1,042 · Sep 19, 2024 · Updated last year
- ARCHIVED. Please use https://docs.adapterhub.ml/huggingface_hub.html || 🔌 A central repository collecting pre-trained adapter modules · ⭐ 69 · May 26, 2024 · Updated last year
- State-of-the-Art Text Embeddings · ⭐ 18,615 · Updated this week
- Prefix-Tuning: Optimizing Continuous Prompts for Generation · ⭐ 962 · Apr 26, 2024 · Updated 2 years ago
- ⭐ 506 · Oct 25, 2023 · Updated 2 years ago
- Facebook AI Research Sequence-to-Sequence Toolkit written in Python · ⭐ 32,212 · Sep 30, 2025 · Updated 7 months ago
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters · ⭐ 5,928 · Mar 14, 2024 · Updated 2 years ago
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities · ⭐ 22,114 · Jan 23, 2026 · Updated 3 months ago
- Data augmentation for NLP · ⭐ 4,656 · Jun 24, 2024 · Updated last year
- Toolkit for creating, sharing and using natural language prompts · ⭐ 3,010 · Oct 23, 2023 · Updated 2 years ago
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… · ⭐ 9,639 · Updated this week
- Train transformer language models with reinforcement learning · ⭐ 18,193 · Updated this week
- This repository contains the code for "Exploiting Cloze Questions for Few-Shot Text Classification and Natural Language Inference" · ⭐ 1,627 · Jun 12, 2023 · Updated 2 years ago
- A modular RL library to fine-tune language models to human preferences · ⭐ 2,387 · Mar 1, 2024 · Updated 2 years ago
- An Open-Source Framework for Prompt-Learning · ⭐ 4,860 · Jul 16, 2024 · Updated last year
- Library for Knowledge Intensive Language Tasks · ⭐ 971 · Mar 31, 2022 · Updated 4 years ago
- jiant is an NLP toolkit · ⭐ 1,676 · Jul 6, 2023 · Updated 2 years ago
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) · ⭐ 4,745 · Jan 8, 2024 · Updated 2 years ago
- Robust recipes to align language models with human and AI preferences · ⭐ 5,593 · Apr 8, 2026 · Updated 3 weeks ago
- Accessible large language models via k-bit quantization for PyTorch · ⭐ 8,168 · Apr 20, 2026 · Updated 2 weeks ago
- Must-read papers on prompt-based tuning for pre-trained language models · ⭐ 4,301 · Jul 17, 2023 · Updated 2 years ago
- Transformers for Information Retrieval, Text Classification, NER, QA, Language Modelling, Language Generation, T5, Multi-Modal, and Conve… · ⭐ 4,239 · Aug 25, 2025 · Updated 8 months ago
- Efficient few-shot learning with Sentence Transformers · ⭐ 2,724 · Apr 17, 2026 · Updated 2 weeks ago
- BertViz: Visualize Attention in Transformer Models · ⭐ 8,035 · Jan 8, 2026 · Updated 3 months ago
- [EMNLP 2021] SimCSE: Simple Contrastive Learning of Sentence Embeddings https://arxiv.org/abs/2104.08821 · ⭐ 3,651 · Oct 16, 2024 · Updated last year
- Plug and Play Language Model implementation. Allows steering the topic and attributes of GPT-2 models · ⭐ 1,154 · Feb 20, 2024 · Updated 2 years ago
- Code for T-Few from "Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning" · ⭐ 456 · Sep 6, 2023 · Updated 2 years ago
- Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer" · ⭐ 6,513 · Jan 14, 2026 · Updated 3 months ago
- A framework for few-shot evaluation of language models · ⭐ 12,411 · Updated this week
- ⭐ 131 · Aug 18, 2022 · Updated 3 years ago
- Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" · ⭐ 13,488 · Dec 17, 2024 · Updated last year
- Code for our EMNLP 2023 paper "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models" · ⭐ 1,233 · Mar 10, 2024 · Updated 2 years ago
- TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP https://textattack.readthedocs… · ⭐ 3,409 · Apr 17, 2026 · Updated 2 weeks ago
- Pyserini is a Python toolkit for reproducible information retrieval research with sparse and dense representations · ⭐ 2,051 · Updated this week
- [ACL 2021] LM-BFF: Better Few-shot Fine-tuning of Language Models https://arxiv.org/abs/2012.15723 · ⭐ 729 · Aug 29, 2022 · Updated 3 years ago
- Longformer: The Long-Document Transformer · ⭐ 2,194 · Feb 8, 2023 · Updated 3 years ago
- Collection of Tools and Papers related to Adapters / Parameter-Efficient Transfer Learning / Fine-Tuning · ⭐ 204 · May 4, 2024 · Updated 2 years ago
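Several of the libraries above (loralib, 🤗 PEFT, LLM-Adapters) center on LoRA-style parameter-efficient fine-tuning: the pretrained weight is frozen and only a low-rank update is trained. The sketch below illustrates that idea in dependency-free Python; `LoRALinear`, `matvec`, and all shapes are illustrative stand-ins, not any listed library's actual API.

```python
# Toy sketch of the LoRA idea (low-rank adaptation): freeze the pretrained
# weight W and train only a low-rank update B @ A. Pure-Python lists stand
# in for tensors; names and shapes here are illustrative only.

def matvec(m, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(row[j] * v[j] for j in range(len(v))) for row in m]

class LoRALinear:
    def __init__(self, W, r, alpha):
        self.W = W                                    # frozen pretrained weight, shape (out, in)
        out_dim, in_dim = len(W), len(W[0])
        self.A = [[0.0] * in_dim for _ in range(r)]   # trainable, shape (r, in); random init in practice
        self.B = [[0.0] * r for _ in range(out_dim)]  # trainable, shape (out, r); zero init,
                                                      # so the update starts as a no-op
        self.scale = alpha / r                        # common LoRA scaling convention

    def forward(self, x):
        base = matvec(self.W, x)                      # frozen path: W @ x
        delta = matvec(self.B, matvec(self.A, x))     # low-rank path: B @ (A @ x)
        return [b + self.scale * d for b, d in zip(base, delta)]

    def trainable_params(self):
        # Only A and B are updated; W stays frozen.
        return sum(len(row) for row in self.A) + sum(len(row) for row in self.B)

# An 8x8 identity stands in for a pretrained weight; rank-1 update.
W = [[1.0 if i == j else 0.0 for j in range(8)] for i in range(8)]
layer = LoRALinear(W, r=1, alpha=2)

# B is zero-initialized, so the adapted layer initially matches the base layer.
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
print(layer.forward(x))           # == W @ x
print(layer.trainable_params())   # 8 + 8 = 16, vs. 64 entries in W itself
```

Even in this toy setting the trainable parameter count drops from 64 to 16; for a real 4096-wide transformer projection with a small rank, the same arithmetic is what makes the libraries above cheap to fine-tune.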