Code for our EMNLP 2023 Paper: "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models"
☆1,233 · Updated Mar 10, 2024
Alternatives and similar repositories for LLM-Adapters
Users interested in LLM-Adapters are comparing it to the libraries listed below.
- RecAlpaca: A simple framework combining Alpaca and Recommendations. ☆35 · Updated Mar 30, 2023
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters. ☆5,928 · Updated Mar 14, 2024
- Code for the paper "All in an Aggregated Image for In-Image Learning". ☆30 · Updated Apr 9, 2024
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆21,052 · Updated this week
- A Unified Library for Parameter-Efficient and Modular Transfer Learning. ☆2,812 · Updated Apr 26, 2026
- Instruct-tune LLaMA on consumer hardware. ☆18,945 · Updated Jul 29, 2024
- Implementation of the paper "Towards a Unified View of Parameter-Efficient Transfer Learning" (ICLR 2022). ☆544 · Updated Mar 24, 2022
- Instruction Tuning with GPT-4. ☆4,337 · Updated Jun 11, 2023
- An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All. ☆8,486 · Updated Apr 25, 2026
- Open Academic Research on Improving LLaMA to SOTA LLM. ☆1,606 · Updated Aug 30, 2023
- A framework for few-shot evaluation of language models. ☆12,411 · Updated this week
- A plug-and-play library for parameter-efficient tuning (Delta Tuning). ☆1,042 · Updated Sep 19, 2024
- We unified the interfaces of instruction-tuning data (e.g., CoT data), multiple LLMs and parameter-efficient methods (e.g., lora, p-tunin… ☆2,799 · Updated Dec 12, 2023
- Benchmarking large language models' complex reasoning ability with chain-of-thought prompting. ☆2,770 · Updated Aug 4, 2024
- Code and documentation to train Stanford's Alpaca models, and generate the data. ☆30,264 · Updated Jul 17, 2024
- Train transformer language models with reinforcement learning. ☆18,193 · Updated Apr 28, 2026
- An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena. ☆39,463 · Updated this week
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation". ☆123 · Updated Apr 28, 2024
- QLoRA: Efficient Finetuning of Quantized LLMs. ☆10,899 · Updated Jun 10, 2024
- ⚡LLM Zoo is a project that provides data, models, and evaluation benchmarks for large language models.⚡ ☆2,950 · Updated Nov 26, 2023
- [ICML 2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation. ☆966 · Updated Mar 24, 2026
- Aligning pretrained language models with instruction data generated by themselves. ☆4,595 · Updated Mar 27, 2023
- [ACL 2024] An Easy-to-use Knowledge Editing Framework for LLMs. ☆2,799 · Updated Apr 1, 2026
- A trend starts from "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models". ☆2,104 · Updated Oct 5, 2023
- Reading list on instruction tuning. A trend starts from Natural Instructions (ACL 2022), FLAN (ICLR 2022) and T0 (ICLR 2022). ☆767 · Updated Jul 20, 2023
- Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models". ☆13,488 · Updated Dec 17, 2024
- Reading list on hallucination in LLMs. Check out our new survey paper: "Siren's Song in the AI Ocean: A Survey on Hallucination in Large … ☆1,082 · Updated Sep 27, 2025
- [NeurIPS 2023] RRHF & Wombat. ☆808 · Updated Sep 22, 2023
- ☆277 · Updated Oct 31, 2023
- Awesome-Low-Rank-Adaptation. ☆128 · Updated Oct 13, 2024
- An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO & REINFORCE++ & VLM & TIS & vLLM & Ray & Asy… ☆9,441 · Updated this week
- AllenAI's post-training codebase. ☆3,708 · Updated this week
- BELLE: Be Everyone's Large Language model Engine (an open-source Chinese conversational LLM). ☆8,284 · Updated Oct 16, 2024
- ☆179 · Updated Jul 22, 2024
- AutoPEFT: Automatic Configuration Search for Parameter-Efficient Fine-Tuning (Zhou et al., TACL 2024). ☆51 · Updated Mar 17, 2024
- Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4-bit quantization, LoRA and LLaMA-Ad… ☆6,084 · Updated Jul 1, 2025
- An Open-source Toolkit for LLM Development. ☆2,802 · Updated Jan 13, 2025
- Collaborative Training of Large Language Models in an Efficient Way. ☆420 · Updated Aug 28, 2024
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition. ☆670 · Updated Jul 22, 2024
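Many of the repositories above (loralib, PEFT, QLoRA, DoRA, LoraHub) build on the same core LoRA idea: keep the pretrained weight W frozen and learn a low-rank update, giving an effective weight W + (α/r)·BA. As background only, here is a minimal NumPy sketch of that computation; all dimensions and names are hypothetical and this is not code from any listed repository:

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 8, 2, 4  # hypothetical layer sizes, rank, scaling

W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight (not trained)
A = rng.normal(size=(r, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, r))               # zero-initialized, so training starts
                                       # from the original pretrained behavior

def lora_forward(x):
    """LoRA-adapted linear layer: (W + (alpha/r) * B @ A) @ x."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# With B still zero, the adapter is a no-op and the output matches the frozen layer.
assert np.allclose(lora_forward(x), W @ x)
```

Only A and B (r·(d_in + d_out) parameters) would be trained, which is why these methods are called parameter-efficient; DoRA and LoraHub then vary how the update is decomposed or composed across tasks.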