Code for our EMNLP 2023 Paper: "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models"
☆1,229 · Updated Mar 10, 2024
Alternatives and similar repositories for LLM-Adapters
Users that are interested in LLM-Adapters are comparing it to the libraries listed below.
- RecAlpaca: A simple framework combining Alpaca and Recommendations. ☆34 · Updated Mar 30, 2023
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters ☆5,932 · Updated Mar 14, 2024
- Code for our Paper "All in an Aggregated Image for In-Image Learning" ☆30 · Updated Apr 9, 2024
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆20,841 · Updated Mar 18, 2026
- A Unified Library for Parameter-Efficient and Modular Transfer Learning ☆2,804 · Updated this week
- Instruct-tune LLaMA on consumer hardware ☆18,961 · Updated Jul 29, 2024
- Implementation of paper "Towards a Unified View of Parameter-Efficient Transfer Learning" (ICLR 2022) ☆543 · Updated Mar 24, 2022
- Instruction Tuning with GPT-4 ☆4,337 · Updated Jun 11, 2023
- An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All. ☆8,493 · Updated this week
- Open Academic Research on Improving LLaMA to SOTA LLM ☆1,610 · Updated Aug 30, 2023
- A framework for few-shot evaluation of language models. ☆11,802 · Updated Mar 18, 2026
- A plug-and-play library for parameter-efficient-tuning (Delta Tuning) ☆1,041 · Updated Sep 19, 2024
- We unified the interfaces of instruction-tuning data (e.g., CoT data), multiple LLMs and parameter-efficient methods (e.g., lora, p-tunin… ☆2,801 · Updated Dec 12, 2023
- Benchmarking large language models' complex reasoning ability with chain-of-thought prompting ☆2,769 · Updated Aug 4, 2024
- Code and documentation to train Stanford's Alpaca models, and generate the data. ☆30,258 · Updated Jul 17, 2024
- Train transformer language models with reinforcement learning. ☆17,781 · Updated this week
- An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena. ☆39,445 · Updated Jun 2, 2025
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" ☆124 · Updated Apr 28, 2024
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,858 · Updated Jun 10, 2024
- ⚡LLM Zoo is a project that provides data, models, and evaluation benchmark for large language models.⚡ ☆2,952 · Updated Nov 26, 2023
- [ICML2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation ☆946 · Updated Oct 1, 2024
- Aligning pretrained language models with instruction data generated by themselves. ☆4,587 · Updated Mar 27, 2023
- [ACL 2024] An Easy-to-use Knowledge Editing Framework for LLMs. ☆2,745 · Updated Mar 4, 2026
- A trend starts from "Chain of Thought Prompting Elicits Reasoning in Large Language Models". ☆2,101 · Updated Oct 5, 2023
- Reading list of Instruction-tuning. A trend starts from Natural-Instruction (ACL 2022), FLAN (ICLR 2022) and T0 (ICLR 2022). ☆767 · Updated Jul 20, 2023
- Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" ☆13,351 · Updated Dec 17, 2024
- Reading list of hallucination in LLMs. Check out our new survey paper: "Siren's Song in the AI Ocean: A Survey on Hallucination in Large … ☆1,077 · Updated Sep 27, 2025
- [NIPS2023] RRHF & Wombat ☆808 · Updated Sep 22, 2023
- ☆274 · Updated Oct 31, 2023
- Awesome-Low-Rank-Adaptation ☆127 · Updated Oct 13, 2024
- An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO & REINFORCE++ & TIS & vLLM & Ray & Async RL) ☆9,231 · Updated this week
- AllenAI's post-training codebase ☆3,643 · Updated this week
- BELLE: Be Everyone's Large Language model Engine (an open-source Chinese conversational large language model) ☆8,286 · Updated Oct 16, 2024
- ☆177 · Updated Jul 22, 2024
- AutoPEFT: Automatic Configuration Search for Parameter-Efficient Fine-Tuning (Zhou et al.; TACL 2024) ☆51 · Updated Mar 17, 2024
- Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Ad… ☆6,083 · Updated Jul 1, 2025
- An Open-source Toolkit for LLM Development ☆2,806 · Updated Jan 13, 2025
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition ☆668 · Updated Jul 22, 2024
- Collaborative Training of Large Language Models in an Efficient Way ☆419 · Updated Aug 28, 2024
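Many of the repositories above (loralib, PEFT, QLoRA, DoRA, LoraHub) build on the same LoRA idea: keep the pretrained weight W frozen and train only a low-rank update scaled by alpha/r. As a quick orientation, here is a minimal NumPy sketch of that update (variable names are illustrative; this is not the loralib or PEFT API):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha, r):
    """Forward pass with a LoRA adapter: y = x @ (W + (alpha/r) * B @ A).T,
    computed without materializing the merged weight matrix."""
    base = x @ W.T                          # frozen pretrained projection
    delta = (x @ A.T) @ B.T * (alpha / r)   # trainable low-rank path
    return base + delta

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 16, 2, 4
W = rng.standard_normal((d_out, d_in))   # frozen pretrained weight
A = rng.standard_normal((r, d_in))       # trainable down-projection
B = np.zeros((d_out, r))                 # trainable up-projection, zero-init
x = rng.standard_normal((3, d_in))

# With B initialized to zero, the adapter is a no-op at the start of
# training, so the model reproduces the pretrained outputs exactly.
y = lora_forward(x, W, A, B, alpha, r)
```

After training, the update can be merged into W once (W + (alpha/r) * B @ A), so inference pays no extra cost; that merged form is what "merging adapters" refers to in several of the libraries listed above.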