huggingface / peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
⭐ 17,069 · Updated this week
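For orientation, here is a minimal sketch of peft's core workflow: wrapping a base 🤗 Transformers model with a LoRA adapter so that only the injected low-rank weights are trained. The model name and the rank/alpha/dropout values are illustrative placeholders, not recommendations.

```python
# Minimal PEFT/LoRA sketch: only the low-rank adapter weights are trainable.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder base model
config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                    task_type="CAUSAL_LM")           # illustrative hyperparameters
model = get_peft_model(base, config)
model.print_trainable_parameters()  # prints trainable vs. total parameter counts
```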
Alternatives and similar repositories for peft:
Users interested in peft are comparing it to the libraries listed below.
- Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" · ⭐ 11,177 · Updated last month
- Train transformer language models with reinforcement learning. · ⭐ 10,781 · Updated this week
- QLoRA: Efficient Finetuning of Quantized LLMs · ⭐ 10,179 · Updated 7 months ago
- Fast and memory-efficient exact attention · ⭐ 15,179 · Updated last week
- Accessible large language models via k-bit quantization for PyTorch (see the loading sketch after this list). · ⭐ 6,557 · Updated this week
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… · ⭐ 8,208 · Updated this week
- Large Language Model Text Generation Inference · ⭐ 9,646 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs (see the generation sketch after this list) · ⭐ 34,902 · Updated this week
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond. · ⭐ 21,204 · Updated 5 months ago
- Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024) · ⭐ 38,807 · Updated last week
- Welcome to the Llama Cookbook! This is your go-to guide for Building with Llama: Getting started with Inference, Fine-Tuning, RAG. We als… · ⭐ 16,042 · Updated this week
- Aligning pretrained language models with instruction data generated by themselves. · ⭐ 4,247 · Updated last year
- Retrieval and Retrieval-augmented LLMs · ⭐ 8,313 · Updated this week
- An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena. · ⭐ 37,577 · Updated this week
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. · ⭐ 36,356 · Updated this week
- tiktoken is a fast BPE tokeniser for use with OpenAI's models (see the tokenisation sketch after this list). · ⭐ 13,142 · Updated 3 months ago
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… · ⭐ 13,039 · Updated this week
- A framework for few-shot evaluation of language models. · ⭐ 7,562 · Updated this week
- Instruct-tune LLaMA on consumer hardware · ⭐ 18,777 · Updated 5 months ago
- An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All. · ⭐ 8,335 · Updated this week
- Example models using DeepSpeed · ⭐ 6,237 · Updated this week
- OpenLLaMA, a permissively licensed open source reproduction of Meta AI's LLaMA 7B trained on the RedPajama dataset · ⭐ 7,421 · Updated last year
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. · ⭐ 4,646 · Updated last week
- Code and documentation to train Stanford's Alpaca models, and generate the data. · ⭐ 29,758 · Updated 6 months ago
- Ongoing research training transformer models at scale · ⭐ 11,192 · Updated this week
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks · ⭐ 6,770 · Updated 6 months ago
- Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Ad… · ⭐ 6,022 · Updated 4 months ago
- Inference code for Llama models · ⭐ 57,327 · Updated 5 months ago
- State-of-the-Art Text Embeddings (see the embedding sketch after this list) · ⭐ 15,845 · Updated this week
- Finetune Llama 3.3, Mistral, Phi-4, Qwen 2.5 & Gemma LLMs 2-5x faster with 70% less memory · ⭐ 21,215 · Updated this week
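For the bitsandbytes entry above, a minimal sketch of k-bit loading through its 🤗 Transformers integration; the model name and compute dtype are placeholders, and a CUDA GPU is assumed.

```python
# Minimal sketch: load a causal LM with 4-bit quantized weights via bitsandbytes.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quant = BitsAndBytesConfig(load_in_4bit=True,
                           bnb_4bit_compute_dtype=torch.bfloat16)  # illustrative settings
model = AutoModelForCausalLM.from_pretrained("gpt2", quantization_config=quant)
```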
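For the vLLM entry, a minimal sketch of its offline batch-generation API; the model and sampling settings are placeholders.

```python
# Minimal sketch: offline batched generation with vLLM.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")                     # placeholder model
params = SamplingParams(temperature=0.8, max_tokens=64)  # illustrative sampling settings
for out in llm.generate(["What does PEFT stand for?"], params):
    print(out.outputs[0].text)
```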
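For the tiktoken entry, a minimal encode/decode round-trip sketch; cl100k_base is one of the library's published encodings.

```python
# Minimal sketch: BPE encode/decode round trip with tiktoken.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
ids = enc.encode("parameter-efficient fine-tuning")
assert enc.decode(ids) == "parameter-efficient fine-tuning"
print(f"{len(ids)} tokens")
```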
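For the sentence-transformers entry, a minimal embedding-and-similarity sketch; the checkpoint name is a placeholder.

```python
# Minimal sketch: encode sentences and compare them with cosine similarity.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder checkpoint
emb = model.encode(["LoRA fine-tuning", "low-rank adaptation"])
print(util.cos_sim(emb[0], emb[1]))  # cosine similarity of the two embeddings
```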