huggingface / peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
☆18,820 · Updated this week
Alternatives and similar repositories for peft
Users interested in peft are comparing it to the libraries listed below.
- Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" ☆12,119 · Updated 6 months ago
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,504 · Updated last year
- Train transformer language models with reinforcement learning. ☆14,281 · Updated this week
- Accessible large language models via k-bit quantization for PyTorch. ☆7,150 · Updated this week
- Fast and memory-efficient exact attention ☆17,952 · Updated last week
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond. ☆22,843 · Updated 10 months ago
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆8,860 · Updated this week
- Instruct-tune LLaMA on consumer hardware ☆18,918 · Updated 10 months ago
- Large Language Model Text Generation Inference ☆10,249 · Updated this week
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆39,075 · Updated this week
- Ongoing research training transformer models at scale ☆12,641 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆50,358 · Updated this week
- An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena. ☆38,751 · Updated 3 weeks ago
- A framework for few-shot evaluation of language models. ☆9,326 · Updated this week
- LAVIS - A One-stop Library for Language-Vision Intelligence ☆10,654 · Updated 7 months ago
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ☆21,420 · Updated 3 weeks ago
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… ☆13,722 · Updated last week
- Example models using DeepSpeed ☆6,540 · Updated this week
- Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024) ☆52,785 · Updated this week
- Go ahead and axolotl questions ☆9,715 · Updated this week
- Code and documentation to train Stanford's Alpaca models, and generate the data. ☆30,043 · Updated 11 months ago
- Retrieval and Retrieval-augmented LLMs ☆9,973 · Updated 3 weeks ago
- Aligning pretrained language models with instruction data generated by themselves. ☆4,396 · Updated 2 years ago
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters ☆5,883 · Updated last year
- tiktoken is a fast BPE tokeniser for use with OpenAI's models. ☆14,886 · Updated 3 months ago
- A state-of-the-art-level open visual language model (multimodal pretrained model) ☆6,596 · Updated last year
- 20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale. ☆12,293 · Updated last week
- PyTorch native post-training library ☆5,287 · Updated this week
- Latest Advances on Multimodal Large Language Models ☆15,578 · Updated last week
- The official repo of Qwen (通义千问) chat & pretrained large language model proposed by Alibaba Cloud. ☆18,538 · Updated last week