pytorch / torchtune
PyTorch native post-training library
☆5,014 · Updated this week
Alternatives and similar repositories for torchtune:
Users interested in torchtune also compare it to the libraries listed below; a brief LoRA sketch illustrating the shared post-training theme follows the list.
- Tools for merging pretrained large language models. ☆5,424 · Updated this week
- A PyTorch native library for large model training ☆3,470 · Updated this week
- SGLang is a fast serving framework for large language models and vision language models. ☆12,220 · Updated this week
- Modeling, training, eval, and inference code for OLMo ☆5,401 · Updated this week
- Go ahead and axolotl questions ☆8,882 · Updated this week
- A framework for few-shot evaluation of language models. ☆8,270 · Updated this week
- Robust recipes to align language models with human and AI preferences ☆5,056 · Updated 4 months ago
- Simple and efficient pytorch-native transformer text generation in <1000 LOC of python. ☆5,892 · Updated last week
- 20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale. ☆11,810 · Updated this week
- Train transformer language models with reinforcement learning. ☆12,591 · Updated this week
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks. ☆2,299 · Updated 2 weeks ago
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆2,861 · Updated this week
- Accessible large language models via k-bit quantization for PyTorch. ☆6,818 · Updated this week
- Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs ☆2,802 · Updated 2 weeks ago
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. ☆5,862 · Updated this week
- verl: Volcano Engine Reinforcement Learning for LLMs ☆5,399 · Updated this week
- AllenAI's post-training codebase ☆2,804 · Updated this week
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ☆1,520 · Updated 4 months ago
- Minimal, clean code for the Byte Pair Encoding (BPE) algorithm commonly used in LLM tokenization. ☆9,502 · Updated 8 months ago
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆8,324 · Updated 10 months ago
- Welcome to the Llama Cookbook! This is your go-to guide for Building with Llama: Getting started with Inference, Fine-Tuning, RAG. We als… ☆16,483 · Updated this week
- NanoGPT (124M) in 3 minutes ☆2,403 · Updated this week
- Large Language Model Text Generation Inference ☆9,905 · Updated this week
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks ☆6,830 · Updated 8 months ago
- Minimalistic large language model 3D-parallelism training ☆1,701 · Updated this week
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆2,568 · Updated this week
- Curated list of datasets and tools for post-training. ☆2,844 · Updated last month
- Fast and memory-efficient exact attention ☆16,370 · Updated this week
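
torchtune and several of the fine-tuning and alignment libraries above center on post-training, where LoRA-style adapters are a recurring building block. The sketch below is a minimal, self-contained illustration of that idea in plain PyTorch, assuming nothing about any listed library's API: a frozen linear layer plus a trainable low-rank update scaled by alpha/r. The class name `LoRALinear` and all hyperparameters are illustrative.

```python
# Minimal LoRA adapter sketch in plain PyTorch — an illustration of the low-rank
# fine-tuning idea used by many of the post-training libraries above, NOT the API
# of torchtune or any other listed project. Names and defaults are illustrative.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """A frozen linear layer with a trainable low-rank update: y = W x + (alpha/r) * B A x."""

    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)                           # freeze the pretrained weight
        self.lora_a = nn.Parameter(torch.randn(r, in_features) * 0.01)   # A: (r, in), small random init
        self.lora_b = nn.Parameter(torch.zeros(out_features, r))         # B: (out, r), zero init -> adapter starts as a no-op
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Base projection plus the scaled low-rank correction; only A and B receive gradients.
        return self.base(x) + self.scaling * (x @ self.lora_a.T @ self.lora_b.T)


if __name__ == "__main__":
    layer = LoRALinear(64, 64, r=4)
    out = layer(torch.randn(2, 64))
    out.sum().backward()
    print(out.shape)                       # torch.Size([2, 64])
    print(layer.base.weight.grad is None)  # True: the base weight stays frozen
```

Zero-initializing B means the adapter contributes nothing at the start, so fine-tuning begins exactly at the pretrained model's behavior; the libraries above wrap this pattern, along with quantization, distributed training, and full recipes, into production-ready tooling.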