Lightning-AI / litgpt
20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale.
☆12,564 · Updated last week
Alternatives and similar repositories for litgpt
Users interested in litgpt are comparing it to the libraries listed below:
- Go ahead and axolotl questions ☆10,038 · Updated last week
- DSPy: The framework for programming—not prompting—language models ☆26,824 · Updated this week
- Tools for merging pretrained large language models. ☆6,122 · Updated this week
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆8,667 · Updated last year
- PyTorch native post-training library ☆5,366 · Updated this week
- Welcome to the Llama Cookbook! This is your go to guide for Building with Llama: Getting started with Inference, Fine-Tuning, RAG. We als… ☆17,686 · Updated this week
- Robust recipes to align language models with human and AI preferences ☆5,289 · Updated this week
- Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Ad… ☆6,081 · Updated last month
- Large Language Model Text Generation Inference ☆10,367 · Updated last week
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,583 · Updated last year
- Simple and efficient pytorch-native transformer text generation in <1000 LOC of python. ☆6,036 · Updated 3 months ago
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks ☆6,949 · Updated last year
- Accessible large language models via k-bit quantization for PyTorch. ☆7,400 · Updated last week
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆19,184 · Updated this week
- Train transformer language models with reinforcement learning. ☆14,736 · Updated this week
- A framework for few-shot evaluation of language models. ☆9,706 · Updated this week
- Structured Outputs ☆12,188 · Updated this week
- Minimal, clean code for the Byte Pair Encoding (BPE) algorithm commonly used in LLM tokenization. ☆9,786 · Updated last year
- Modeling, training, eval, and inference code for OLMo ☆5,822 · Updated last week
- Gorilla: Training and Evaluating LLMs for Function Calls (Tool Calls) ☆12,288 · Updated this week
- tiktoken is a fast BPE tokeniser for use with OpenAI's models. ☆15,221 · Updated 4 months ago
- Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train Qwen3, Llama 4, DeepSeek-R1, Gemma 3, TTS 2x faster with 70% less VRAM. ☆42,983 · Updated this week
- ☆4,087 · Updated last year
- ☆2,990 · Updated 10 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆43,305 · Updated 7 months ago
- [EMNLP'23, ACL'24] To speed up LLM inference and enhance LLMs' perception of key information, compress the prompt and KV-Cache, which ach… ☆5,300 · Updated 4 months ago
- Inference Llama 2 in one file of pure C ☆18,597 · Updated 11 months ago
- Examples in the MLX framework ☆7,675 · Updated last month
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond. ☆23,180 · Updated 11 months ago
- LlamaIndex is the leading framework for building LLM-powered agents over your data. ☆43,322 · Updated last week