Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4-bit quantization, LoRA and LLaMA-Adapter fine-tuning, and pre-training. Apache 2.0-licensed.
☆6,084 · Updated Jul 1, 2025
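Since LoRA fine-tuning is the headline feature here, a rough sketch of the underlying idea may help: a frozen pretrained linear layer augmented with a trainable low-rank update. This is an illustration only, not lit-llama's actual code or API.

```python
# Minimal LoRA sketch (hypothetical, for illustration): freeze the base weight
# and learn a low-rank correction B @ A scaled by alpha / r.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)  # pretrained weight stays frozen
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))  # zero-init: no change at start
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # base projection plus low-rank update; only lora_A and lora_B receive gradients
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

layer = LoRALinear(4096, 4096)
out = layer(torch.randn(2, 16, 4096))
print(out.shape)  # torch.Size([2, 16, 4096])
```

Only the low-rank matrices are trained, which is why LoRA-style recipes fit consumer hardware; the same principle underlies several of the repositories listed below.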
Alternatives and similar repositories for lit-llama
Users interested in lit-llama are comparing it to the libraries listed below.
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters (☆5,928 · updated Mar 14, 2024)
- 20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale. (☆13,337 · updated this week)
- Instruct-tune LLaMA on consumer hardware (☆18,945 · updated Jul 29, 2024)
- OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA 7B trained on the RedPajama dataset (☆7,531 · updated Jul 16, 2023)
- Code and documentation to train Stanford's Alpaca models, and generate the data. (☆30,264 · updated Jul 17, 2024)
- QLoRA: Efficient Finetuning of Quantized LLMs (☆10,899 · updated Jun 10, 2024)
- An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena. (☆39,463 · updated this week)
- The simplest, fastest repository for training/finetuning medium-sized GPTs. (☆57,469 · updated Nov 12, 2025)