meta-llama / llama-cookbook
Welcome to the Llama Cookbook! This is your go-to guide for building with Llama: getting started with inference, fine-tuning, and RAG. We also show you how to solve end-to-end problems with the Llama model family and how to use the models on various provider services.
★ 17,954 · Updated this week
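For context on what "getting started with inference" typically involves, here is a minimal sketch using the Hugging Face `transformers` pipeline API. The checkpoint id, device placement, and generation parameters are illustrative assumptions, not code taken from the cookbook itself.

```python
# Minimal sketch of Llama inference with the Hugging Face `transformers` pipeline.
# The model id below is an assumption for illustration; the cookbook's own recipes
# may use different checkpoints and loading code.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.1-8B-Instruct",  # assumed/illustrative checkpoint
    device_map="auto",                         # place weights on available GPU(s)
)

output = generator(
    "Summarize retrieval-augmented generation in one sentence.",
    max_new_tokens=64,
)
print(output[0]["generated_text"])
```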
Alternatives and similar repositories for llama-cookbook
Users interested in llama-cookbook are comparing it to the libraries listed below.
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ★ 19,832 · Updated this week
- Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" ★ 12,813 · Updated 10 months ago
- Inference code for Llama models ★ 58,837 · Updated 8 months ago
- Large Language Model Text Generation Inference ★ 10,580 · Updated last month
- Inference code for CodeLlama models ★ 16,365 · Updated last year
- Train transformer language models with reinforcement learning. ★ 15,934 · Updated this week
- QLoRA: Efficient Finetuning of Quantized LLMs ★ 10,697 · Updated last year
- PyTorch native post-training library ★ 5,535 · Updated last week
- TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and support state-of-the-art optimizati… ★ 11,880 · Updated this week
- 20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale. ★ 12,840 · Updated last week
- Go ahead and axolotl questions ★ 10,634 · Updated this week
- SGLang is a fast serving framework for large language models and vision language models. ★ 18,897 · Updated this week
- A framework for few-shot evaluation of language models. ★ 10,373 · Updated this week
- The official Meta Llama 3 GitHub site ★ 29,040 · Updated 8 months ago
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond. ★ 23,763 · Updated last year
- A series of large language models trained from scratch by developers @01-ai ★ 7,844 · Updated 10 months ago
- An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena. ★ 39,161 · Updated 4 months ago
- Accessible large language models via k-bit quantization for PyTorch. ★ 7,659 · Updated 2 weeks ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ★ 60,385 · Updated this week
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ★ 8,775 · Updated last year
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. ★ 7,174 · Updated this week
- An easy-to-use LLMs quantization package with user-friendly APIs, based on the GPTQ algorithm. ★ 4,965 · Updated 6 months ago
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks ★ 7,070 · Updated last year
- LLMs built upon Evol Instruct: WizardLM, WizardCoder, WizardMath ★ 9,454 · Updated 4 months ago
- Retrieval and retrieval-augmented LLMs ★ 10,694 · Updated last week
- LlamaIndex is the leading framework for building LLM-powered agents over your data. ★ 44,778 · Updated this week
- Modeling, training, eval, and inference code for OLMo ★ 6,044 · Updated this week
- Robust recipes to align language models with human and AI preferences ★ 5,398 · Updated last month
- Official release of the InternLM series (InternLM, InternLM2, InternLM2.5, InternLM3). ★ 7,075 · Updated 2 months ago
- A next-generation training engine built for ultra-large MoE models ★ 4,937 · Updated this week