The hub for EleutherAI's work on interpretability and learning dynamics
☆2,745 · Nov 15, 2025 · Updated 4 months ago
Alternatives and similar repositories for pythia
Users interested in pythia are comparing it to the repositories listed below.
- An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries ☆7,400 · Feb 3, 2026 · Updated last month
- A framework for few-shot evaluation of language models. ☆11,704 · Mar 5, 2026 · Updated 2 weeks ago
- The RedPajama-Data repository contains code for preparing large datasets for training large language models. ☆4,929 · Dec 7, 2024 · Updated last year
- Large language models (LLMs) made easy; EasyLM is a one-stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Fl… ☆2,522 · Aug 13, 2024 · Updated last year
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ☆4,739 · Jan 8, 2024 · Updated 2 years ago
- OpenLLaMA, a permissively licensed open source reproduction of Meta AI's LLaMA 7B trained on the RedPajama dataset ☆7,537 · Jul 16, 2023 · Updated 2 years ago
- Modeling, training, eval, and inference code for OLMo ☆6,404 · Nov 24, 2025 · Updated 3 months ago
- Train transformer language models with reinforcement learning. ☆17,697 · Updated this week
- An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena. ☆39,428 · Jun 2, 2025 · Updated 9 months ago
- Beyond the Imitation Game collaborative benchmark for measuring and extrapolating the capabilities of language models ☆3,219 · Jul 19, 2024 · Updated last year
- LLMs built upon Evol Instruct: WizardLM, WizardCoder, WizardMath ☆9,478 · Jun 7, 2025 · Updated 9 months ago
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆20,809 · Updated this week
- Ongoing research training transformer models at scale ☆15,744 · Updated this week
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" ☆1,827 · Jun 17, 2025 · Updated 9 months ago
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,858 · Jun 10, 2024 · Updated last year
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… ☆14,419 · Mar 5, 2026 · Updated 2 weeks ago
- Instruct-tune LLaMA on consumer hardware ☆18,961 · Jul 29, 2024 · Updated last year
- Code and documentation to train Stanford's Alpaca models, and generate the data. ☆30,258 · Jul 17, 2024 · Updated last year
- AllenAI's post-training codebase ☆3,629 · Updated this week
- Aligning pretrained language models with instruction data generated by themselves. ☆4,587 · Mar 27, 2023 · Updated 2 years ago
- Accessible large language models via k-bit quantization for PyTorch. ☆8,052 · Updated this week
- Fast and memory-efficient exact attention ☆22,832 · Updated this week
- A library for mechanistic interpretability of GPT-style language models ☆3,183 · Mar 13, 2026 · Updated last week
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆8,922 · May 3, 2024 · Updated last year
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters ☆5,931 · Mar 14, 2024 · Updated 2 years ago
- Benchmarking large language models' complex reasoning ability with chain-of-thought prompting ☆2,769 · Aug 4, 2024 · Updated last year
- ☆1,560 · Updated this week
- Robust recipes to align language models with human and AI preferences ☆5,527 · Sep 8, 2025 · Updated 6 months ago
- Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Ad… ☆6,082 · Jul 1, 2025 · Updated 8 months ago
- Toolkit for creating, sharing and using natural language prompts. ☆3,006 · Oct 23, 2023 · Updated 2 years ago
- Minimalistic large language model 3D-parallelism training ☆2,617 · Feb 19, 2026 · Updated last month
- Instruction Tuning with GPT-4 ☆4,338 · Jun 11, 2023 · Updated 2 years ago
- Tools for merging pretrained large language models. ☆6,867 · Mar 15, 2026 · Updated last week
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ☆22,046 · Jan 23, 2026 · Updated last month
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆41,869 · Updated this week
- A modular RL library to fine-tune language models to human preferences ☆2,382 · Mar 1, 2024 · Updated 2 years ago
- Repo for external large-scale work ☆6,542 · Apr 27, 2024 · Updated last year
- Let ChatGPT teach your own chatbot in hours with a single GPU! ☆3,164 · Mar 17, 2024 · Updated 2 years ago
- ☆2,952 · Mar 9, 2026 · Updated last week