EleutherAI / pythia
The hub for EleutherAI's work on interpretability and learning dynamics
☆2,282 · Updated 2 weeks ago
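Pythia's main draw for learning-dynamics and interpretability work is that every model in the suite is published with intermediate training checkpoints. Below is a minimal sketch of loading one such checkpoint, assuming the Hugging Face `transformers` library and the `EleutherAI/pythia-*` model names with per-step revisions (e.g. `step3000`) as published on the Hub; check the repo README for the exact sizes and revision names available.

```python
# Minimal sketch: load one intermediate checkpoint from the Pythia suite.
# Assumes the Hugging Face `transformers` library and the checkpoint layout
# published under EleutherAI on the Hub (model names like "pythia-70m-deduped",
# per-training-step revisions like "step3000").
from transformers import AutoTokenizer, GPTNeoXForCausalLM

MODEL = "EleutherAI/pythia-70m-deduped"  # smallest model in the suite
STEP = "step3000"                        # intermediate training checkpoint

model = GPTNeoXForCausalLM.from_pretrained(MODEL, revision=STEP)
tokenizer = AutoTokenizer.from_pretrained(MODEL, revision=STEP)

inputs = tokenizer("Hello, I am", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0]))
```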
Related projects
Alternatives and complementary repositories for pythia
- Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Flax (☆2,409, updated 3 months ago)
- Alpaca dataset from Stanford, cleaned and curated (☆1,519, updated last year)
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) (☆4,502, updated 10 months ago)
- General technology for enabling AI capabilities w/ LLMs and MLLMs (☆3,699, updated last month)
- Aligning pretrained language models with instruction data generated by themselves. (☆4,164, updated last year)
- Holistic Evaluation of Language Models (HELM), a framework to increase the transparency of language models (https://arxiv.org/abs/2211.09…) (☆1,948, updated this week)
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" (☆1,624, updated last year)
- The RedPajama-Data repository contains code for preparing large datasets for training large language models. (☆4,571, updated last month)
- 800,000 step-level correctness labels on LLM solutions to MATH problems (☆1,667, updated last year)
- Benchmarking large language models' complex reasoning ability with chain-of-thought prompting (☆2,579, updated 3 months ago)
- YaRN: Efficient Context Window Extension of Large Language Models (☆1,353, updated 7 months ago)
- [NeurIPS 2023] MeZO: Fine-Tuning Language Models with Just Forward Passes (https://arxiv.org/abs/2305.17333) (☆1,045, updated 10 months ago)
- Toolkit for creating, sharing and using natural language prompts. (☆2,700, updated last year)
- Measuring Massive Multitask Language Understanding | ICLR 2021 (☆1,216, updated last year)
- Implementation of Toolformer, Language Models That Can Use Tools, by MetaAI (☆1,970, updated 3 months ago)
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers". (☆1,941, updated 7 months ago)
- Beyond the Imitation Game collaborative benchmark for measuring and extrapolating the capabilities of language models (☆2,871, updated 4 months ago)
- A modular RL library to fine-tune language models to human preferences (☆2,213, updated 8 months ago)
- Data and tools for generating and inspecting OLMo pre-training data. (☆993, updated this week)
- Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4-bit quantization, LoRA and LLaMA-Adapter (☆5,994, updated 2 months ago)
- LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions (☆812, updated last year)
- Accessible large language models via k-bit quantization for PyTorch. (☆6,299, updated this week)
- 4-bit quantization of LLaMA using GPTQ (☆2,998, updated 4 months ago)
- LOMO: LOw-Memory Optimization (☆979, updated 4 months ago)
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm (☆4,497, updated last month)
- Fast & Simple repository for pre-training and fine-tuning T5-style models (☆970, updated 3 months ago)