EleutherAI / the-pile
☆1,550 · Updated last year
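The repository contains the code used to construct The Pile, EleutherAI's ~825 GiB English text corpus for language model training. As a quick orientation, here is a minimal sketch of streaming a Pile-style dump with the Hugging Face `datasets` library; the mirror name `monology/pile-uncopyrighted` and the `text`/`meta` record fields are assumptions based on the published Pile format, not something this repo ships:

```python
# Minimal sketch: stream a Pile-style dataset instead of downloading all of it.
# Assumptions (not from this repo): the `datasets` library is installed and the
# mirror "monology/pile-uncopyrighted" is reachable; records follow the Pile's
# jsonlines layout of a raw "text" field plus a "meta" source tag.
from datasets import load_dataset

pile = load_dataset("monology/pile-uncopyrighted", split="train", streaming=True)

for i, example in enumerate(pile):
    print(example["meta"], example["text"][:80])
    if i >= 2:  # peek at the first three records only
        break
```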
Alternatives and similar repositories for the-pile:
Users interested in the-pile are comparing it to the libraries listed below.
- Central place for the engineering/scaling WG: documentation, SLURM scripts and logs, compute environment and data. ☆990 · Updated 8 months ago
- Beyond the Imitation Game collaborative benchmark for measuring and extrapolating the capabilities of language models ☆3,011 · Updated 8 months ago
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" ☆1,713 · Updated last year
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,382 · Updated last year
- ☆1,510 · Updated 2 weeks ago
- The hub for EleutherAI's work on interpretability and learning dynamics ☆2,441 · Updated 3 weeks ago
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways ☆820 · Updated 2 years ago
- Fast Inference Solutions for BLOOM ☆561 · Updated 5 months ago
- ☆1,200 · Updated 8 months ago
- ☆2,781 · Updated last week
- Implementation of RETRO, DeepMind's retrieval-based attention net, in PyTorch ☆861 · Updated last year
- Expanding natural instructions ☆987 · Updated last year
- Measuring Massive Multitask Language Understanding | ICLR 2021 ☆1,373 · Updated last year
- Code for "Learning to summarize from human feedback" ☆1,018 · Updated last year
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers" (see the quantization sketch after this list). ☆2,073 · Updated last year
- Tools to download and clean up Common Crawl data ☆996 · Updated last year
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ☆4,615 · Updated last year
- A modular RL library to fine-tune language models to human preferences ☆2,296 · Updated last year
- Code for the paper "Fine-Tuning Language Models from Human Preferences" ☆1,315 · Updated last year
- Cramming the training of a (BERT-type) language model into limited compute. ☆1,326 · Updated 9 months ago
- A collection of open-source datasets to train instruction-following LLMs (ChatGPT, LLaMA, Alpaca) ☆1,113 · Updated last year
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆1,998 · Updated 2 weeks ago
- Reproduce results and replicate training of T0 (Multitask Prompted Training Enables Zero-Shot Task Generalization) ☆464 · Updated 2 years ago
- Large language models (LLMs) made easy; EasyLM is a one-stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Fl… ☆2,466 · Updated 7 months ago
- Crosslingual Generalization through Multitask Finetuning ☆530 · Updated 6 months ago
- Guide: Finetune GPT2-XL (1.5 Billion Parameters) and finetune GPT-NEO (2.7 B) on a single GPU with Huggingface Transformers using DeepSpe… ☆437 · Updated last year
- Toolkit for creating, sharing and using natural language prompts. ☆2,814 · Updated last year
- ☆1,243 · Updated last year
- Open clone of OpenAI's unreleased WebText dataset scraper. This version uses pushshift.io files instead of the API for speed. ☆729 · Updated 2 years ago
- Distributed trainer for LLMs ☆569 · Updated 10 months ago
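As a companion to the GPTQ entry above, here is a minimal sketch of 4-bit post-training quantization. Note that it goes through the `transformers`/`optimum` integration rather than the paper's own repository, assumes `auto-gptq` and `optimum` are installed, and uses `facebook/opt-125m` purely as a small placeholder model:

```python
# Minimal sketch: 4-bit GPTQ post-training quantization via the
# transformers/optimum integration (not the paper's original repo).
# Assumes `optimum` and `auto-gptq` are installed; "facebook/opt-125m"
# is only a small placeholder model.
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "facebook/opt-125m"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# GPTQ calibrates on a small text sample (here the built-in "c4" option)
# while rounding weights down to 4 bits, hence the tokenizer and dataset.
config = GPTQConfig(bits=4, dataset="c4", tokenizer=tokenizer)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=config, device_map="auto"
)
model.save_pretrained("opt-125m-gptq")
```

The saved directory can then be reloaded with `from_pretrained` like any other checkpoint, with the quantized weights picked up automatically.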