EleutherAI / the-pile
☆1,600 · Updated 2 years ago
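For a quick look at what the dataset contains, here is a minimal sketch of streaming records with the Hugging Face `datasets` library. The hub ID `"EleutherAI/pile"` and the `"text"`/`"meta"` field names are assumptions; hosting of the raw shards has changed over time, so check the hub for a current mirror.

```python
# Minimal sketch: stream a few records from The Pile via Hugging Face datasets.
# Assumption: the dataset is available on the hub as "EleutherAI/pile";
# verify the ID before relying on it.
from datasets import load_dataset

pile = load_dataset("EleutherAI/pile", split="train", streaming=True)
for i, record in enumerate(pile):
    # Each record holds raw text plus a "meta" field naming its source subset.
    print(record["meta"], record["text"][:120])
    if i == 2:
        break
```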
Alternatives and similar repositories for the-pile
Users interested in the-pile are comparing it to the libraries listed below.
- Central place for the engineering/scaling WG: documentation, SLURM scripts and logs, compute environment and data. ☆1,006 · Updated last year
- ☆1,537 · Updated 3 weeks ago
- Beyond the Imitation Game collaborative benchmark for measuring and extrapolating the capabilities of language models ☆3,113 · Updated last year
- Implementation of RETRO, DeepMind's retrieval-based attention net, in PyTorch ☆871 · Updated last year
- ☆2,880 · Updated last week
- Code for "Learning to summarize from human feedback" ☆1,041 · Updated 2 years ago
- Expanding natural instructions ☆1,018 · Updated last year
- Tools to download and clean up Common Crawl data ☆1,026 · Updated 2 years ago
- ☆1,240 · Updated last year
- The hub for EleutherAI's work on interpretability and learning dynamics ☆2,614 · Updated 3 months ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,415 · Updated last year
- Open clone of OpenAI's unreleased WebText dataset scraper. This version uses pushshift.io files instead of the API for speed. ☆736 · Updated 2 years ago
- Crosslingual Generalization through Multitask Finetuning ☆535 · Updated 11 months ago
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways ☆824 · Updated 2 years ago
- Fast Inference Solutions for BLOOM ☆565 · Updated 11 months ago
- Cramming the training of a (BERT-type) language model into limited compute. ☆1,348 · Updated last year
- Crawl BookCorpus ☆844 · Updated 2 years ago
- Toolkit for creating, sharing and using natural language prompts. ☆2,927 · Updated last year
- Guide: Finetune GPT2-XL (1.5 Billion Parameters) and finetune GPT-NEO (2.7B) on a single GPU with Huggingface Transformers using DeepSpeed ☆437 · Updated 2 years ago
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ☆4,707 · Updated last year
- Original Implementation of Prompt Tuning from Lester et al., 2021 ☆694 · Updated 6 months ago
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" (see the loading sketch after this list) ☆1,782 · Updated 2 months ago
- Open-source pre-training implementation of Google's LaMDA in PyTorch. Adding RLHF similar to ChatGPT. ☆472 · Updated last year
- The implementation of DeBERTa ☆2,146 · Updated last year
- Large-scale pretrained models for goal-directed dialog ☆879 · Updated last year
- Code for the paper Fine-Tuning Language Models from Human Preferences ☆1,361 · Updated 2 years ago
- Reproduce results and replicate training of T0 (Multitask Prompted Training Enables Zero-Shot Task Generalization) ☆462 · Updated 2 years ago
- A modular RL library to fine-tune language models to human preferences ☆2,347 · Updated last year
- Large language models (LLMs) made easy; EasyLM is a one-stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Flax ☆2,496 · Updated last year
- Mistral: A strong, northwesterly wind: Framework for transparent and accessible large-scale language model training, built with Hugging Face Transformers ☆576 · Updated last year
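As a concrete example from the list above, the Anthropic human-preference data can be pulled with the same `datasets` API. This is a minimal sketch, assuming the hub ID `"Anthropic/hh-rlhf"` and the `"chosen"`/`"rejected"` field names from the public release; verify both against the repository.

```python
# Minimal sketch: load Anthropic's human-preference pairs for RLHF reward
# modeling. Assumption: the data is published on the hub as "Anthropic/hh-rlhf",
# with one preferred ("chosen") and one dispreferred ("rejected") dialogue
# per record.
from datasets import load_dataset

hh = load_dataset("Anthropic/hh-rlhf", split="train")
pair = hh[0]
print("CHOSEN:\n", pair["chosen"][:200])
print("REJECTED:\n", pair["rejected"][:200])
```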