EleutherAI / the-pile
☆1,636 · Updated 2 years ago
Alternatives and similar repositories for the-pile
Users who are interested in the-pile are comparing it to the libraries listed below.
- Central place for the engineering/scaling WG: documentation, SLURM scripts and logs, compute environment and data. ☆1,008 · Updated last year
- Code for "Learning to summarize from human feedback" ☆1,059 · Updated 2 years ago
- ☆1,560 · Updated last week
- Open clone of OpenAI's unreleased WebText dataset scraper. This version uses pushshift.io files instead of the API for speed. ☆752 · Updated 3 years ago
- ☆1,257 · Updated last year
- Expanding natural instructions ☆1,030 · Updated 2 years ago
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways ☆828 · Updated 3 years ago
- Tools to download and clean up Common Crawl data ☆1,038 · Updated 2 years ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,433 · Updated last year
- Implementation of RETRO, DeepMind's retrieval-based attention network, in PyTorch ☆879 · Updated 2 years ago
- The hub for EleutherAI's work on interpretability and learning dynamics ☆2,731 · Updated 2 months ago
- Fast Inference Solutions for BLOOM ☆566 · Updated last year
- Toolkit for creating, sharing and using natural language prompts. ☆2,997 · Updated 2 years ago
- ☆2,948 · Updated 3 weeks ago
- Beyond the Imitation Game collaborative benchmark for measuring and extrapolating the capabilities of language models ☆3,199 · Updated last year
- Crosslingual Generalization through Multitask Finetuning ☆537 · Updated last year
- Original implementation of Prompt Tuning from Lester et al., 2021 ☆698 · Updated 11 months ago
- Cramming the training of a (BERT-type) language model into limited compute. ☆1,361 · Updated last year
- Guide: Finetune GPT2-XL (1.5 billion parameters) and finetune GPT-NEO (2.7B) on a single GPU with Hugging Face Transformers using DeepSpe… ☆434 · Updated 2 years ago
- Code for the paper Fine-Tuning Language Models from Human Preferences ☆1,377 · Updated 2 years ago
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ☆4,742 · Updated 2 years ago
- Alpaca dataset from Stanford, cleaned and curated ☆1,581 · Updated 2 years ago
- Mistral: A strong, northwesterly wind: Framework for transparent and accessible large-scale language model training, built with Hugging F… ☆577 · Updated 2 years ago
- Reproduce results and replicate training of T0 (Multitask Prompted Training Enables Zero-Shot Task Generalization) ☆465 · Updated 3 years ago
- ☆1,257 · Updated 3 years ago
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" ☆1,814 · Updated 7 months ago
- Large language models (LLMs) made easy, EasyLM is a one stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Fl… ☆2,514 · Updated last year
- Open-source pre-training implementation of Google's LaMDA in PyTorch. Adding RLHF similar to ChatGPT. ☆470 · Updated last year
- Parallelformers: An Efficient Model Parallelization Toolkit for Deployment ☆791 · Updated 2 years ago
- Crawl BookCorpus ☆852 · Updated 2 years ago
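Many of the entries above are large text corpora in the same family as the-pile. As a quick way to inspect one without downloading the whole corpus, here is a minimal sketch using Hugging Face `datasets` in streaming mode; the dataset ID `monology/pile-uncopyrighted` and the `text` field are assumptions about a community mirror, not part of any repository listed here.

```python
# Minimal sketch: stream a Pile-style corpus instead of downloading it.
# ASSUMPTION: "monology/pile-uncopyrighted" is a community mirror on the
# Hugging Face Hub exposing a "text" field; substitute the variant you use.
from datasets import load_dataset

pile = load_dataset(
    "monology/pile-uncopyrighted",
    split="train",
    streaming=True,  # iterate lazily; no full download of the ~800 GB corpus
)

# Peek at the first few documents.
for i, example in enumerate(pile):
    print(example["text"][:200].replace("\n", " "))
    if i >= 2:
        break
```

Streaming keeps exploration cheap; for actual training runs you would shard and cache the data with whichever pipeline the repositories above provide.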