EleutherAI / the-pile
☆1,606 · Updated 2 years ago
Alternatives and similar repositories for the-pile
Users interested in the-pile are comparing it to the libraries listed below.
- Central place for the engineering/scaling WG: documentation, SLURM scripts and logs, compute environment and data. ☆1,004 · Updated last year
- ☆1,548 · Updated last month
- Expanding natural instructions. ☆1,019 · Updated last year
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback". ☆1,787 · Updated 3 months ago
- Code for "Learning to summarize from human feedback". ☆1,047 · Updated 2 years ago
- ☆1,249 · Updated last year
- Tools to download and clean up Common Crawl data. ☆1,028 · Updated 2 years ago
- Ongoing research training transformer language models at scale, including BERT & GPT-2. ☆1,420 · Updated last year
- Open clone of OpenAI's unreleased WebText dataset scraper. This version uses pushshift.io files instead of the API for speed. ☆737 · Updated 2 years ago
- ☆2,892 · Updated last week
- Implementation of RETRO, DeepMind's retrieval-based attention net, in PyTorch. ☆872 · Updated last year
- Beyond the Imitation Game: a collaborative benchmark for measuring and extrapolating the capabilities of language models. ☆3,127 · Updated last year
- Implementation of the specific Transformer architecture from PaLM (Scaling Language Modeling with Pathways). ☆823 · Updated 2 years ago
- Crosslingual Generalization through Multitask Finetuning. ☆536 · Updated last year
- The hub for EleutherAI's work on interpretability and learning dynamics. ☆2,628 · Updated 4 months ago
- Toolkit for creating, sharing, and using natural language prompts. ☆2,945 · Updated last year
- Large-scale pretrained models for goal-directed dialog. ☆883 · Updated last year
- Fast inference solutions for BLOOM. ☆565 · Updated last year
- Cramming the training of a (BERT-type) language model into limited compute. ☆1,349 · Updated last year
- 🤗 Evaluate: a library for easily evaluating machine learning models and datasets. ☆2,335 · Updated 2 weeks ago
- The implementation of DeBERTa. ☆2,154 · Updated 2 years ago
- Original implementation of Prompt Tuning from Lester et al., 2021. ☆694 · Updated 7 months ago
- Mistral (a strong, northwesterly wind): framework for transparent and accessible large-scale language model training, built with Hugging F… ☆574 · Updated last year
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF). ☆4,709 · Updated last year
- Guide: finetune GPT2-XL (1.5 billion parameters) and GPT-Neo (2.7B) on a single GPU with Hugging Face Transformers using DeepSpe… ☆436 · Updated 2 years ago
- ☆1,243 · Updated 2 years ago
- Measuring Massive Multitask Language Understanding (ICLR 2021). ☆1,503 · Updated 2 years ago
- Reproduce results and replicate training of T0 (Multitask Prompted Training Enables Zero-Shot Task Generalization).