EleutherAI / the-pile
☆1,580 Updated 2 years ago
Alternatives and similar repositories for the-pile
Users interested in the-pile are comparing it to the repositories listed below.
- ☆1,529 Updated this week
- Central place for the engineering/scaling WG: documentation, SLURM scripts and logs, compute environment and data. ☆1,001 Updated 11 months ago
- ☆1,227 Updated 11 months ago
- Expanding natural instructions ☆1,007 Updated last year
- Tools to download and clean up Common Crawl data ☆1,017 Updated 2 years ago
- ☆2,839 Updated last month
- Toolkit for creating, sharing and using natural language prompts. ☆2,892 Updated last year
- Crosslingual Generalization through Multitask Finetuning ☆538 Updated 9 months ago
- Code for "Learning to summarize from human feedback" ☆1,030 Updated last year
- Beyond the Imitation Game collaborative benchmark for measuring and extrapolating the capabilities of language models ☆3,076 Updated 11 months ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,401 Updated last year
- Implementation of RETRO, DeepMind's retrieval-based attention net, in PyTorch ☆868 Updated last year
- The hub for EleutherAI's work on interpretability and learning dynamics ☆2,560 Updated last month
- Fast Inference Solutions for BLOOM ☆564 Updated 9 months ago
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" ☆1,760 Updated 3 weeks ago
- Open clone of OpenAI's unreleased WebText dataset scraper. This version uses pushshift.io files instead of the API for speed. ☆738 Updated 2 years ago
- Original implementation of Prompt Tuning from Lester et al., 2021 ☆686 Updated 4 months ago
- Large-scale pretrained models for goal-directed dialog ☆872 Updated last year
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways ☆823 Updated 2 years ago
- Measuring Massive Multitask Language Understanding | ICLR 2021 ☆1,447 Updated 2 years ago
- ☆1,229 Updated 2 years ago
- Mistral: A strong, northwesterly wind: Framework for transparent and accessible large-scale language model training, built with Hugging F… ☆573 Updated last year
- The implementation of DeBERTa ☆2,114 Updated last year
- Guide: Finetune GPT2-XL (1.5 Billion Parameters) and finetune GPT-NEO (2.7 B) on a single GPU with Huggingface Transformers using DeepSpe… ☆437 Updated 2 years ago
- 🤗 Evaluate: A library for easily evaluating machine learning models and datasets. ☆2,254 Updated 3 weeks ago
- Crawl BookCorpus ☆836 Updated last year
- Data and tools for generating and inspecting OLMo pre-training data. ☆1,258 Updated 2 weeks ago
- A prize for finding tasks that cause large language models to show inverse scaling ☆613 Updated last year
- Cramming the training of a (BERT-type) language model into limited compute. ☆1,338 Updated last year
- Holistic Evaluation of Language Models (HELM) is an open source Python framework created by the Center for Research on Foundation Models … ☆2,319 Updated last week