EleutherAI/openwebtext2
☆89, updated 2 years ago
Alternatives and similar repositories for openwebtext2
Users interested in openwebtext2 are comparing it to the libraries listed below:
- Python tools for processing the Stack Exchange data dumps into a text dataset for Language Models (☆81, updated last year)
- ☆97, updated 2 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset (☆93, updated 2 years ago)
- Techniques used to run BLOOM at inference in parallel (☆37, updated 2 years ago)
- ☆72, updated last year
- ☆77, updated last year
- ☆97, updated 2 years ago
- Open-source library for few-shot NLP (☆78, updated last year)
- Our open-source implementation of MiniLMv2 (https://aclanthology.org/2021.findings-acl.188) (☆61, updated last year)
- Experiments with generating open-source language model assistants (☆97, updated last year)
- An experimental implementation of the retrieval-enhanced language model (☆74, updated 2 years ago)
- Tools for managing datasets for governance and training (☆85, updated 2 months ago)
- Implementation of Marge, Pre-training via Paraphrasing, in PyTorch (☆75, updated 4 years ago)
- Pipeline for pulling and processing online language model pretraining data from the web (☆177, updated last year)
- The pipeline for the OSCAR corpus (☆168, updated last year)
- XtremeDistil framework for distilling/compressing massive multilingual neural network models into tiny and efficient models for AI at scale (☆154, updated last year)
- ☆38, updated 11 months ago
- BLOOM+1: Adapting the BLOOM model to support a new, unseen language (☆71, updated last year)
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP (☆58, updated 2 years ago)
- ☆75, updated 3 years ago
- A framework for few-shot evaluation of autoregressive language models (☆103, updated last year)
- Transformers at any scale (☆41, updated last year)
- M2D2: A Massively Multi-domain Language Modeling Dataset (EMNLP 2022) by Machel Reid, Victor Zhong, Suchin Gururangan, Luke Zettlemoyer (☆55, updated 2 years ago)
- Inference script for Meta's LLaMA models using a Hugging Face wrapper (☆110, updated 2 years ago)
- Tk-Instruct is a Transformer model that is tuned to solve many NLP tasks by following instructions (☆180, updated 2 years ago)
- ☆67, updated 2 years ago
- Evaluation suite for large-scale language models (☆125, updated 3 years ago)
- ☆111, updated 2 years ago
- Plug-and-play Search Interfaces with Pyserini and Hugging Face (☆31, updated last year)
- Code for the arXiv paper: "LLMs as Factual Reasoners: Insights from Existing Benchmarks and Beyond" (☆59, updated 2 months ago)