huggingface / olm-datasets
Pipeline for pulling and processing online language model pretraining data from the web
☆177 · Updated 2 years ago
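The OLM project publishes its processed web snapshots as datasets on the Hugging Face Hub, so the quickest way to inspect the pipeline's output is the 🤗 `datasets` library. Below is a minimal sketch; the snapshot name `olm/olm-wikipedia-20221101` and the `"text"` column are assumed examples, not verified names from this repository.

```python
# Minimal sketch: peek at an OLM snapshot without downloading it in full.
# NOTE: the dataset name and the "text" column are assumed examples;
# substitute the actual OLM snapshot you want from the Hugging Face Hub.
from datasets import load_dataset

ds = load_dataset(
    "olm/olm-wikipedia-20221101",  # hypothetical snapshot name
    split="train",
    streaming=True,                # iterate lazily instead of downloading everything
)

for i, example in enumerate(ds):
    print(example["text"][:200])   # first 200 characters of each document
    if i == 2:                     # stop after a few examples
        break
```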
Alternatives and similar repositories for olm-datasets
Users interested in olm-datasets are comparing it to the libraries listed below.
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset.☆96 · Updated 2 years ago
- ☆72 · Updated 2 years ago
- Tools for managing datasets for governance and training.☆86 · Updated last week
- ☆101 · Updated 2 years ago
- Experiments with generating open-source language model assistants☆97 · Updated 2 years ago
- Dataset collection and preprocessing framework for NLP extreme multitask learning☆188 · Updated 3 months ago
- The pipeline for the OSCAR corpus☆173 · Updated last year
- ☆65 · Updated 2 years ago
- XtremeDistil framework for distilling/compressing massive multilingual neural network models to tiny and efficient models for AI at scale☆156 · Updated last year
- ☆184 · Updated 2 years ago
- Open Instruction Generalist is an assistant trained on massive synthetic instructions to perform many millions of tasks☆209 · Updated last year
- An instruction-based benchmark for text improvements.☆143 · Updated 2 years ago
- ☆79 · Updated last year
- A framework for few-shot evaluation of autoregressive language models.☆104 · Updated 2 years ago
- This repository contains the code for "Generating Datasets with Pretrained Language Models".☆189 · Updated 4 years ago
- Our open source implementation of MiniLMv2 (https://aclanthology.org/2021.findings-acl.188)☆61 · Updated 2 years ago
- A Multilingual Dataset for Parsing Realistic Task-Oriented Dialogs☆115 · Updated 2 years ago
- BLOOM+1: Adapting BLOOM model to support a new unseen language☆74 · Updated last year
- The original implementation of Min et al. "Nonparametric Masked Language Modeling" (paper: https://arxiv.org/abs/2212.01349)☆158 · Updated 2 years ago
- A Multilingual Replicable Instruction-Following Model☆95 · Updated 2 years ago
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile☆115 · Updated 2 years ago
- Reproduce results and replicate training of T0 (Multitask Prompted Training Enables Zero-Shot Task Generalization)☆462 · Updated 2 years ago
- ☆67 · Updated 3 years ago
- Used for adaptive human-in-the-loop evaluation of language and embedding models.☆307 · Updated 2 years ago
- Code for WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.☆85 · Updated last year
- This project studies the performance and robustness of language models and task-adaptation methods.☆154 · Updated last year
- ☆66 · Updated 3 years ago
- Official code and model checkpoints for our EMNLP 2022 paper "RankGen - Improving Text Generation with Large Ranking Models" (https://arx…☆138 · Updated 2 years ago
- [ICLR 2023] Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners☆116 · Updated 3 months ago
- ARCHIVED. Please use https://docs.adapterhub.ml/huggingface_hub.html || 🔌 A central repository collecting pre-trained adapter modules☆67 · Updated last year