bigscience-workshop / data_tooling
Tools for managing datasets for governance and training.
☆87 · Updated 2 weeks ago
Alternatives and similar repositories for data_tooling
Users interested in data_tooling are comparing it to the libraries listed below.
- The pipeline for the OSCAR corpus ☆176 · Updated 2 months ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset. ☆96 · Updated 2 years ago
- Pipeline for pulling and processing online language model pretraining data from the web ☆177 · Updated 2 years ago
- ☆65 · Updated 2 years ago
- Code for WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models. ☆87 · Updated last year
- ☆72 · Updated 2 years ago
- Dataset collection and preprocessing framework for NLP extreme multitask learning ☆192 · Updated 6 months ago
- BLOOM+1: Adapting the BLOOM model to support a new, unseen language ☆74 · Updated last year
- ☆102 · Updated 3 years ago
- Our open-source implementation of MiniLMv2 (https://aclanthology.org/2021.findings-acl.188) ☆61 · Updated 2 years ago
- ☆94 · Updated 3 years ago
- ☆132 · Updated 2 weeks ago
- A framework for few-shot evaluation of autoregressive language models. ☆106 · Updated 2 years ago
- Pretraining Efficiently on S2ORC! ☆179 · Updated last year
- Open-source library for few-shot NLP ☆78 · Updated 2 years ago
- ☆78 · Updated 2 years ago
- An instruction-based benchmark for text improvements. ☆142 · Updated 3 years ago
- Apps built using Inspired Cognition's Critique. ☆57 · Updated 2 years ago
- A Multilingual Replicable Instruction-Following Model ☆96 · Updated 2 years ago
- [EMNLP'23] Official code for "FOCUS: Effective Embedding Initialization for Monolingual Specialization of Multilingual Models" ☆36 · Updated 8 months ago
- ☆184 · Updated 2 years ago
- This repository contains the code for "Generating Datasets with Pretrained Language Models". ☆189 · Updated 4 years ago
- Glot500: Scaling Multilingual Corpora and Language Models to 500 Languages (ACL 2023) ☆107 · Updated last year
- Load What You Need: Smaller Multilingual Transformers for PyTorch and TensorFlow 2.0. ☆105 · Updated 3 years ago
- XtremeDistil framework for distilling/compressing massive multilingual neural network models into tiny, efficient models for AI at scale ☆157 · Updated 2 years ago
- Vocabulary Trimming (VT) is a model compression technique that reduces a multilingual LM's vocabulary to a target language by deleting ir… ☆61 · Updated last year
- Multimodal document analysis ☆166 · Updated 2 months ago
- Experiments on including metadata such as URLs, timestamps, website descriptions, and HTML tags during pretraining. ☆31 · Updated 2 years ago
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP ☆58 · Updated 3 years ago
- No Parameter Left Behind: How Distillation and Model Size Affect Zero-Shot Retrieval ☆29 · Updated 3 years ago