bigscience-workshop / data_tooling
Tools for managing datasets for governance and training.
☆87 · Updated 2 weeks ago
Alternatives and similar repositories for data_tooling
Users interested in data_tooling are comparing it to the libraries listed below.
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ☆96 · Updated 2 years ago
- Pipeline for pulling and processing online language model pretraining data from the web ☆177 · Updated 2 years ago
- ☆72 · Updated 2 years ago
- Dataset collection and preprocessing framework for NLP extreme multitask learning ☆192 · Updated 6 months ago
- The pipeline for the OSCAR corpus ☆176 · Updated 2 months ago
- ☆102 · Updated 3 years ago
- ☆65 · Updated 2 years ago
- ☆132 · Updated 2 weeks ago
- BLOOM+1: Adapting the BLOOM model to support a new unseen language ☆74 · Updated last year
- Code for WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models. ☆87 · Updated last year
- A collection of datasets for language model pretraining, including scripts for downloading, preprocessing, and sampling. ☆64 · Updated last year
- Our open-source implementation of MiniLMv2 (https://aclanthology.org/2021.findings-acl.188) ☆61 · Updated 2 years ago
- A framework for few-shot evaluation of autoregressive language models. ☆106 · Updated 2 years ago
- Glot500: Scaling Multilingual Corpora and Language Models to 500 Languages (ACL 2023) ☆107 · Updated last year
- Using a business-level retrieval system (BM25) with Python in just a few lines. ☆31 · Updated 3 years ago
- ☆89 · Updated 10 months ago
- ☆78 · Updated 2 years ago
- Pretraining Efficiently on S2ORC! ☆179 · Updated last year
- A Multilingual Replicable Instruction-Following Model ☆96 · Updated 2 years ago
- ☆94 · Updated 3 years ago
- Apps built using Inspired Cognition's Critique. ☆57 · Updated 2 years ago
- This project studies the performance and robustness of language models and task-adaptation methods. ☆155 · Updated last year
- ☆52 · Updated 2 years ago
- [Data + code] ExpertQA: Expert-Curated Questions and Attributed Answers ☆136 · Updated last year
- SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 la… ☆49 · Updated 2 years ago
- XtremeDistil framework for distilling/compressing massive multilingual neural network models into tiny and efficient models for AI at scale ☆157 · Updated 2 years ago
- Open-source library for few-shot NLP ☆78 · Updated 2 years ago
- Experiments on including metadata such as URLs, timestamps, website descriptions, and HTML tags during pretraining. ☆31 · Updated 2 years ago
- Okapi: Instruction-tuned Large Language Models in Multiple Languages with Reinforcement Learning from Human Feedback ☆96 · Updated 2 years ago
- [TMLR'23] Contrastive Search Is What You Need For Neural Text Generation ☆123 · Updated 2 years ago
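One entry above advertises BM25 retrieval "in just a few lines". The Okapi BM25 scoring function such libraries are built on can be sketched from scratch with only the standard library; this is an illustrative implementation with my own parameter defaults (k1=1.5, b=0.75), not that repository's actual API:

```python
import math
from collections import Counter

def bm25_scores(query_tokens, docs, k1=1.5, b=0.75):
    """Score each tokenized document against the query with Okapi BM25."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N  # average document length
    # Document frequency: in how many docs each term appears.
    df = Counter()
    for d in docs:
        for t in set(d):
            df[t] += 1
    scores = []
    for d in docs:
        tf = Counter(d)
        dl = len(d)
        s = 0.0
        for t in query_tokens:
            if t not in tf:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            # Term-frequency saturation (k1) and length normalization (b).
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * dl / avgdl))
        scores.append(s)
    return scores
```

For example, `bm25_scores(["cat"], [["the", "cat", "sat"], ["dogs", "bark", "loudly"]])` gives the first document a positive score and the second a score of zero, since only the first contains the query term.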
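The final entry implements contrastive search, a decoding rule that balances model confidence against a degeneration penalty: the next token is the candidate maximizing (1 − α) · p(v | context) − α · max cosine similarity to the representations of previously generated tokens. A toy, framework-free sketch of that selection step (function name, data layout, and the α default are illustrative, not the repository's API):

```python
import math

def contrastive_search_step(candidates, prev_hiddens, alpha=0.6):
    """Pick the next token: reward model confidence, penalize candidates
    whose hidden state is too similar to any previous token's hidden state.

    candidates: list of (token, probability, hidden_vector)
    prev_hiddens: hidden vectors of tokens generated so far
    """
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)

    best_token, best_score = None, float("-inf")
    for token, prob, hidden in candidates:
        # Degeneration penalty: similarity to the closest previous token.
        penalty = max(cos(hidden, h) for h in prev_hiddens)
        score = (1 - alpha) * prob - alpha * penalty
        if score > best_score:
            best_token, best_score = token, score
    return best_token
```

With α = 0.6, a high-probability candidate whose hidden state duplicates a previous token's loses to a moderately probable but dissimilar one, which is exactly the repetition-avoidance behavior the method targets.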