bigscience-workshop / data_tooling
Tools for managing datasets for governance and training.
☆85 · Updated 3 months ago
Alternatives and similar repositories for data_tooling
Users interested in data_tooling are comparing it to the libraries listed below.
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset. ☆93 · Updated 2 years ago
- ☆72 · Updated last year
- Pipeline for pulling and processing online language model pretraining data from the web ☆177 · Updated last year
- ☆97 · Updated 2 years ago
- The pipeline for the OSCAR corpus ☆167 · Updated last year
- Code for WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models. ☆80 · Updated 8 months ago
- A Multilingual Replicable Instruction-Following Model ☆93 · Updated last year
- ☆65 · Updated last year
- Open-source library for few-shot NLP ☆78 · Updated last year
- A framework for few-shot evaluation of autoregressive language models. ☆103 · Updated 2 years ago
- BLOOM+1: Adapting the BLOOM model to support a new unseen language ☆71 · Updated last year
- Our open-source implementation of MiniLMv2 (https://aclanthology.org/2021.findings-acl.188) ☆61 · Updated last year
- Experiments with generating open-source language model assistants ☆97 · Updated 2 years ago
- ☆38 · Updated last year
- ☆97 · Updated 2 years ago
- Apps built using Inspired Cognition's Critique. ☆58 · Updated 2 years ago
- Dataset collection and preprocessing framework for NLP extreme multi-task learning ☆180 · Updated 4 months ago
- Dense hybrid representations for text retrieval ☆62 · Updated 2 years ago
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP ☆58 · Updated 2 years ago
- MAFAND-MT ☆55 · Updated 10 months ago
- A collection of datasets for language model pretraining, including scripts for downloading, preprocessing, and sampling. ☆59 · Updated 9 months ago
- Lightweight demos for fine-tuning LLMs. Powered by 🤗 Transformers and open-source datasets. ☆76 · Updated 6 months ago
- Pretraining Efficiently on S2ORC! ☆163 · Updated 6 months ago
- Efficient Language Model Training through Cross-Lingual and Progressive Transfer Learning ☆30 · Updated 2 years ago
- Okapi: Instruction-tuned Large Language Models in Multiple Languages with Reinforcement Learning from Human Feedback ☆95 · Updated last year
- ☆27 · Updated 2 months ago
- SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 la… ☆48 · Updated last year
- This repository contains the code for "Generating Datasets with Pretrained Language Models". ☆188 · Updated 3 years ago
- No Parameter Left Behind: How Distillation and Model Size Affect Zero-Shot Retrieval ☆29 · Updated 2 years ago
- ☆90 · Updated 5 months ago