bigscience-workshop / data-preparation
Code used for sourcing and cleaning the BigScience ROOTS corpus
☆317 · Updated 2 years ago
Alternatives and similar repositories for data-preparation
Users interested in data-preparation are comparing it to the repositories listed below.
- Inspired by Google's C4, a series of colossal clean data-cleaning scripts focused on CommonCrawl processing, including Chinese… ☆134 · Updated 2 years ago
- Train LLaMA on a single A100 80GB node using 🤗 Transformers and 🚀 DeepSpeed pipeline parallelism ☆225 · Updated 2 years ago
- Ongoing research training transformer language models at scale, including BERT & GPT-2 ☆69 · Updated 2 years ago
- [ACL'24 Outstanding] Data and code for L-Eval, a comprehensive evaluation benchmark for long-context language models ☆391 · Updated last year
- DSIR, a large-scale data selection framework for language model training ☆266 · Updated last year
- All-in-one text de-duplication (see the de-duplication sketch after this list) ☆736 · Updated 3 months ago
- PyTorch implementation of DoReMi, a method for optimizing data mixture weights in language modeling datasets ☆350 · Updated last year
- Codebase for RetroMAE and beyond. ☆268 · Updated last year
- A repository collecting the literature on long-context large language models, including methodologies and evaluation benchmarks ☆269 · Updated last year
- A Multi-Turn Dialogue Corpus based on Alpaca Instructions ☆177 · Updated 2 years ago
- Crosslingual Generalization through Multitask Finetuning ☆537 · Updated last year
- Official repository of NEFTune: Noisy Embeddings Improve Instruction Finetuning ☆406 · Updated last year
- Datasets for Instruction Tuning of Large Language Models ☆260 · Updated 2 years ago
- Implementation of paper Data Engineering for Scaling Language Models to 128K Context ☆478 · Updated last year
- Naive Bayes-based Context Extension ☆325 · Updated last year
- InsTag: A Tool for Data Analysis in LLM Supervised Fine-tuning ☆284 · Updated 2 years ago
- Collaborative Training of Large Language Models in an Efficient Way ☆416 · Updated last year
- 🐋 An unofficial implementation of Self-Alignment with Instruction Backtranslation. ☆138 · Updated 7 months ago
- ☆105 · Updated 2 years ago
- Fast Inference Solutions for BLOOM ☆565 · Updated last year
- A large-scale, fine-grained, diverse preference dataset (and models). ☆357 · Updated last year
- Collection of training data management explorations for large language models ☆336 · Updated last year
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆579 · Updated last year
- https://acl2023-retrieval-lm.github.io/ ☆157 · Updated 2 years ago
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. ☆551 · Updated last year
- Open Instruction Generalist is an assistant trained on massive synthetic instructions to perform many millions of tasks ☆210 · Updated last year
- Distributed trainer for LLMs ☆584 · Updated last year
- ☆458 · Updated last year
- Finetuning LLaMA with RLHF (Reinforcement Learning from Human Feedback) based on DeepSpeed Chat ☆116 · Updated 2 years ago
- ☆181 · Updated 2 years ago
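
The de-duplication entry in the list above is the closest in spirit to what corpus-cleaning code like data-preparation has to do, so here is a minimal sketch of the simplest flavor of the task: exact-match de-duplication via content hashing. The `normalize` and `deduplicate` helpers are hypothetical names for illustration, not APIs of any listed project; real pipelines typically add fuzzy matching (MinHash/LSH, SimHash, suffix arrays) on top of this.

```python
# Minimal exact-match de-duplication sketch (illustrative only).
import hashlib
import re
from typing import Iterable, Iterator


def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so trivially different copies collide."""
    return re.sub(r"\s+", " ", text.lower()).strip()


def deduplicate(docs: Iterable[str]) -> Iterator[str]:
    """Yield each document whose normalized MD5 digest has not been seen before."""
    seen: set[str] = set()
    for doc in docs:
        digest = hashlib.md5(normalize(doc).encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            yield doc


if __name__ == "__main__":
    corpus = [
        "The quick brown fox.",
        "the quick   brown fox.",  # duplicate after normalization
        "A completely different document.",
    ]
    print(list(deduplicate(corpus)))  # keeps 2 of the 3 documents
```

Exact hashing is cheap and preserves document order, but it only catches copies that are identical after normalization, which is why large-scale cleaning pipelines usually layer fuzzy methods such as MinHash on top.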