allenai / wimbd
What's In My Big Data (WIMBD) - a toolkit for analyzing large text datasets
☆218 · Updated 5 months ago
Alternatives and similar repositories for wimbd:
Users interested in wimbd are comparing it to the libraries listed below.
- Multilingual Large Language Models Evaluation Benchmark ☆123 · Updated 8 months ago
- This project studies the performance and robustness of language models and task-adaptation methods. ☆150 · Updated 11 months ago
- ☆174 · Updated 2 years ago
- A framework for few-shot evaluation of autoregressive language models. ☆103 · Updated 2 years ago
- DSIR large-scale data selection framework for language model training ☆246 · Updated last year
- [Data + code] ExpertQA: Expert-Curated Questions and Attributed Answers ☆128 · Updated last year
- Scalable training for dense retrieval models. ☆292 · Updated 2 months ago
- ☆133 · Updated 3 months ago
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆139 · Updated 6 months ago
- Dataset collection and preprocessing framework for NLP extreme multitask learning ☆180 · Updated 4 months ago
- Pretraining Efficiently on S2ORC! ☆163 · Updated 6 months ago
- ☆150 · Updated last year
- Code for Zero-Shot Tokenizer Transfer ☆127 · Updated 3 months ago
- [ICML 2024] Selecting High-Quality Data for Training Language Models ☆170 · Updated 10 months ago
- A Survey on Data Selection for Language Models ☆228 · Updated last week
- LOFT: A 1 Million+ Token Long-Context Benchmark ☆192 · Updated 2 weeks ago
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆459 · Updated last year
- Code used for sourcing and cleaning the BigScience ROOTS corpus ☆311 · Updated 2 years ago
- Code and model release for the paper "Task-aware Retrieval with Instructions" by Asai et al. ☆162 · Updated last year
- Organize the Web: Constructing Domains Enhances Pre-Training Data Curation ☆47 · Updated last week
- Benchmarking LLMs with Challenging Tasks from Real Users ☆221 · Updated 6 months ago
- Tk-Instruct is a Transformer model that is tuned to solve many NLP tasks by following instructions. ☆180 · Updated 2 years ago
- Code for "SemDeDup", a simple method for identifying and removing semantic duplicates from a dataset (data pairs which are semantically s… ☆135 · Updated last year
- All available datasets for Instruction Tuning of Large Language Models ☆250 · Updated last year
- Source code for the paper "GPTScore: Evaluate as You Desire" ☆247 · Updated 2 years ago
- Evaluating LLMs with fewer examples ☆151 · Updated last year
- GitHub repository for "RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models" ☆171 · Updated 5 months ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆255 · Updated 9 months ago
- Code and data for "Lost in the Middle: How Language Models Use Long Contexts" ☆342 · Updated last year
- ☆38 · Updated last year