sileod / tasksource
Dataset collection and preprocessing framework for NLP extreme multitask learning
☆173 · Updated 3 weeks ago
Alternatives and similar repositories for tasksource:
Users interested in tasksource are comparing it to the libraries listed below.
- A framework for few-shot evaluation of autoregressive language models. ☆102 · Updated last year
- [Data + code] ExpertQA: Expert-Curated Questions and Attributed Answers ☆124 · Updated 10 months ago
- What's In My Big Data (WIMBD) - a toolkit for analyzing large text datasets ☆204 · Updated 2 months ago
- ☆124 · Updated last week
- ☆65 · Updated last year
- This project studies the performance and robustness of language models and task-adaptation methods. ☆142 · Updated 8 months ago
- Scalable training for dense retrieval models. ☆273 · Updated last year
- Pipeline for pulling and processing online language model pretraining data from the web ☆175 · Updated last year
- ☆73 · Updated last year
- Minimal PyTorch implementation of BM25 (with sparse tensors) ☆97 · Updated 10 months ago
- [ICLR 2023] Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners ☆113 · Updated 4 months ago
- The GitHub repo for Goal Driven Discovery of Distributional Differences via Language Descriptions ☆68 · Updated last year
- Lightweight demos for finetuning LLMs. Powered by 🤗 transformers and open-source datasets. ☆66 · Updated 3 months ago
- Tk-Instruct is a Transformer model that is tuned to solve many NLP tasks by following instructions. ☆180 · Updated 2 years ago
- Code and model release for the paper "Task-aware Retrieval with Instructions" by Asai et al. ☆162 · Updated last year
- Inquisitive Parrots for Search ☆183 · Updated 11 months ago
- Code for Zero-Shot Tokenizer Transfer ☆121 · Updated 2 weeks ago
- Finetune mistral-7b-instruct for sentence embeddings ☆75 · Updated 8 months ago
- Exploring finetuning public checkpoints on filtered 8K sequences on the Pile ☆115 · Updated last year
- Manage scalable open LLM inference endpoints in Slurm clusters ☆249 · Updated 6 months ago
- ☆182 · Updated last year
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ☆93 · Updated last year
- The original implementation of Min et al., "Nonparametric Masked Language Modeling" (paper: https://arxiv.org/abs/2212.01349) ☆157 · Updated 2 years ago
- Our open-source implementation of MiniLMv2 (https://aclanthology.org/2021.findings-acl.188) ☆60 · Updated last year
- Experiments with generating open-source language model assistants ☆97 · Updated last year
- A Multilingual Replicable Instruction-Following Model ☆94 · Updated last year
- ☆116 · Updated 3 months ago
- Pretraining Efficiently on S2ORC! ☆149 · Updated 3 months ago
- ☆137 · Updated 9 months ago
- Open Instruction Generalist is an assistant trained on massive synthetic instructions to perform many millions of tasks ☆207 · Updated last year