adapter-hub / Hub
ARCHIVED. Please use https://docs.adapterhub.ml/huggingface_hub.html || A central repository collecting pre-trained adapter modules
☆68 · Updated last year
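The linked docs describe loading AdapterHub adapters from the Hugging Face Hub instead of this archived repository. A minimal sketch of that workflow with the `adapters` library (the maintained successor to `adapter-transformers`) might look like this; the model and adapter IDs are illustrative assumptions, not taken from this page.

```python
# Minimal sketch: pulling a pre-trained adapter from the Hugging Face Hub,
# the workflow the archived AdapterHub repository now points to.
# Assumes `pip install adapters`; the model/adapter IDs are illustrative.
from adapters import AutoAdapterModel

model = AutoAdapterModel.from_pretrained("bert-base-uncased")

# Download the adapter weights from the Hub and register them on the model.
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-squad")

# Activate the adapter so it is applied in the forward pass.
model.set_active_adapters(adapter_name)
```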
Alternatives and similar repositories for Hub
Users interested in Hub are comparing it to the libraries listed below.
- ☆97 · Updated 2 years ago
- Code & data for EMNLP 2020 paper "MOCHA: A Dataset for Training and Evaluating Reading Comprehension Metrics". ☆16 · Updated 3 years ago
- ☆75 · Updated 4 years ago
- Cross-lingual GLUE ☆49 · Updated 2 years ago
- A Dataset for Tuning and Evaluation of Sentence Simplification Models with Multiple Rewriting Transformations ☆56 · Updated 2 years ago
- This repository contains the code for "Generating Datasets with Pretrained Language Models". ☆188 · Updated 3 years ago
- Pipeline for pulling and processing online language model pretraining data from the web ☆178 · Updated last year
- ☆100 · Updated 2 years ago
- ☆72 · Updated 2 years ago
- ☆68 · Updated 2 months ago
- XCOPA: A Multilingual Dataset for Causal Commonsense Reasoning ☆103 · Updated 4 years ago
- Official repository with code and data accompanying the NAACL 2021 paper "Hurdles to Progress in Long-form Question Answering" (https://a…) ☆46 · Updated 2 years ago
- This repository hosts my experiments for the project I did with OffNote Labs. ☆10 · Updated 4 years ago
- Automatic metrics for GEM tasks ☆66 · Updated 2 years ago
- Implementation of Marge, Pre-training via Paraphrasing, in Pytorch ☆76 · Updated 4 years ago
- EMNLP 2021 - CTC: A Unified Framework for Evaluating Natural Language Generation ☆97 · Updated 2 years ago
- A BART version of an open-domain QA model in a closed-book setup ☆119 · Updated 4 years ago
- Official code and model checkpoints for our EMNLP 2022 paper "RankGen - Improving Text Generation with Large Ranking Models" (https://arx…) ☆136 · Updated last year
- M2D2: A Massively Multi-domain Language Modeling Dataset (EMNLP 2022) by Machel Reid, Victor Zhong, Suchin Gururangan, Luke Zettlemoyer ☆54 · Updated 2 years ago
- A library for parameter-efficient and composable transfer learning for NLP with sparse fine-tunings. ☆74 · Updated 11 months ago
- Few-shot NLP benchmark for unified, rigorous eval ☆91 · Updated 3 years ago
- 🦮 Code and pretrained models for Findings of ACL 2022 paper "LaPraDoR: Unsupervised Pretrained Dense Retriever for Zero-Shot Text Retrie…" ☆49 · Updated 3 years ago
- Code and data to support the paper "PAQ: 65 Million Probably-Asked Questions and What You Can Do With Them" ☆204 · Updated 3 years ago
- Query-focused summarization data ☆42 · Updated 2 years ago
- 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX. ☆82 · Updated 3 years ago
- Dataset from the paper "Mintaka: A Complex, Natural, and Multilingual Dataset for End-to-End Question Answering" (COLING 2022) ☆114 · Updated 2 years ago
- XtremeDistil framework for distilling/compressing massive multilingual neural network models to tiny and efficient models for AI at scale ☆155 · Updated last year
- ☆182 · Updated 2 years ago
- Official implementation of the paper "IteraTeR: Understanding Iterative Revision from Human-Written Text" (ACL 2022) ☆78 · Updated last year
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset. ☆93 · Updated 2 years ago