unicamp-dl / Lite-T5-Translation
☆28 · Updated last year
Alternatives and similar repositories for Lite-T5-Translation
Users interested in Lite-T5-Translation are comparing it to the libraries listed below.
- Code for training and evaluating T5 on Portuguese data. ☆91 · Updated 3 years ago
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP ☆58 · Updated 3 years ago
- Portuguese translation of the GLUE benchmark and Scitail dataset ☆32 · Updated 3 years ago
- Research code for the paper "How Good is Your Tokenizer? On the Monolingual Performance of Multilingual Language Models" ☆27 · Updated 4 years ago
- Software for transferring pre-trained English models to foreign languages ☆19 · Updated 2 years ago
- Pytorch Implementation of EncT5: Fine-tuning T5 Encoder for Non-autoregressive Tasks ☆63 · Updated 3 years ago
- As good as new. How to successfully recycle English GPT-2 to make models for other languages (ACL Findings 2021) ☆48 · Updated 4 years ago
- MFAQ: a Multilingual FAQ Dataset ☆18 · Updated 2 years ago
- ☆12 · Updated 3 years ago
- ☆27 · Updated 9 months ago
- This dataset contains human judgements about answer equivalence. The data is based on SQuAD (Stanford Question Answering Dataset), and co… ☆27 · Updated 3 years ago
- A multilingual version of MS MARCO passage ranking dataset ☆145 · Updated 2 years ago
- A question-answering dataset with a focus on subjective information ☆48 · Updated last year
- BLOOM+1: Adapting BLOOM model to support a new unseen language ☆74 · Updated last year
- Code for equipping pretrained language models (BART, GPT-2, XLNet) with commonsense knowledge for generating implicit knowledge statement… ☆15 · Updated 4 years ago
- Long-context pretrained encoder-decoder models ☆96 · Updated 3 years ago
- No Parameter Left Behind: How Distillation and Model Size Affect Zero-Shot Retrieval ☆29 · Updated 3 years ago
- Code & data for EMNLP 2020 paper "MOCHA: A Dataset for Training and Evaluating Reading Comprehension Metrics". ☆16 · Updated 3 years ago
- This is the official implementation of NeurIPS 2021 "One Question Answering Model for Many Languages with Cross-lingual Dense Passage Ret… ☆71 · Updated 3 years ago
- ☆68 · Updated 7 months ago
- ARCHIVED. Please use https://docs.adapterhub.ml/huggingface_hub.html || A central repository collecting pre-trained adapter modules ☆68 · Updated last year
- Multilingual abstractive summarization dataset extracted from WikiHow. ☆96 · Updated 8 months ago
- Pretraining scripts for BART transformer model ☆12 · Updated 2 years ago
- Evaluation and baseline scripts for the ASSIN shared task. ☆11 · Updated 6 years ago
- A Dataset for Tuning and Evaluation of Sentence Simplification Models with Multiple Rewriting Transformations ☆57 · Updated 3 years ago
- Simple Questions Generate Named Entity Recognition Datasets (EMNLP 2022) ☆76 · Updated 2 years ago
- ☆13 · Updated 3 years ago
- M2D2: A Massively Multi-domain Language Modeling Dataset (EMNLP 2022) by Machel Reid, Victor Zhong, Suchin Gururangan, Luke Zettlemoyer ☆54 · Updated 3 years ago
- ☆39 · Updated 2 years ago
- Load What You Need: Smaller Multilingual Transformers for Pytorch and TensorFlow 2.0. ☆105 · Updated 3 years ago