unicamp-dl / Lite-T5-Translation
☆28 · Updated 9 months ago
Related projects
Alternatives and complementary repositories for Lite-T5-Translation
- Code for training and evaluating T5 on Portuguese data. ☆85 · Updated last year
- SemClinBr - a multi-institutional and multi-specialty semantically annotated corpus for Portuguese clinical NLP tasks ☆25 · Updated 7 months ago
- Transformer model for the Portuguese language (Brazilian Portuguese, pt_BR) ☆15 · Updated 6 months ago
- PorSimplesSent - a Portuguese corpus of aligned sentence pairs for investigating sentence readability assessment ☆10 · Updated 4 years ago
- A Natural Portuguese Language Benchmark (Napolab) for the evaluation of language models. ☆64 · Updated 2 months ago
- Portuguese translation of the GLUE benchmark and SciTail dataset ☆28 · Updated 2 years ago
- Evaluation and baseline scripts for the ASSIN shared task. ☆11 · Updated 5 years ago
- Essay-BR: a corpus of essays for the Brazilian Portuguese language ☆16 · Updated 2 years ago
- FaQuAD reading comprehension dataset and related code to reproduce experiments from Sayama et al. (BRACIS 2019). ☆8 · Updated last year
- Code for equipping pretrained language models (BART, GPT-2, XLNet) with commonsense knowledge for generating implicit knowledge statement… ☆16 · Updated 3 years ago
- Efficient Language Model Training through Cross-Lingual and Progressive Transfer Learning ☆29 · Updated last year
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP ☆58 · Updated 2 years ago
- HateBR is the first large-scale expert-annotated dataset of Brazilian Instagram comments for hate speech and offensive language detection… ☆26 · Updated 3 weeks ago
- Research code for the paper "How Good is Your Tokenizer? On the Monolingual Performance of Multilingual Language Models" ☆26 · Updated 3 years ago
- No Parameter Left Behind: How Distillation and Model Size Affect Zero-Shot Retrieval ☆27 · Updated 2 years ago
- Finetuning Stanford Alpaca (LLaMA) with Brazilian Portuguese data ☆39 · Updated last year
- INCOME: an easy repository for training and evaluation of index compression methods in dense retrieval. Includes BPR and JPQ. ☆22 · Updated last year
- RATransformers 🐭 - make your transformer (like BERT, RoBERTa, GPT-2, and T5) relation aware! ☆41 · Updated last year
- SPRINT Toolkit helps you evaluate diverse neural sparse models easily, with a single click, on any IR dataset. ☆42 · Updated last year
- A multilingual version of the MS MARCO passage ranking dataset ☆141 · Updated last year
- BLOOM+1: adapting the BLOOM model to support a new unseen language ☆70 · Updated 8 months ago
- The official repository for Efficient Long-Text Understanding Using Short-Text Models (Ivgi et al., 2022) ☆68 · Updated last year
- Implementation of MARGE, Pre-training via Paraphrasing, in PyTorch ☆75 · Updated 3 years ago
- As Good as New: How to Successfully Recycle English GPT-2 to Make Models for Other Languages (ACL Findings 2021) ☆46 · Updated 3 years ago
- A question-answering dataset with a focus on subjective information ☆43 · Updated 10 months ago
- Portuguese translation of the SQuAD dataset ☆18 · Updated 4 years ago