tshrjn / Finetune-QA
BERT, RoBERTa fine-tuning over the SQuAD dataset using pytorch-lightning ⚡️, 🤗-transformers & 🤗-nlp.
☆36 · Updated last year
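For context, here is a minimal sketch of what such a setup typically looks like with pytorch-lightning, 🤗 transformers, and 🤗 datasets (the successor to 🤗 nlp). This is illustrative only, not the repo's actual code: the `QAModule` and `make_features` names, the `bert-base-uncased` checkpoint, the hyperparameters, and the simplified answer-span mapping (truncation instead of striding long contexts) are all assumptions.

```python
# Sketch: extractive QA fine-tuning on SQuAD with pytorch-lightning + transformers + datasets.
import pytorch_lightning as pl
import torch
from torch.utils.data import DataLoader
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

MODEL_NAME = "bert-base-uncased"  # swap in "roberta-base" for RoBERTa

class QAModule(pl.LightningModule):
    def __init__(self, model_name=MODEL_NAME, lr=3e-5):
        super().__init__()
        self.model = AutoModelForQuestionAnswering.from_pretrained(model_name)
        self.lr = lr

    def training_step(self, batch, batch_idx):
        # The model computes the span-extraction loss when start/end positions are given.
        outputs = self.model(**batch)
        self.log("train_loss", outputs.loss)
        return outputs.loss

    def configure_optimizers(self):
        return torch.optim.AdamW(self.parameters(), lr=self.lr)

def make_features(examples, tokenizer, max_len=384):
    # Simplified preprocessing: tokenize question+context and map the gold answer
    # to token-level start/end positions (long contexts are truncated, not strided).
    enc = tokenizer(examples["question"], examples["context"],
                    truncation="only_second", max_length=max_len,
                    padding="max_length", return_offsets_mapping=True)
    starts, ends = [], []
    for i, offsets in enumerate(enc["offset_mapping"]):
        answer = examples["answers"][i]
        start_char = answer["answer_start"][0]
        end_char = start_char + len(answer["text"][0])
        seq_ids = enc.sequence_ids(i)
        start_tok = end_tok = 0  # falls back to 0 if the answer was truncated away
        for idx, (off, sid) in enumerate(zip(offsets, seq_ids)):
            if sid != 1:  # only look at context tokens
                continue
            if off[0] <= start_char < off[1]:
                start_tok = idx
            if off[0] < end_char <= off[1]:
                end_tok = idx
        starts.append(start_tok)
        ends.append(end_tok)
    enc["start_positions"] = starts
    enc["end_positions"] = ends
    enc.pop("offset_mapping")
    return enc

if __name__ == "__main__":
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    squad = load_dataset("squad", split="train[:1%]")  # small slice for a quick check
    features = squad.map(lambda ex: make_features(ex, tokenizer), batched=True,
                         remove_columns=squad.column_names)
    features.set_format("torch")
    loader = DataLoader(features, batch_size=8, shuffle=True)
    trainer = pl.Trainer(max_epochs=1, limit_train_batches=50)
    trainer.fit(QAModule(), loader)
```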
Alternatives and similar repositories for Finetune-QA:
Users interested in Finetune-QA are comparing it to the libraries listed below.
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP · ☆58 · Updated 2 years ago
- A package for fine-tuning Transformers with TPUs, written in TensorFlow 2.0+ · ☆37 · Updated 4 years ago
- Code and datasets of "Multilingual Extractive Reading Comprehension by Runtime Machine Translation" · ☆40 · Updated 6 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. · ☆93 · Updated 2 years ago
- Fine-tune transformers with pytorch-lightning · ☆44 · Updated 3 years ago
- Implementation of Marge, Pre-training via Paraphrasing, in Pytorch · ☆75 · Updated 4 years ago
- ☆74 · Updated 3 years ago
- EMNLP 2021 Tutorial: Multi-Domain Multilingual Question Answering · ☆38 · Updated 3 years ago
- This is the official repository for NAACL 2021, "XOR QA: Cross-lingual Open-Retrieval Question Answering". · ☆79 · Updated 3 years ago
- A Dataset for Tuning and Evaluation of Sentence Simplification Models with Multiple Rewriting Transformations · ☆55 · Updated 2 years ago
- On the Stability of Fine-tuning BERT: Misconceptions, Explanations, and Strong Baselines · ☆135 · Updated last year
- Mr. TyDi is a multi-lingual benchmark dataset built on TyDi, covering eleven typologically diverse languages. · ☆74 · Updated 3 years ago
- ☆34 · Updated 4 years ago
- LM Pretraining with PyTorch/TPU · ☆134 · Updated 5 years ago
- This is the official implementation of NeurIPS 2021 "One Question Answering Model for Many Languages with Cross-lingual Dense Passage Ret…" · ☆70 · Updated 2 years ago
- ☆38 · Updated 2 years ago
- Pytorch Implementation of EncT5: Fine-tuning T5 Encoder for Non-autoregressive Tasks · ☆63 · Updated 3 years ago
- Load What You Need: Smaller Multilingual Transformers for Pytorch and TensorFlow 2.0. · ☆102 · Updated 2 years ago
- ARCHIVED. Please use https://docs.adapterhub.ml/huggingface_hub.html || A central repository collecting pre-trained adapter modules · ☆68 · Updated 9 months ago
- An official implementation of the "BPE-Dropout: Simple and Effective Subword Regularization" algorithm. · ☆50 · Updated 4 years ago
- Research code for the paper "How Good is Your Tokenizer? On the Monolingual Performance of Multilingual Language Models" · ☆26 · Updated 3 years ago
- [NAACL 2021] Designing a Minimal Retrieve-and-Read System for Open-Domain Question Answering · ☆36 · Updated 3 years ago
- This repository hosts my experiments for the project I did with OffNote Labs. · ☆10 · Updated 3 years ago
- [EMNLP 2021] LM-Critic: Language Models for Unsupervised Grammatical Error Correction · ☆119 · Updated 3 years ago
- ☆77 · Updated 10 months ago
- A long-input version of the BART model based on the Longformer model · ☆23 · Updated last year
- The source code of "Language Models are Few-shot Multilingual Learners" (MRL @ EMNLP 2021) · ☆52 · Updated 2 years ago
- ☆31 · Updated 2 years ago
- The implementation of "Neural Machine Translation without Embeddings", NAACL 2021 · ☆33 · Updated 3 years ago
- Code to support the paper "Question and Answer Test-Train Overlap in Open-Domain Question Answering Datasets" · ☆66 · Updated 3 years ago