tshrjn / Finetune-QA
BERT, RoBERTa fine-tuning over SQuAD Dataset using pytorch-lightning⚡️, 🤗-transformers & 🤗-nlp.
☆36 · Updated 2 years ago
Alternatives and similar repositories for Finetune-QA
Users interested in Finetune-QA are comparing it to the libraries listed below
- Fine-tune transformers with pytorch-lightning ☆44 · Updated 3 years ago
- Load What You Need: Smaller Multilingual Transformers for Pytorch and TensorFlow 2.0. ☆105 · Updated 3 years ago
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP ☆58 · Updated 3 years ago
- On the Stability of Fine-tuning BERT: Misconceptions, Explanations, and Strong Baselines ☆137 · Updated 2 years ago
- As good as new. How to successfully recycle English GPT-2 to make models for other languages (ACL Findings 2021) ☆48 · Updated 4 years ago
- Research code for the paper "How Good is Your Tokenizer? On the Monolingual Performance of Multilingual Language Models" ☆27 · Updated 3 years ago
- [EMNLP 2021] LM-Critic: Language Models for Unsupervised Grammatical Error Correction ☆120 · Updated 3 years ago
- A BART version of an open-domain QA model in a closed-book setup ☆119 · Updated 5 years ago
- This repository hosts my experiments for the project I did with OffNote Labs. ☆10 · Updated 4 years ago
- Code and models used in "MUSS: Multilingual Unsupervised Sentence Simplification by Mining Paraphrases". ☆99 · Updated 2 years ago
- Pytorch Implementation of EncT5: Fine-tuning T5 Encoder for Non-autoregressive Tasks ☆63 · Updated 3 years ago
- ARCHIVED. Please use https://docs.adapterhub.ml/huggingface_hub.html || A central repository collecting pre-trained adapter modules ☆68 · Updated last year
- Code to reproduce the experiments from the paper. ☆101 · Updated last year
- Code for the Shortformer model, from the ACL 2021 paper by Ofir Press, Noah A. Smith and Mike Lewis. ☆147 · Updated 4 years ago
- ☆94 · Updated last year
- ☆57 · Updated 3 years ago
- Multilingual abstractive summarization dataset extracted from WikiHow. ☆95 · Updated 6 months ago
- ☆30 · Updated 4 years ago
- Training T5 to perform numerical reasoning. ☆24 · Updated 4 years ago
- Data & Code for ACCENTOR: "Adding Chit-Chat to Enhance Task-Oriented Dialogues" (NAACL 2021) ☆72 · Updated 3 years ago
- This is the official implementation of NeurIPS 2021 "One Question Answering Model for Many Languages with Cross-lingual Dense Passage Ret… ☆71 · Updated 3 years ago
- Cross-lingual GLUE ☆49 · Updated 2 years ago
- An official implementation of "BPE-Dropout: Simple and Effective Subword Regularization" algorithm. ☆52 · Updated 4 years ago
- This is the official repository for NAACL 2021, "XOR QA: Cross-lingual Open-Retrieval Question Answering". ☆80 · Updated 4 years ago
- ☆75 · Updated 4 years ago
- A Dataset for Tuning and Evaluation of Sentence Simplification Models with Multiple Rewriting Transformations ☆56 · Updated 3 years ago
- Build a dialog dataset from online books in many languages ☆76 · Updated 2 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ☆95 · Updated 2 years ago
- Long-context pretrained encoder-decoder models ☆96 · Updated 2 years ago
- Resources for the "CTRLsum: Towards Generic Controllable Text Summarization" paper ☆147 · Updated 4 months ago