xhluca / covid-qa
A collection of COVID-19 question-answer pairs and transformer baselines for evaluating QA models (Official Repository)
☆26 · Updated 3 years ago
Alternatives and similar repositories for covid-qa
Users interested in covid-qa are comparing it to the libraries listed below.
- Multitask Learning with Pretrained Transformers — ☆39 · Updated 4 years ago
- ☆59 · Updated 4 years ago
- Submission archive for the MS MARCO passage ranking leaderboard — ☆13 · Updated 2 years ago
- Code for the paper "Target-oriented Fine-tuning for Zero-Resource Named Entity Recognition" — ☆20 · Updated 3 years ago
- ☆37 · Updated last month
- Code for "Incorporating Relevance Feedback for Information-Seeking Retrieval using Few-Shot Document Re-Ranking" (https://arxiv.org/abs/2…) — ☆14 · Updated 2 years ago
- Code and pre-trained models for "ReasonBert: Pre-trained to Reason with Distant Supervision" (EMNLP 2021) — ☆29 · Updated 2 years ago
- Knowledge graph based information retrieval — ☆13 · Updated 7 years ago
- Code repo for the paper at COLING 2020 (ARGMIN 2020 workshop): "DebateSum: A large-scale argument mining and summarization dataset" — ☆55 · Updated 4 years ago
- ☆24 · Updated 5 years ago
- Code for "Relevance-guided Supervision for OpenQA with ColBERT" (TACL 2021) — ☆40 · Updated 4 years ago
- Implementation, trained models, and result data for the paper "Aspect-based Document Similarity for Research Papers" (COLING 2020) — ☆63 · Updated last year
- ☆25 · Updated 6 years ago
- ☆54 · Updated 2 years ago
- "Adapting Language Models for Zero-shot Learning by Meta-tuning on Dataset and Prompt Collections" (EMNLP 2021) — ☆52 · Updated 4 years ago
- On Generating Extended Summaries of Long Documents — ☆78 · Updated 4 years ago
- ☆42 · Updated 6 years ago
- ZS4IE: A Toolkit for Zero-Shot Information Extraction with Simple Verbalizations — ☆29 · Updated 3 years ago
- ☆68 · Updated 8 months ago
- Code for running the experiments in "Deep Subjecthood: Higher Order Grammatical Features in Multilingual BERT" — ☆17 · Updated 2 years ago
- KitanaQA: Adversarial training and data augmentation for neural question-answering models — ☆56 · Updated 2 years ago
- Code associated with the paper "Data Augmentation using Pre-trained Transformer Models" — ☆52 · Updated 2 years ago
- ☆132 · Updated 2 years ago
- In-BoXBART: Get Instructions into Biomedical Multi-task Learning — ☆14 · Updated 3 years ago
- ☆26 · Updated 6 years ago
- Training a model without a dataset for natural language inference (NLI) — ☆25 · Updated 5 years ago
- ☆34 · Updated 2 years ago
- Truly conversational search is the next logical step in the journey to generate intelligent and useful AI. To understand what this may mean… — ☆113 · Updated 2 years ago
- Language Models as Few-Shot Learner for Task-Oriented Dialogue Systems — ☆22 · Updated 4 years ago
- The SPRINT toolkit helps you evaluate diverse neural sparse models easily, with a single click, on any IR dataset — ☆47 · Updated 2 years ago