PolyAI-LDN / task-specific-datasets
A collection of task-specific NLU datasets
☆149 · Updated 2 years ago
Alternatives and similar repositories for task-specific-datasets:
Users interested in task-specific-datasets are comparing it to the libraries listed below.
- Repository that accompanies "An Evaluation Dataset for Intent Classification and Out-of-Scope Prediction" (EMNLP 2019) ☆205 · Updated 3 years ago
- Corpora for evaluating NLU services/platforms such as Dialogflow, LUIS, Watson, Rasa, etc. ☆110 · Updated 3 years ago
- DialoGLUE: A Natural Language Understanding Benchmark for Task-Oriented Dialogue ☆282 · Updated last year
- DialogSum: A Real-life Scenario Dialogue Summarization Dataset - Findings of ACL 2021 ☆176 · Updated 3 months ago
- This repository contains datasets and code for the paper "HINT3: Raising the bar for Intent Detection in the Wild" accepted at EMNLP-2020… ☆33 · Updated 4 years ago
- A repository for our AAAI-2020 Cross-lingual-NER paper. Code will be updated shortly. ☆47 · Updated 2 years ago
- Few-Shot-Intent-Detection includes popular challenging intent detection datasets with/without OOS queries and state-of-the-art baselines… ☆138 · Updated last year
- Coreference resolution with different higher-order inference methods; implemented in PyTorch. ☆36 · Updated last year
- (yet another not really) awesome topic/text segmentation list ☆108 · Updated 6 years ago
- ☆76 · Updated 2 years ago
- CrossWeigh: Training Named Entity Tagger from Imperfect Annotations ☆177 · Updated 8 months ago
- ☆102 · Updated 3 years ago
- Pre-Trained Models for ToD-BERT ☆292 · Updated last year
- "End-to-End Abstractive Summarization for Meetings" paper - Unofficial PyTorch Implementation ☆53 · Updated 2 years ago
- Evidence-based QA system for community question answering. ☆105 · Updated 4 years ago
- Massively Multilingual Transfer for NER ☆86 · Updated 3 years ago
- Dual Encoders for State-of-the-art Natural Language Processing. ☆61 · Updated 2 years ago
- SUPERT: Unsupervised multi-document summarization evaluation & generation ☆94 · Updated 2 years ago
- Summarization Task using Bart and T5 models.