minggg / squad
Starter code for Stanford CS224n default final project on SQuAD 2.0
☆33 · Updated 3 years ago
Alternatives and similar repositories for squad
Users interested in squad are comparing it to the repositories listed below
- ☆93 · Updated 5 years ago
- ☆96 · Updated 4 years ago
- Implementing Skip-gram Negative Sampling with PyTorch ☆49 · Updated 6 years ago
- A neural machine translation model in PyTorch ☆119 · Updated 5 years ago
- RefNet for Question Generation ☆46 · Updated 4 years ago
- PyTorch implementation of BERT and PALs: Projected Attention Layers for Efficient Adaptation in Multi-Task Learning (https://arxiv.org/ab… ☆82 · Updated 6 years ago
- A list of open-source projects from the Microsoft Research NLP Group ☆110 · Updated 4 years ago
- Starter code for Stanford CS224n default final project on SQuAD 2.0 ☆186 · Updated 5 years ago
- ☆38 · Updated 5 years ago
- My programming assignments for CS224n: Natural Language Processing with Deep Learning (Winter 2019) ☆33 · Updated 6 years ago
- A re-implementation of "QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension" ☆120 · Updated 6 years ago
- An augmented version of SQuAD 2.0 for questions ☆33 · Updated 6 years ago
- ☆40 · Updated 5 years ago
- ☆75 · Updated 2 years ago
- We summarize summarization papers presented at major conferences (starting with ACL 2019) ☆85 · Updated 5 years ago
- ☆85 · Updated 5 years ago
- Semi-supervised Learning for Sentiment Analysis ☆54 · Updated 4 years ago
- Source code for the ACL 2019 paper "Domain Adaptive Dialog Generation via Meta Learning" by Kun Qian and Zhou Yu ☆41 · Updated 5 years ago
- A paper collection on neural text generation ☆51 · Updated 6 years ago
- ☆24 · Updated 5 years ago
- MTM ☆142 · Updated 2 years ago
- Tutorials on training and testing retrieval-based models (DrQA & DPR) ☆51 · Updated 4 years ago
- Easy Data Augmentation for Chinese NLP ☆15 · Updated 5 years ago
- A PyTorch implementation of the ACL 2019 paper "RankQA: Neural Question Answering with Answer Re-Ranking" ☆83 · Updated 3 years ago
- A PyTorch implementation of a Bi-LSTM CRF with character-level features ☆63 · Updated 6 years ago
- Code for the RecAdam paper "Recall and Learn: Fine-tuning Deep Pretrained Language Models with Less Forgetting" ☆117 · Updated 4 years ago
- Worth-reading papers and related resources on attention mechanisms, Transformers, and pretrained language models (PLMs) such as BERT ☆132 · Updated 4 years ago
- ☆63 · Updated 4 years ago
- Notes from my introduction to NLP at Fudan University ☆37 · Updated 3 years ago
- A simple yet complete implementation of the popular BERT model ☆127 · Updated 5 years ago