facebookresearch / SentAugment
SentAugment is a data augmentation technique for NLP that retrieves similar sentences from a large bank of sentences. It can be used in combination with self-training and knowledge distillation, or for retrieving paraphrases.
☆362 · Updated 3 years ago
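A minimal sketch of the retrieval idea described above, assuming a generic sentence encoder: embed both the task sentences and a large unannotated bank, then take each sentence's nearest neighbors by cosine similarity as augmentation candidates. The `embed` function below is a hypothetical placeholder, not SentAugment's own encoder, and the brute-force similarity matrix stands in for what would, at the repository's scale, be an approximate nearest-neighbor index (e.g. FAISS).

```python
# Sketch of retrieval-based sentence augmentation (not SentAugment's actual code).
import numpy as np

def embed(sentences):
    """Placeholder sentence encoder: returns random unit vectors.
    Swap in a real encoder that maps list[str] -> (n, dim) float array."""
    rng = np.random.default_rng(0)
    vecs = rng.normal(size=(len(sentences), 128)).astype(np.float32)
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

def retrieve_similar(queries, bank, k=3):
    """Return the top-k bank sentences for each query by cosine similarity."""
    q_emb = embed(queries)      # (n_queries, dim), L2-normalized
    b_emb = embed(bank)         # (n_bank, dim), L2-normalized
    scores = q_emb @ b_emb.T    # cosine similarity, since vectors are unit-norm
    top_k = np.argsort(-scores, axis=1)[:, :k]
    return [[bank[j] for j in row] for row in top_k]

# Usage: pull augmentation candidates for a small labeled set from a large bank.
bank = ["The movie was fantastic.", "Terrible service at the restaurant.",
        "An enjoyable film overall.", "The food arrived cold."]
labeled = ["I loved this film."]
print(retrieve_similar(labeled, bank, k=2))
```

The retrieved neighbors can then be pseudo-labeled by a teacher model for self-training or knowledge distillation, which is the use case the repository description points to.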
Alternatives and similar repositories for SentAugment:
Users interested in SentAugment are comparing it to the libraries listed below.
- The corresponding code from our paper "DeCLUTR: Deep Contrastive Learning for Unsupervised Textual Representations". Do not hesitate to o… ☆380 · Updated 2 years ago
- [NAACL 2021] This is the code for our paper `Fine-Tuning Pre-trained Language Model with Weak Supervision: A Contrastive-Regularized Self… ☆202 · Updated 2 years ago
- On the Stability of Fine-tuning BERT: Misconceptions, Explanations, and Strong Baselines ☆136 · Updated last year
- Repository for the paper "Optimal Subarchitecture Extraction for BERT" ☆472 · Updated 2 years ago
- Pretrain and finetune ELECTRA with fastai and huggingface. (Results of the paper replicated!) ☆328 · Updated last year
- New dataset ☆304 · Updated 3 years ago
- Fork of huggingface/pytorch-pretrained-BERT for BERT on STILTs ☆107 · Updated 2 years ago
- Interpretable Evaluation for (Almost) All NLP Tasks ☆195 · Updated 2 years ago
- ☆345 · Updated 3 years ago
- SummVis is an interactive visualization tool for text summarization. ☆252 · Updated 2 years ago
- Neural Text Generation with Unlikelihood Training ☆309 · Updated 3 years ago
- DialoGLUE: A Natural Language Understanding Benchmark for Task-Oriented Dialogue ☆283 · Updated last year
- A list of publications on NLP interpretability (Welcome PR) ☆168 · Updated 4 years ago
- IEEE/ACM TASLP 2020: SBERT-WK: A Sentence Embedding Method By Dissecting BERT-based Word Models ☆179 · Updated 4 years ago
- Authors' implementation of EMNLP-IJCNLP 2019 paper "Answering Complex Open-domain Questions Through Iterative Query Generation" ☆195 · Updated 5 years ago
- ☆218 · Updated 4 years ago
- Awesome Neural Adaptation in Natural Language Processing. A curated list. https://arxiv.org/abs/2006.00632 ☆265 · Updated 3 years ago
- ☆97 · Updated 4 years ago
- Unsupervised Question answering via Cloze Translation ☆219 · Updated 2 years ago
- Interpretable Evaluation for AI Systems ☆364 · Updated 2 years ago
- This dataset contains 108,463 human-labeled and 656k noisily labeled pairs that feature the importance of modeling structure, context, an… ☆557 · Updated 3 years ago
- Code associated with the Don't Stop Pretraining ACL 2020 paper ☆529 · Updated 3 years ago
- Code and data to support the paper "PAQ: 65 Million Probably-Asked Questions and What You Can Do With Them" ☆202 · Updated 3 years ago
- For the code release of our arXiv paper "Revisiting Few-sample BERT Fine-tuning" (https://arxiv.org/abs/2006.05987). ☆184 · Updated last year
- Materials for the EMNLP 2020 Tutorial on "Interpreting Predictions of NLP Models" ☆199 · Updated 4 years ago
- Question Answering using Albert and Electra ☆206 · Updated last year
- An Analysis Toolkit for Natural Language Generation (Translation, Captioning, Summarization, etc.) ☆445 · Updated last month
- Repository containing code for the paper "How to Train BERT with an Academic Budget" ☆313 · Updated last year
- EMNLP 2020: "Dialogue Response Ranking Training with Large-Scale Human Feedback Data" ☆339 · Updated 5 months ago
- An elaborate and exhaustive paper list for Named Entity Recognition (NER) ☆393 · Updated 3 years ago