facebookresearch / SentAugment
SentAugment is a data augmentation technique for NLP that retrieves similar sentences from a large bank of sentences. It can be used in combination with self-training and knowledge-distillation, or for retrieving paraphrases.
☆362 · Updated 3 years ago
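The core idea — retrieving augmentation sentences from a large bank by embedding similarity — can be sketched in a few lines. This is a toy illustration, not SentAugment's actual pipeline: it uses a hypothetical bag-of-words embedding and a tiny in-memory bank, whereas the real project uses large-scale sentence encoders over a web-scale corpus.

```python
# Minimal sketch of embedding-based sentence retrieval, the general idea
# behind SentAugment. The bag-of-words embedding and the toy bank below
# are illustrative stand-ins for a real sentence encoder and corpus.
import numpy as np

def embed(sentence, vocab):
    # Toy bag-of-words embedding, L2-normalized so dot product = cosine.
    vec = np.zeros(len(vocab))
    for tok in sentence.lower().split():
        if tok in vocab:
            vec[vocab[tok]] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def retrieve(query, bank, vocab, k=2):
    # Rank bank sentences by cosine similarity to the query embedding
    # and return the top-k as augmentation candidates.
    q = embed(query, vocab)
    scored = [(float(np.dot(q, embed(s, vocab))), s) for s in bank]
    scored.sort(key=lambda x: -x[0])
    return [s for _, s in scored[:k]]

bank = [
    "the cat sat on the mat",
    "a dog ran in the park",
    "the cat slept on the sofa",
]
vocab = {w: i for i, w in enumerate(sorted({t for s in bank for t in s.split()}))}
print(retrieve("the cat is on the mat", bank, vocab, k=1))
```

The retrieved neighbors can then feed self-training (label them with a teacher model) or serve directly as paraphrase candidates, as the description above suggests.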
Alternatives and similar repositories for SentAugment
Users interested in SentAugment are comparing it to the libraries listed below.
- Neural Text Generation with Unlikelihood Training ☆309 · Updated 3 years ago
- ☆345 · Updated 3 years ago
- Pretrain and finetune ELECTRA with fastai and huggingface. (Results of the paper replicated!) ☆329 · Updated last year
- [NAACL 2021] This is the code for our paper `Fine-Tuning Pre-trained Language Model with Weak Supervision: A Contrastive-Regularized Self… ☆203 · Updated 2 years ago
- DialoGLUE: A Natural Language Understanding Benchmark for Task-Oriented Dialogue ☆283 · Updated last year
- Interpretable Evaluation for (Almost) All NLP Tasks ☆195 · Updated 2 years ago
- The corresponding code from our paper "DeCLUTR: Deep Contrastive Learning for Unsupervised Textual Representations". Do not hesitate to o… ☆380 · Updated 2 years ago
- XLNet: fine tuning on RTX 2080 GPU - 8 GB ☆154 · Updated 5 years ago
- Repository for the paper "Optimal Subarchitecture Extraction for BERT" ☆472 · Updated 2 years ago
- This repository contains the code for "Generating Datasets with Pretrained Language Models". ☆188 · Updated 3 years ago
- On the Stability of Fine-tuning BERT: Misconceptions, Explanations, and Strong Baselines ☆136 · Updated last year
- Pre-Trained Models for ToD-BERT ☆292 · Updated last year
- Enhancing the BERT training with Semi-supervised Generative Adversarial Networks ☆228 · Updated 2 years ago
- Code associated with the Don't Stop Pretraining ACL 2020 paper ☆530 · Updated 3 years ago
- An elaborate and exhaustive paper list for Named Entity Recognition (NER) ☆393 · Updated 3 years ago
- Semantics-aware BERT for Language Understanding (AAAI 2020) ☆287 · Updated 2 years ago
- Question Answering using Albert and Electra ☆206 · Updated last year
- Architectures and pre-trained models for long document classification. ☆155 · Updated 4 years ago
- ☆96 · Updated 4 years ago
- New dataset ☆304 · Updated 3 years ago
- IEEE/ACM TASLP 2020: SBERT-WK: A Sentence Embedding Method By Dissecting BERT-based Word Models ☆179 · Updated 4 years ago
- Code and data to support the paper "PAQ: 65 Million Probably-Asked Questions and What You Can Do With Them" ☆202 · Updated 3 years ago
- ☆218 · Updated 4 years ago
- BOND: BERT-Assisted Open-Domain Named Entity Recognition with Distant Supervision ☆290 · Updated 3 years ago
- Authors' implementation of EMNLP-IJCNLP 2019 paper "Answering Complex Open-domain Questions Through Iterative Query Generation" ☆195 · Updated 5 years ago
- Code and Data for ACL 2020 paper "Few-Shot NLG with Pre-Trained Language Model" ☆189 · Updated 2 years ago
- With the aim of building next generation virtual assistants that can handle multimodal inputs and perform multimodal actions, we introduc… ☆132 · Updated last year
- Fork of huggingface/pytorch-pretrained-BERT for BERT on STILTs ☆107 · Updated 2 years ago
- EMNLP 2020: "Dialogue Response Ranking Training with Large-Scale Human Feedback Data" ☆340 · Updated 6 months ago
- Unsupervised Question answering via Cloze Translation ☆219 · Updated 2 years ago