facebookresearch / bart_ls
Long-context pretrained encoder-decoder models
☆94 · Updated 2 years ago
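Since bart_ls provides long-context pretrained encoder-decoder (seq2seq) checkpoints, a quick way to try such a model is through the Hugging Face transformers seq2seq API. The sketch below is only illustrative: the checkpoint path, the 16K-token input length, and the summarization-style usage are assumptions, not guarantees from the repository; substitute whatever checkpoint you actually have.

```python
# Minimal sketch: load a long-context encoder-decoder checkpoint and summarize
# a long document. The model identifier is a placeholder, not an official ID.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "path/to/bart-ls-checkpoint"  # hypothetical; replace with a real local path or hub ID
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

document = "..."  # a long input document
inputs = tokenizer(document, return_tensors="pt", truncation=True, max_length=16384)
summary_ids = model.generate(**inputs, num_beams=4, max_new_tokens=256)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```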
Alternatives and similar repositories for bart_ls
Users interested in bart_ls are comparing it to the repositories listed below.
- PyTorch implementation of EncT5: Fine-tuning T5 Encoder for Non-autoregressive Tasks ☆63 · Updated 3 years ago
- Thank you BART! Rewarding Pre-Trained Models Improves Formality Style Transfer (ACL 2021) ☆30 · Updated 2 years ago
- Simple Questions Generate Named Entity Recognition Datasets (EMNLP 2022) ☆76 · Updated 2 years ago
- ☆90 · Updated last year
- ☆92 · Updated 3 years ago
- ☆38 · Updated 10 months ago
- Code for the NAACL 2021 paper "Efficient Attentions for Long Document Summarization" ☆67 · Updated 3 years ago
- [NeurIPS 2021] COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining ☆117 · Updated last year
- PyTorch reimplementation of the paper "SimCLS: A Simple Framework for Contrastive Learning of Abstractive Summarization" ☆16 · Updated 3 years ago
- Code and models for the paper "Questions Are All You Need to Train a Dense Passage Retriever" (TACL 2023) ☆62 · Updated 2 years ago
- Pre-training BART in Flax on The Pile dataset ☆21 · Updated 3 years ago
- Source code for the EMNLP 2021 paper "Learning from Noisy Labels for Entity-Centric Information Extraction" ☆55 · Updated 3 years ago
- ☆68 · Updated last month
- [TMLR'23] Contrastive Search Is What You Need For Neural Text Generation ☆119 · Updated 2 years ago
- Materials for the "Natural Language Processing for Multilingual Task-Oriented Dialogue" tutorial at ACL 2022 ☆14 · Updated 3 years ago
- Code for the paper "Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models" ☆48 · Updated 2 years ago
- ☆117 · Updated 3 years ago
- Dataset for the NAACL 2021 paper "QMSum: A New Benchmark for Query-based Multi-domain Meeting Summarization" ☆122 · Updated last year
- PyTorch code for "FactPEGASUS: Factuality-Aware Pre-training and Fine-tuning for Abstractive Summarization" (NAACL 2022) ☆39 · Updated 2 years ago
- PERFECT: Prompt-free and Efficient Few-shot Learning with Language Models ☆109 · Updated 3 years ago
- ☆77 · Updated last year
- Continue pretraining T5 on a custom dataset from available pretrained model checkpoints ☆38 · Updated 4 years ago
- Resources for the paper "CTRLsum: Towards Generic Controllable Text Summarization" ☆146 · Updated last month
- The Few-Shot Bot: Prompt-Based Learning for Dialogue Systems ☆118 · Updated 3 years ago
- Code associated with the paper "Data Augmentation using Pre-trained Transformer Models" ☆52 · Updated last year
- Question Answering and Generation for Summarization ☆71 · Updated 2 years ago
- The official repository for the paper "Efficient Long-Text Understanding Using Short-Text Models" (Ivgi et al., 2022) ☆69 · Updated 2 years ago
- [EMNLP'21] Mirror-BERT: Converting pretrained language models to universal text encoders without labels ☆79 · Updated 2 years ago
- ☆54 · Updated 2 years ago
- EMNLP 2021 - CTC: A Unified Framework for Evaluating Natural Language Generation ☆96 · Updated 2 years ago