amazon-science / dq-bart
DQ-BART: Efficient Sequence-to-Sequence Model via Joint Distillation and Quantization (ACL 2022)
☆50 · Updated last year
Alternatives and similar repositories for dq-bart
Users interested in dq-bart are comparing it to the repositories listed below.
- ☆44 · Updated 3 years ago
- This repository contains the code for the paper "Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models". ☆48 · Updated 2 years ago
- KETOD: Knowledge-Enriched Task-Oriented Dialogue ☆32 · Updated 2 years ago
- ☆54 · Updated 2 years ago
- ☆21 · Updated 3 years ago
- Source code for the paper "Knowledge Inheritance for Pre-trained Language Models" ☆38 · Updated 3 years ago
- Transformers at any scale ☆41 · Updated last year
- Code and pre-trained models for the paper "Segatron: Segment-aware Transformer for Language Modeling and Understanding" ☆18 · Updated 2 years ago
- Repo for "On Learning to Summarize with Large Language Models as References" ☆44 · Updated last year
- ☆66 · Updated 3 years ago
- Pre-training BART in Flax on The Pile dataset ☆21 · Updated 3 years ago
- ☆38 · Updated 9 months ago
- PyTorch implementation of EncT5: Fine-tuning T5 Encoder for Non-autoregressive Tasks ☆63 · Updated 3 years ago
- Retrieval as Attention ☆83 · Updated 2 years ago
- [TMLR'23] Contrastive Search Is What You Need For Neural Text Generation (a minimal decoding sketch follows this list) ☆119 · Updated 2 years ago
- ACL 2022 paper: Imputing Out-of-Vocabulary Embeddings with LOVE Makes Language Models Robust with Little Cost ☆41 · Updated last year
- Source code for NAACL 2021 paper "TR-BERT: Dynamic Token Reduction for Accelerating BERT Inference" ☆47 · Updated 2 years ago
- Dense hybrid representations for text retrieval ☆62 · Updated 2 years ago
- Code for the AAAI 2021 paper "A Theoretical Analysis of the Repetition Problem in Text Generation". ☆53 · Updated 2 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset. ☆93 · Updated 2 years ago
- Method to improve inference time for BERT. This is an implementation of the paper titled "PoWER-BERT: Accelerating BERT Inference via Pro…" ☆61 · Updated last week
- ☆35 · Updated last year
- Mr. TyDi is a multi-lingual benchmark dataset built on TyDi, covering eleven typologically diverse languages. ☆75 · Updated 3 years ago
- ☆44 · Updated 4 years ago
- Use a production-grade retrieval system (BM25) from Python in just a few lines (a minimal usage sketch follows this list). ☆31 · Updated 2 years ago
- Code for the paper 'Data-Efficient FineTuning' ☆29 · Updated last year
- [NeurIPS 2021] COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining ☆117 · Updated last year
- ☆24 · Updated 2 years ago
- Code for EMNLP 2021 paper: Improving Sequence-to-Sequence Pre-training via Sequence Span Rewriting ☆17 · Updated 3 years ago
- The source code of the DR-BERT model and baselines ☆37 · Updated 3 years ago
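
For the contrastive search entry above, a minimal decoding sketch. It assumes the Hugging Face `transformers` `generate()` API (which runs contrastive search when `penalty_alpha > 0` and `top_k > 1`) rather than that repository's own code; the model and hyperparameters are illustrative.

```python
# Minimal contrastive-search sketch via Hugging Face transformers (not the
# linked repo's own code). generate() switches to contrastive search when
# penalty_alpha > 0 and top_k > 1.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("DQ-BART compresses seq2seq models by", return_tensors="pt")
output_ids = model.generate(
    **inputs,
    penalty_alpha=0.6,  # weight of the degeneration penalty
    top_k=4,            # candidate pool size per step
    max_new_tokens=64,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```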
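
And for the BM25 entry, a minimal usage sketch, assuming the widely used `rank_bm25` package rather than that repository's own API (which this listing does not document); the corpus and query are toy examples.

```python
# Minimal BM25 sketch using the rank_bm25 package (an assumption; the linked
# repo's own API is not shown in this listing).
from rank_bm25 import BM25Okapi

corpus = [
    "DQ-BART jointly distills and quantizes BART.",
    "BM25 is a classic lexical retrieval function.",
    "Dense retrievers embed queries and documents into vectors.",
]
tokenized_corpus = [doc.lower().split() for doc in corpus]
bm25 = BM25Okapi(tokenized_corpus)

query = "lexical retrieval with bm25".split()
print(bm25.get_scores(query))              # BM25 score for each document
print(bm25.get_top_n(query, corpus, n=1))  # best-matching raw document
```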