amazon-science / dq-bart
DQ-BART: Efficient Sequence-to-Sequence Model via Joint Distillation and Quantization (ACL 2022)
☆50 · Updated 2 years ago
Alternatives and similar repositories for dq-bart
Users interested in dq-bart are comparing it to the repositories listed below
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset.☆96 · Updated 2 years ago
- This repository contains the code for the paper "Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models".☆48 · Updated 3 years ago
- ☆54 · Updated 2 years ago
- A library for parameter-efficient and composable transfer learning for NLP with sparse fine-tunings.☆75 · Updated last year
- ☆39 · Updated last year
- [TMLR'23] Contrastive Search Is What You Need For Neural Text Generation☆121 · Updated 2 years ago
- Source code for NAACL 2021 paper "TR-BERT: Dynamic Token Reduction for Accelerating BERT Inference"☆48 · Updated 3 years ago
- [AAAI 2024] Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following☆78 · Updated last year
- Adding new tasks to T0 without catastrophic forgetting☆33 · Updated 3 years ago
- Retrieval as Attention☆82 · Updated 2 years ago
- ☆24 · Updated 3 years ago
- BLOOM+1: Adapting the BLOOM model to support a new unseen language☆74 · Updated last year
- Method to improve inference time for BERT. This is an implementation of the paper titled "PoWER-BERT: Accelerating BERT Inference via Pro…☆62 · Updated 2 months ago
- [NAACL 2022] Robust (Controlled) Table-to-Text Generation with Structure-Aware Equivariance Learning.☆57 · Updated last year
- A Structured Span Selector (NAACL 2022). A structured span selector with a WCFG for span selection tasks (coreference resolution, semanti…☆21 · Updated 3 years ago
- Code for paper 'Data-Efficient FineTuning'☆28 · Updated 2 years ago
- An Empirical Study On Contrastive Search And Contrastive Decoding For Open-ended Text Generation☆27 · Updated last year
- Using a business-level retrieval system (BM25) with Python in just a few lines.☆31 · Updated 2 years ago
- Task Compass: Scaling Multi-task Pre-training with Task Prefix (EMNLP 2022: Findings) (stay tuned & more will be updated)☆22 · Updated 3 years ago
- Long-context pretrained encoder-decoder models☆96 · Updated 3 years ago
- Transformers at any scale☆42 · Updated last year
- Code for the paper "BERT Loses Patience: Fast and Robust Inference with Early Exit".☆66 · Updated 4 years ago
- The Multitask Long Document Benchmark☆41 · Updated 3 years ago
- Official code and model checkpoints for our EMNLP 2022 paper "RankGen - Improving Text Generation with Large Ranking Models" (https://arx…☆138 · Updated 2 years ago
- Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models☆143 · Updated 3 years ago
- An extension of the Transformers library to include a T5ForSequenceClassification class.☆40 · Updated 2 years ago
- KETOD: Knowledge-Enriched Task-Oriented Dialogue☆32 · Updated 2 years ago
- [ICML 2023] Exploring the Benefits of Training Expert Language Models over Instruction Tuning☆98 · Updated 2 years ago
- PERFECT: Prompt-free and Efficient Few-shot Learning with Language Models☆110 · Updated 3 years ago
- 🦮 Code and pretrained models for Findings of ACL 2022 paper "LaPraDoR: Unsupervised Pretrained Dense Retriever for Zero-Shot Text Retrie…☆49 · Updated 3 years ago