amazon-science / dq-bart
DQ-BART: Efficient Sequence-to-Sequence Model via Joint Distillation and Quantization (ACL 2022)
☆50 · Updated 2 years ago
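As a rough illustration of the two techniques the headline repository combines, here is a minimal sketch in plain Python of (a) symmetric uniform weight quantization and (b) a temperature-softened knowledge-distillation loss. This is not code from the DQ-BART repository; all function names and parameter choices are illustrative.

```python
import math

def quantize_dequantize(weights, bits=8):
    """Symmetric uniform quantization: map floats to signed integers
    and back. A toy stand-in for the quantization half of DQ-BART."""
    qmax = 2 ** (bits - 1) - 1                      # e.g. 127 for 8 bits
    scale = max(abs(w) for w in weights) / qmax or 1.0
    return [round(w / scale) * scale for w in weights]

def softmax(logits, temperature=1.0):
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) over temperature-softened distributions,
    the standard knowledge-distillation objective."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

weights = [0.31, -1.20, 0.05, 0.88]
deq = quantize_dequantize(weights)                  # close to the originals
loss = distillation_loss([2.0, 0.5, -1.0], [1.8, 0.7, -0.9])  # >= 0
```

In the paper the two objectives are trained jointly (the quantized student is distilled from a full-precision teacher); here they are shown separately only to keep the sketch short.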
Alternatives and similar repositories for dq-bart
Users interested in dq-bart are comparing it to the libraries listed below.
- This repository contains the code for the paper "Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models". ☆48 · Updated 3 years ago
- [TMLR'23] Contrastive Search Is What You Need For Neural Text Generation. ☆119 · Updated 2 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset. ☆93 · Updated 2 years ago
- Code for the paper "Data-Efficient FineTuning". ☆29 · Updated 2 years ago
- Using a business-level retrieval system (BM25) in Python in just a few lines. ☆31 · Updated 2 years ago
- ☆54 · Updated 2 years ago
- Retrieval as Attention. ☆83 · Updated 2 years ago
- ☆100 · Updated 2 years ago
- The official repository for the paper "Efficient Long-Text Understanding Using Short-Text Models" (Ivgi et al., 2022). ☆69 · Updated 2 years ago
- ☆38 · Updated 11 months ago
- [AAAI 2024] Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following. ☆79 · Updated 10 months ago
- Long-context pretrained encoder-decoder models. ☆95 · Updated 2 years ago
- BLOOM+1: Adapting the BLOOM model to support a new, unseen language. ☆73 · Updated last year
- [EACL 2023] CoTEVer: Chain-of-Thought Prompting Annotation Toolkit for Explanation Verification. ☆41 · Updated 2 years ago
- 🦮 Code and pretrained models for the Findings of ACL 2022 paper "LaPraDoR: Unsupervised Pretrained Dense Retriever for Zero-Shot Text Retrie…". ☆49 · Updated 3 years ago
- A library for parameter-efficient and composable transfer learning for NLP with sparse fine-tunings. ☆74 · Updated 11 months ago
- Can LLMs generate code-mixed sentences through zero-shot prompting? ☆11 · Updated 2 years ago
- PyTorch implementation of EncT5: Fine-tuning T5 Encoder for Non-autoregressive Tasks. ☆63 · Updated 3 years ago
- KETOD: Knowledge-Enriched Task-Oriented Dialogue. ☆32 · Updated 2 years ago
- [ICML 2023] Exploring the Benefits of Training Expert Language Models over Instruction Tuning. ☆98 · Updated 2 years ago
- PERFECT: Prompt-free and Efficient Few-shot Learning with Language Models. ☆109 · Updated 3 years ago
- ACL 2022 paper: "Imputing Out-of-Vocabulary Embeddings with LOVE Makes Language Models Robust with Little Cost". ☆41 · Updated last year
- ☆24 · Updated 2 years ago
- Transformers at any scale. ☆41 · Updated last year
- An Empirical Study on Contrastive Search and Contrastive Decoding for Open-ended Text Generation. ☆27 · Updated last year
- Easy ModernBERT fine-tuning and multi-task learning. ☆59 · Updated 2 weeks ago
- Vocabulary Trimming (VT) is a model compression technique that reduces a multilingual LM's vocabulary to a target language by deleting ir… ☆42 · Updated 8 months ago
- Repo for "On Learning to Summarize with Large Language Models as References". ☆44 · Updated 2 years ago
- Source code for the NAACL 2021 paper "TR-BERT: Dynamic Token Reduction for Accelerating BERT Inference". ☆47 · Updated 3 years ago
- ☆95 · Updated last year