john-hewitt / backpacks-flash-attn
The original Backpack Language Model implementation, a fork of FlashAttention
☆66 · Updated last year
Alternatives and similar repositories for backpacks-flash-attn:
Users interested in backpacks-flash-attn are comparing it to the libraries listed below:
- The official repository for "Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts" (EMNLP 2022) ☆100 · Updated 2 years ago
- DiffusER: Discrete Diffusion via Edit-based Reconstruction (Reid, Hellendoorn & Neubig, 2022) ☆54 · Updated last year
- ☆33 · Updated last year
- Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models ☆138 · Updated 2 years ago
- [ICML 2023] Code for our paper "Compositional Exemplars for In-context Learning" ☆97 · Updated last year
- [ACL'24 Oral] Analysing The Impact of Sequence Composition on Language Model Pre-Training ☆19 · Updated 5 months ago
- Retrieval as Attention ☆83 · Updated 2 years ago
- ☆47 · Updated 10 months ago
- ☆23 · Updated last year
- ☆127 · Updated 2 years ago
- Semi-autoregressive Simplex-based Diffusion Language Model for Text Generation and Modular Control ☆66 · Updated 2 years ago
- ☆85 · Updated 2 years ago
- [NeurIPS 2023] Repetition In Repetition Out: Towards Understanding Neural Text Degeneration from the Data Perspective ☆30 · Updated last year
- DEMix Layers for Modular Language Modeling ☆53 · Updated 3 years ago
- contrastive decoding ☆192 · Updated 2 years ago
- ☆21 · Updated 2 years ago
- [TMLR'23] Contrastive Search Is What You Need For Neural Text Generation ☆119 · Updated last year
- This is the implementation of the paper AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning (https://arxiv.org/abs/2205.1…) ☆127 · Updated last year
- ACL'23: Unified Demonstration Retriever for In-Context Learning ☆36 · Updated last year
- One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning ☆38 · Updated last year
- Repository for "Propagating Knowledge Updates to LMs Through Distillation" (NeurIPS 2023) ☆25 · Updated 5 months ago
- The accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories". Mor Geva, Roei Schuster, Jonathan Berant, and Omer Le… ☆89 · Updated 3 years ago
- [NLPCC 2022] Kformer: Knowledge Injection in Transformer Feed-Forward Layers ☆36 · Updated 2 years ago
- [NeurIPS'22 Spotlight] Data and code for our paper CoNT: Contrastive Neural Text Generation ☆150 · Updated last year
- Code base of In-Context Learning for Dialogue State tracking ☆45 · Updated last year
- ☆75 · Updated last year
- The official code of EMNLP 2022, "SCROLLS: Standardized CompaRison Over Long Language Sequences" ☆69 · Updated last year
- NAACL 2022: MCSE: Multimodal Contrastive Learning of Sentence Embeddings ☆55 · Updated 8 months ago
- TBC ☆26 · Updated 2 years ago
- A Kernel-Based View of Language Model Fine-Tuning (https://arxiv.org/abs/2210.05643) ☆74 · Updated last year