DRSY / EMO
[ICLR 2024] [EMO: Earth Mover Distance Optimization for Auto-Regressive Language Modeling](https://arxiv.org/abs/2310.04691)
☆114 · Updated 8 months ago
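For background on the headline paper's title: the earth mover (Wasserstein-1) distance measures the minimum cost of transporting probability mass from one distribution to another. EMO's actual objective uses an embedding-based transport cost over the vocabulary; the sketch below shows only the classical special case for distributions on an ordered 1-D support, where the distance reduces to the L1 distance between the two CDFs (`emd_1d` is a name chosen here for illustration, not from the paper's code).

```python
def emd_1d(p, q):
    """Earth mover (Wasserstein-1) distance between two discrete
    distributions p and q on the same ordered support with unit spacing.

    On a 1-D support this equals the sum of absolute differences
    between the running CDFs of p and q."""
    assert len(p) == len(q), "distributions must share a support"
    cdf_p = cdf_q = 0.0
    total = 0.0
    for pi, qi in zip(p, q):
        cdf_p += pi
        cdf_q += qi
        total += abs(cdf_p - cdf_q)
    return total


# Moving all mass one bin to the right costs exactly 1 unit of work.
print(emd_1d([1.0, 0.0], [0.0, 1.0]))  # → 1.0
# Identical distributions require no transport.
print(emd_1d([0.25, 0.5, 0.25], [0.25, 0.5, 0.25]))  # → 0.0
```

Unlike cross-entropy, which only looks at the probability assigned to the target token, a transport-based loss accounts for how "far" the predicted mass is from the target under a ground metric.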
Related projects
Alternatives and complementary repositories for EMO
- [ICLR 2023] Tailoring Language Generation Models under Total Variation Distance ☆21 · Updated last year
- Code for the paper "UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning", ACL 2022 ☆58 · Updated 2 years ago
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆44 · Updated last year
- [EMNLP 2023 Main] Sparse Low-rank Adaptation of Pre-trained Language Models ☆69 · Updated 8 months ago
- Released code for our ICLR23 paper. ☆63 · Updated last year
- [EVA ICLR'23; LARA ICML'22] Efficient attention mechanisms via control variates, random features, and importance sampling ☆79 · Updated last year
- One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning ☆38 · Updated last year
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆73 · Updated 8 months ago
- A Transformer model based on Gated Attention Units (preview version) ☆97 · Updated last year
- This PyTorch package implements MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation (NAACL 2022). ☆97 · Updated 2 years ago
- This is the official repository for "Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts" (EMNLP 2022) ☆97 · Updated last year
- Lion and Adam optimization comparison ☆56 · Updated last year
- [SIGIR'24] The official implementation code of MOELoRA. ☆124 · Updated 3 months ago
- [NeurIPS 2023] GitHub repository for "Composing Parameter-Efficient Modules with Arithmetic Operations" ☆58 · Updated 11 months ago
- The source code of "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts" (EMNLP 2023) ☆35 · Updated 7 months ago
- Mixture of Attention Heads ☆39 · Updated 2 years ago
- [NeurIPS 2023] Make Your Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning ☆29 · Updated last year
- TRACE: A Comprehensive Benchmark for Continual Learning in Large Language Models ☆59 · Updated 9 months ago
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings ☆144 · Updated 5 months ago
- A paper list about diffusion models for natural language processing. ☆174 · Updated last year
- [IJCAI'23] The official GitHub page of the paper "Diffusion Models for Non-autoregressive Text Generation: A Survey". ☆48 · Updated 5 months ago
- [ICML 2024] Can AI Assistants Know What They Don't Know? ☆70 · Updated 9 months ago
- [ICLR 2024] Repository for the paper "DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning" ☆94 · Updated 7 months ago