DRSY / EMO
[ICLR 2024] [EMO: Earth Mover Distance Optimization for Auto-Regressive Language Modeling](https://arxiv.org/abs/2310.04691)
☆126 · Updated last year
Alternatives and similar repositories for EMO
Users interested in EMO are comparing it to the libraries listed below.
- Official Implementation for the ICML 2022 paper "Directed Acyclic Transformer for Non-Autoregressive Machine Translation" ☆132 · Updated 2 years ago
- Code for paper "UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning", ACL 2022 ☆63 · Updated 3 years ago
- ☆142 · Updated last year
- One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning ☆40 · Updated 2 years ago
- ☆25 · Updated 4 months ago
- Released code for our ICLR 2023 paper. ☆66 · Updated 2 years ago
- Text Diffusion Model with Encoder-Decoder Transformers for Sequence-to-Sequence Generation [NAACL 2024] ☆98 · Updated 2 years ago
- A paper list about diffusion models for natural language processing. ☆182 · Updated 2 years ago
- Semi-autoregressive Simplex-based Diffusion Language Model for Text Generation and Modular Control ☆75 · Updated 3 years ago
- [ICLR 2023] Tailoring Language Generation Models under Total Variation Distance ☆21 · Updated 2 years ago
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆56 · Updated 2 years ago
- [NeurIPS 2023] GitHub repository for "Composing Parameter-Efficient Modules with Arithmetic Operations" ☆61 · Updated 2 years ago
- Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models ☆143 · Updated 3 years ago
- This is the official repository for "Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts" (EMNLP 2022) ☆104 · Updated 3 years ago
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings ☆167 · Updated last year
- [EMNLP 2022] Differentiable Data Augmentation for Contrastive Sentence Representation Learning. https://arxiv.org/abs/2210.16536 ☆40 · Updated 3 years ago
- This PyTorch package implements MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation (NAACL 2022). ☆112 · Updated 3 years ago
- ☆108 · Updated 4 months ago
- Code for "Small Models are Valuable Plug-ins for Large Language Models" ☆132 · Updated 2 years ago
- TRACE: A Comprehensive Benchmark for Continual Learning in Large Language Models ☆82 · Updated last year
- A curated list of awesome resources dedicated to Scaling Laws for LLMs ☆80 · Updated 2 years ago
- ☆193 · Updated last year
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆78 · Updated last year
- Official repository for MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models [NeurIPS 2024] ☆76 · Updated last year
- ☆187 · Updated last year
- [ACL 2025, Main Conference, Oral] Intuitive Fine-Tuning: Towards Simplifying Alignment into a Single Process ☆30 · Updated last year
- ☆215 · Updated 2 weeks ago
- [ICML 2024] Can AI Assistants Know What They Don't Know? ☆86 · Updated last year
- ☆33 · Updated 4 years ago
- Self-adaptive in-context learning ☆45 · Updated 2 years ago