codecaution / EvoMoE
☆19 · Updated 3 years ago
Alternatives and similar repositories for EvoMoE
Users interested in EvoMoE are comparing it to the libraries listed below.
- ☆143 · Updated last year
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆56 · Updated 2 years ago
- Long Context Extension and Generalization in LLMs ☆62 · Updated last year
- Fast and Robust Early-Exiting Framework for Autoregressive Language Models with Synchronized Parallel Decoding (EMNLP 2023 Long) ☆65 · Updated last year
- [ICLR'24 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy" ☆103 · Updated 7 months ago
- [ACL 2024] Not All Experts are Equal: Efficient Expert Pruning and Skipping for Mixture-of-Experts Large Language Models ☆113 · Updated last year
- ThinK: Thinner Key Cache by Query-Driven Pruning ☆27 · Updated 11 months ago
- [NeurIPS 2023] Make Your Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning ☆33 · Updated 2 years ago
- Source code for "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts" (EMNLP 2023) ☆44 · Updated last year
- ☆39 · Updated last year
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆58 · Updated 11 months ago
- Code accompanying the paper "Massive Activations in Large Language Models" ☆195 · Updated last year
- This package implements THOR: Transformer with Stochastic Experts. ☆65 · Updated 4 years ago
- Code for the paper "Patch-Level Training for Large Language Models" ☆97 · Updated 2 months ago
- ☆34 · Updated 2 years ago
- ☆27 · Updated 2 months ago
- [ACL 2023 Findings] Emergent Modularity in Pre-trained Transformers ☆26 · Updated 2 years ago
- Code for merging large language models ☆35 · Updated last year
- Code for the paper "Diffusion Language Models Can Perform Many Tasks with Scaling and Instruction-Finetuning" ☆84 · Updated 2 years ago
- ☆112 · Updated last year
- [ICLR 2025] MiniPLM: Knowledge Distillation for Pre-Training Language Models ☆71 · Updated last year
- [ICML 2024 Spotlight] Fine-Tuning Pre-trained Large Language Models Sparsely ☆24 · Updated last year
- [NeurIPS'24 Spotlight] Observational Scaling Laws ☆58 · Updated last year
- Activation-aware Singular Value Decomposition for Compressing Large Language Models ☆84 · Updated last year
- Unofficial implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆177 · Updated last year
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆78 · Updated last year
- 🚀 LLaMA-MoE v2: Exploring Sparsity of LLaMA from Perspective of Mixture-of-Experts with Post-Training ☆91 · Updated last year
- ☆18 · Updated last year
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings ☆168 · Updated last year
- AnchorAttention: Improved attention for long-context LLM training ☆213 · Updated last year