☆58, updated May 4, 2024
Alternatives and similar repositories for Pregated_MoE
Users interested in Pregated_MoE are comparing it to the repositories listed below.
- Open-source release of LazyDP, published at ASPLOS 2024 (☆22, updated May 5, 2024)
- ☆26, updated Dec 3, 2025
- ☆167, updated Feb 1, 2025
- Explore Inter-layer Expert Affinity in MoE Model Inference (☆16, updated May 6, 2024)
- Tender: Accelerating Large Language Models via Tensor Decomposition and Runtime Requantization (ISCA '24) (☆31, updated Jul 4, 2024)
- Code release for AdapMoE, accepted at ICCAD 2024 (☆35, updated Apr 28, 2025)
- [ICML '25] Official code for the paper "Occult: Optimizing Collaborative Communication across Experts for Accelerated Parallel MoE Training an…" (☆13, updated Apr 17, 2025)
- ☆131, updated Nov 11, 2024
- Source code for the paper "MILLION: Mastering Long-Context LLM Inference Via Outlier-Immunized KV Product Qu…" (☆23, updated Apr 2, 2025)
- PyTorch library for cost-effective, fast, and easy serving of MoE models (☆284, updated Feb 26, 2026)
- [DATE '25, TCAD '25] Terafly: A Multi-Node FPGA-Based Accelerator Design for Efficient Cooperative Inference in LLMs (☆28, updated Nov 13, 2025)
- Official PyTorch implementation of CD-MOE (☆12, updated Mar 29, 2025)
- [NAACL '25 🏆 SAC Award] Official code for "Advancing MoE Efficiency: A Collaboration-Constrained Routing (C2R) Strategy for Better Expert…" (☆15, updated Feb 4, 2025)
- [ICLR 2025] Linear Combination of Saved Checkpoints Makes Consistency and Diffusion Models Better (☆16, updated Feb 15, 2025)
- ☆11, updated Sep 20, 2024
- Curated collection of papers on MoE model inference (☆345, updated Oct 20, 2025)
- ☆15, updated Apr 11, 2024
- ☆16, updated Dec 9, 2023
- Scaling Up Memory Disaggregated Applications with SMART (☆34, updated Apr 23, 2024)
- Hi-Speed DNN Training with Espresso: Unleashing the Full Potential of Gradient Compression with Near-Optimal Usage Strategies (EuroSys '2… (☆15, updated Sep 21, 2023)
- [EMNLP 2024] Quantize LLMs to extremely low bit-widths and finetune the quantized models (☆15, updated Jul 18, 2024)
- An Efficient and General Framework for Layerwise-Adaptive Gradient Compression (☆14, updated Oct 27, 2023)
- [NeurIPS 2023] ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer (☆30, updated Dec 6, 2023)
- Official code for the paper "HEXA-MoE: Efficient and Heterogeneous-Aware MoE Acceleration with Zero Computation Redundancy" (☆15, updated Mar 6, 2025)
- STREAMer: Benchmarking remote volatile and non-volatile memory bandwidth (☆17, updated Aug 21, 2023)
- ☆15, updated Mar 18, 2025
- ☆38, updated Oct 11, 2025
- ☆17, updated Jun 11, 2025
- Papers and accompanying code for AI systems (☆351, updated Feb 10, 2026)
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference (☆374, updated Jul 10, 2025)
- PIM-DL: Expanding the Applicability of Commodity DRAM-PIMs for Deep Learning via Algorithm-System Co-Optimization (☆36, updated Feb 21, 2024)
- ☆49, updated Sep 26, 2025
- Code for the MLSys 2024 paper "SiDA-MoE: Sparsity-Inspired Data-Aware Serving for Efficient and Scalable Large Mixture-of-Experts Models" (☆22, updated Apr 13, 2024)
- ☆24, updated Apr 13, 2025
- Book reading (☆16, updated Jun 5, 2020)
- ☆22, updated Jun 4, 2023
- Lab 5 project of MIT 6.5940: deploying LLaMA2-7B-chat on a laptop with TinyChatEngine (☆18, updated Dec 1, 2023)
- ☆20, updated Feb 10, 2025
- [ICLR 2025] Drop-Upcycling: Training Sparse Mixture of Experts with Partial Re-initialization (☆25, updated Oct 5, 2025)