PKU-SEC-Lab / AdapMoE
Code release for AdapMoE, accepted at ICCAD 2024
☆34 · Updated 7 months ago
Alternatives and similar repositories for AdapMoE
Users interested in AdapMoE are comparing it to the repositories listed below.
- ☆57 · Updated last year
- ArkVale: Efficient Generative LLM Inference with Recallable Key-Value Eviction (NeurIPS'24) ☆46 · Updated 11 months ago
- Tender: Accelerating Large Language Models via Tensor Decomposition and Runtime Requantization (ISCA'24) ☆20 · Updated last year
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs ☆59 · Updated 8 months ago
- ☆207 · Updated last month
- [HPCA 2026] A GPU-optimized system for efficient long-context LLM decoding with low-bit KV cache ☆63 · Updated last week
- LLM Inference with Microscaling Format ☆32 · Updated last year
- Repo for SpecEE: Accelerating Large Language Model Inference with Speculative Early Exiting (ISCA'25) ☆68 · Updated 7 months ago
- ☆112 · Updated 2 years ago
- [DAC'25] Official implementation of "HybriMoE: Hybrid CPU-GPU Scheduling and Cache Management for Efficient MoE Inference" ☆90 · Updated 5 months ago
- Code Repository of Evaluating Quantized Large Language Models ☆137 · Updated last year
- ☆15 · Updated last year
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs ☆121 · Updated 4 months ago
- [HPCA'21] SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning ☆114 · Updated last year
- ☆80 · Updated last year
- ☆58 · Updated last year
- ☆60 · Updated last year
- ☆24 · Updated last month
- [COLM 2024] SKVQ: Sliding-window Key and Value Cache Quantization for Large Language Models ☆24 · Updated last year
- Explore Inter-layer Expert Affinity in MoE Model Inference ☆15 · Updated last year
- ☆32 · Updated last week
- ☆23 · Updated last year
- [ICLR 2025] TidalDecode: Fast and Accurate LLM Decoding with Position Persistent Sparse Attention ☆49 · Updated 3 months ago
- ☆137 · Updated last week
- [NeurIPS'25 Spotlight] Adaptive Attention Sparsity with Hierarchical Top-p Pruning ☆69 · Updated 2 weeks ago
- [ICLR 2025] Palu: Compressing KV-Cache with Low-Rank Projection ☆148 · Updated 9 months ago
- PIM-DL: Expanding the Applicability of Commodity DRAM-PIMs for Deep Learning via Algorithm-System Co-Optimization ☆33 · Updated last year
- Code for the AAAI 2024 Oral paper "OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Model…" ☆68 · Updated last year
- InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management (OSDI'24) ☆161 · Updated last year
- LLM inference analyzer for different hardware platforms ☆96 · Updated 4 months ago