ranggihwang / Pregated_MoE
☆58 · Updated last year
Alternatives and similar repositories for Pregated_MoE
Users interested in Pregated_MoE are comparing it to the libraries listed below.
- ArkVale: Efficient Generative LLM Inference with Recallable Key-Value Eviction (NeurIPS'24) ☆53 · Updated last year
- Tender: Accelerating Large Language Models via Tensor Decomposition and Runtime Requantization (ISCA'24) ☆25 · Updated last year
- [HPCA'24] Smart-Infinity: Fast Large Language Model Training using Near-Storage Processing on a Real System ☆52 · Updated 6 months ago
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs ☆61 · Updated 10 months ago
- LLM inference analyzer for different hardware platforms ☆100 · Updated 2 months ago
- ☆164 · Updated last year
- [HPCA 2026] A GPU-optimized system for efficient long-context LLM decoding with a low-bit KV cache. ☆80 · Updated last month
- InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management (OSDI'24) ☆174 · Updated last year
- ☆26 · Updated last year
- PIM-DL: Expanding the Applicability of Commodity DRAM-PIMs for Deep Learning via Algorithm-System Co-Optimization ☆36 · Updated last year
- LLMServingSim: A HW/SW Co-Simulation Infrastructure for LLM Inference Serving at Scale ☆178 · Updated 6 months ago
- ☆81 · Updated 8 months ago
- ☆224 · Updated 3 months ago
- MAGIS: Memory Optimization via Coordinated Graph Transformation and Scheduling for DNN (ASPLOS'24) ☆56 · Updated last year
- FlashSparse significantly reduces the computation redundancy for unstructured sparsity (for SpMM and SDDMM) on Tensor Cores through a Swa… ☆39 · Updated 4 months ago
- Accelerating Large-Scale Reasoning Model Inference with Sparse Self-Speculative Decoding ☆88 · Updated 2 months ago
- [NeurIPS 2025] ClusterFusion: Expanding Operator Fusion Scope for LLM Inference via Cluster-Level Collective Primitive ☆66 · Updated 2 months ago
- ☆27 · Updated 10 months ago
- [DAC'25] Official implementation of "HybriMoE: Hybrid CPU-GPU Scheduling and Cache Management for Efficient MoE Inference" ☆101 · Updated last month
- ☆83 · Updated last year
- Keyformer proposes KV cache reduction through key-token identification, without the need for fine-tuning ☆58 · Updated last year
- Explore Inter-layer Expert Affinity in MoE Model Inference ☆16 · Updated last year
- ☆16 · Updated last year
- ☆131 · Updated last year
- WaferLLM: Large Language Model Inference at Wafer Scale ☆88 · Updated last month
- ☆45 · Updated last year
- Magicube is a high-performance library for quantized sparse matrix operations (SpMM and SDDMM) of deep learning on Tensor Cores. ☆91 · Updated 3 years ago
- ☆28 · Updated last year
- A Vectorized N:M Format for Unleashing the Power of Sparse Tensor Cores ☆57 · Updated 2 years ago
- The UPMEM LLM Framework allows profiling PyTorch layers and functions and simulating those layers/functions with a given hardware profile. ☆37 · Updated 6 months ago