ranggihwang / Pregated_MoE
☆47 · Updated last year
Alternatives and similar repositories for Pregated_MoE
Users interested in Pregated_MoE are comparing it to the repositories listed below:
- Open-source code of LazyDP, published at ASPLOS 2024 ☆22 · Updated last year
- ☆68 · Updated last week
- ☆20 · Updated 5 months ago
- ☆142 · Updated 11 months ago
- Tender: Accelerating Large Language Models via Tensor Decomposition and Runtime Requantization (ISCA'24) ☆14 · Updated 11 months ago
- [HPCA'24] Smart-Infinity: Fast Large Language Model Training using Near-Storage Processing on a Real System ☆44 · Updated last year
- ☆24 · Updated 6 months ago
- LLMServingSim: A HW/SW Co-Simulation Infrastructure for LLM Inference Serving at Scale ☆117 · Updated 3 months ago
- NeuPIMs: NPU-PIM Heterogeneous Acceleration for Batched LLM Inferencing ☆83 · Updated 11 months ago
- LLM inference analyzer for different hardware platforms ☆69 · Updated last week
- MAGIS: Memory Optimization via Coordinated Graph Transformation and Scheduling for DNN (ASPLOS'24) ☆51 · Updated last year
- Artifact for "Marconi: Prefix Caching for the Era of Hybrid LLMs" [MLSys '25 Outstanding Paper Honorable Mention] ☆10 · Updated 2 months ago
- ☆21 · Updated last year
- ArkVale: Efficient Generative LLM Inference with Recallable Key-Value Eviction (NeurIPS'24) ☆40 · Updated 5 months ago
- Artifact for the paper "PIM is All You Need: A CXL-Enabled GPU-Free System for LLM Inference" (ASPLOS 2025) ☆63 · Updated last month
- ☆69 · Updated 11 months ago
- The UPMEM LLM Framework allows profiling PyTorch layers and functions and simulating them with a given hardware profile ☆29 · Updated this week
- LLM serving cluster simulator ☆100 · Updated last year
- ☆140 · Updated 4 months ago
- ☆36 · Updated last month
- Proteus: A High-Throughput Inference-Serving System with Accuracy Scaling ☆12 · Updated last year
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs ☆47 · Updated 2 months ago
- ☆58 · Updated last year
- ☆97 · Updated last year
- ☆17 · Updated 2 months ago
- PIM-DL: Expanding the Applicability of Commodity DRAM-PIMs for Deep Learning via Algorithm-System Co-Optimization ☆31 · Updated last year
- ☆108 · Updated last week
- InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management (OSDI'24) ☆136 · Updated 10 months ago
- Codebase and steps for artifact evaluation/reproduction of a MICRO 2024 paper ☆9 · Updated 9 months ago
- ☆31 · Updated 11 months ago