casys-kaist / LLMServingSim
LLMServingSim: A HW/SW Co-Simulation Infrastructure for LLM Inference Serving at Scale
☆80 · Updated 3 weeks ago
Alternatives and similar repositories for LLMServingSim:
Users interested in LLMServingSim are comparing it to the libraries listed below.
- NeuPIMs Simulator ☆67 · Updated 7 months ago
- ☆24 · Updated last year
- ☆46 · Updated last month
- [HPCA'24] Smart-Infinity: Fast Large Language Model Training using Near-Storage Processing on a Real System ☆39 · Updated 10 months ago
- UPMEM LLM Framework allows profiling PyTorch layers and functions and simulating those layers/functions with a given hardware profile. ☆17 · Updated 2 months ago
- ONNXim is a fast cycle-level simulator that can model multi-core NPUs for DNN inference ☆87 · Updated last month
- ☆107 · Updated 6 months ago
- ☆21 · Updated 2 months ago
- LLM Inference analyzer for different hardware platforms ☆47 · Updated this week
- ☆53 · Updated 7 months ago
- LLM serving cluster simulator ☆90 · Updated 9 months ago
- Fast and Efficient Model Serving Using Multi-GPUs with Direct-Host-Access (ACM EuroSys '23) ☆55 · Updated 10 months ago
- PIM-DL: Expanding the Applicability of Commodity DRAM-PIMs for Deep Learning via Algorithm-System Co-Optimization ☆27 · Updated 11 months ago
- ☆37 · Updated 8 months ago
- ☆23 · Updated 2 years ago
- MAGIS: Memory Optimization via Coordinated Graph Transformation and Scheduling for DNN (ASPLOS'24) ☆49 · Updated 8 months ago
- ☆115 · Updated 2 weeks ago
- ☆11 · Updated 3 weeks ago
- ☆62 · Updated 4 years ago
- A Cycle-level simulator for M2NDP ☆22 · Updated 2 months ago
- Tender: Accelerating Large Language Models via Tensor Decomposition and Runtime Requantization (ISCA'24) ☆13 · Updated 6 months ago
- ☆35 · Updated last year
- InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management (OSDI'24) ☆102 · Updated 6 months ago
- Processing-In-Memory (PIM) Simulator ☆147 · Updated last month
- ☆17 · Updated last year
- This repository is a meta package to provide Samsung OneMCC (Memory Coupled Computing) infrastructure. ☆27 · Updated last year
- Stateful LLM Serving ☆44 · Updated 6 months ago
- ☆33 · Updated 2 years ago
- Curated collection of papers in MoE model inference ☆41 · Updated last week
- mNPUsim: A Cycle-accurate Multi-core NPU Simulator (IISWC 2023) ☆44 · Updated last month