pku-lemonade / TokenSim
TokenSim is a tool for simulating the behavior of large language models (LLMs) in a distributed environment.
☆17 · Updated 2 months ago
Alternatives and similar repositories for TokenSim
Users interested in TokenSim are comparing it to the libraries listed below.
- LLMServingSim: A HW/SW Co-Simulation Infrastructure for LLM Inference Serving at Scale ☆160 · Updated 4 months ago
- LLM inference analyzer for different hardware platforms ☆96 · Updated 4 months ago
- LLM serving cluster simulator ☆122 · Updated last year
- ☆209 · Updated last month
- TACOS: [T]opology-[A]ware [Co]llective Algorithm [S]ynthesizer for Distributed Machine Learning ☆29 · Updated 5 months ago
- ☆55 · Updated 5 months ago
- ArkVale: Efficient Generative LLM Inference with Recallable Key-Value Eviction (NeurIPS'24) ☆46 · Updated 11 months ago
- PIM-DL: Expanding the Applicability of Commodity DRAM-PIMs for Deep Learning via Algorithm-System Co-Optimization ☆33 · Updated last year
- Repository for MLCommons Chakra schema and tools ☆39 · Updated last year
- MAGIS: Memory Optimization via Coordinated Graph Transformation and Scheduling for DNN (ASPLOS'24) ☆55 · Updated last year
- Open-source implementation for "Helix: Serving Large Language Models over Heterogeneous GPUs and Network via Max-Flow" ☆73 · Updated last month
- UPMEM LLM Framework allows profiling PyTorch layers and functions and simulating those layers/functions with a given hardware profile. ☆36 · Updated 3 months ago
- NeuPIMs: NPU-PIM Heterogeneous Acceleration for Batched LLM Inferencing ☆100 · Updated last year
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving (OSDI'23) ☆91 · Updated 2 years ago
- ☆57 · Updated last year
- Optimal Kernel Orchestration for Tensor Programs with Korch (ASPLOS'24) ☆38 · Updated 8 months ago
- WaferLLM: Large Language Model Inference at Wafer Scale ☆75 · Updated last month
- ☆24 · Updated 4 years ago
- Magicube is a high-performance library for quantized sparse matrix operations (SpMM and SDDMM) in deep learning on Tensor Cores. ☆90 · Updated 3 years ago
- Proteus: A High-Throughput Inference-Serving System with Accuracy Scaling ☆12 · Updated last year
- ☆139 · Updated last week
- ☆79 · Updated last month
- ☆23 · Updated last year
- ☆24 · Updated 3 years ago
- ☆158 · Updated last year
- Artifact for the paper "PIM is All You Need: A CXL-Enabled GPU-Free System for LLM Inference" (ASPLOS'25) ☆105 · Updated 6 months ago
- ☆25 · Updated 2 years ago
- ☆159 · Updated last year
- ☆112 · Updated last year
- Artifact for PPoPP'22 QGTC: Accelerating Quantized GNN via GPU Tensor Core. ☆30 · Updated 3 years ago