abhibambhaniya / GenZ-LLM-Analyzer
LLM Inference analyzer for different hardware platforms
☆97 · Updated 4 months ago
Alternatives and similar repositories for GenZ-LLM-Analyzer
Users interested in GenZ-LLM-Analyzer are comparing it to the libraries listed below.
- ☆205 · Updated 3 weeks ago
- LLMServingSim: A HW/SW Co-Simulation Infrastructure for LLM Inference Serving at Scale ☆157 · Updated 4 months ago
- LLM serving cluster simulator ☆120 · Updated last year
- ☆54 · Updated 4 months ago
- ☆57 · Updated last year
- ☆90 · Updated 7 months ago
- MAGIS: Memory Optimization via Coordinated Graph Transformation and Scheduling for DNN (ASPLOS'24) ☆55 · Updated last year
- ☆159 · Updated last year
- ArkVale: Efficient Generative LLM Inference with Recallable Key-Value Eviction (NIPS'24) ☆44 · Updated 11 months ago
- ☆136 · Updated 3 weeks ago
- PIM-DL: Expanding the Applicability of Commodity DRAM-PIMs for Deep Learning via Algorithm-System Co-Optimization ☆33 · Updated last year
- DeepSeek-V3/R1 inference performance simulator ☆168 · Updated 7 months ago
- WaferLLM: Large Language Model Inference at Wafer Scale ☆73 · Updated 3 weeks ago
- Explore Inter-layer Expert Affinity in MoE Model Inference ☆15 · Updated last year
- NeuPIMs: NPU-PIM Heterogeneous Acceleration for Batched LLM Inferencing ☆100 · Updated last year
- Magicube is a high-performance library for quantized sparse matrix operations (SpMM and SDDMM) of deep learning on Tensor Cores. ☆89 · Updated 2 years ago
- ☆18 · Updated last year
- ☆156 · Updated last year
- Artifact for paper "PIM is All You Need: A CXL-Enabled GPU-Free System for LLM Inference", ASPLOS 2025 ☆102 · Updated 6 months ago
- Automatic Mapping Generation, Verification, and Exploration for ISA-based Spatial Accelerators ☆116 · Updated 3 years ago
- Repo for SpecEE: Accelerating Large Language Model Inference with Speculative Early Exiting (ISCA'25) ☆67 · Updated 6 months ago
- UPMEM LLM Framework allows profiling PyTorch layers and functions and simulating those layers/functions with a given hardware profile. ☆36 · Updated 3 months ago
- TACOS: [T]opology-[A]ware [Co]llective Algorithm [S]ynthesizer for Distributed Machine Learning ☆28 · Updated 5 months ago
- ☆79 · Updated last month
- A lightweight design for computation-communication overlap. ☆187 · Updated last month
- ☆24 · Updated 3 years ago
- ☆16 · Updated 8 months ago
- ☆45 · Updated last year
- [DAC'25] Official implementation of "HybriMoE: Hybrid CPU-GPU Scheduling and Cache Management for Efficient MoE Inference" ☆89 · Updated 5 months ago
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models. ☆68 · Updated 8 months ago