vllm-project / dashboard
vLLM performance dashboard
☆27 · Updated last year
Alternatives and similar repositories for dashboard
Users interested in dashboard are comparing it to the libraries listed below.
- ☆84 · Updated last month
- KV cache compression for high-throughput LLM inference ☆126 · Updated 3 months ago
- ☆190 · Updated last week
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆73 · Updated 8 months ago
- Compare different hardware platforms via the Roofline Model for LLM inference tasks (see the roofline sketch after this list) ☆100 · Updated last year
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆116 · Updated 5 months ago
- ☆58 · Updated 2 weeks ago
- Efficient, Flexible, and Highly Fault-Tolerant Model Service Management Based on SGLang ☆51 · Updated 6 months ago
- ☆69 · Updated last month
- ☆75 · Updated 3 weeks ago
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA, using CUDA cores for the decoding stage of LLM inference ☆36 · Updated last month
- Fast and memory-efficient exact attention ☆68 · Updated last week
- Vocabulary Parallelism ☆19 · Updated 2 months ago
- An easy-to-use package for implementing SmoothQuant for LLMs ☆97 · Updated last month
- QQQ is an innovative and hardware-optimized W4A8 quantization solution for LLMs ☆121 · Updated last month
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆109 · Updated this week
- ☆94 · Updated 8 months ago
- ☆132 · Updated 2 months ago
- PyTorch bindings for CUTLASS grouped GEMM ☆89 · Updated 2 weeks ago
- PyTorch bindings for CUTLASS grouped GEMM ☆121 · Updated 4 months ago
- Odysseus: Playground of LLM Sequence Parallelism ☆69 · Updated 10 months ago
- Code for the paper [ICLR 2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference ☆103 · Updated this week
- ☆117 · Updated last year
- LLM Serving Performance Evaluation Harness ☆78 · Updated 2 months ago
- A lightweight design for computation-communication overlap ☆92 · Updated last week
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models (a minimal scaling sketch follows this list) ☆23 · Updated last year
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆41 · Updated 2 weeks ago
- DeeperGEMM: crazy optimized version ☆69 · Updated last week
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆206 · Updated last year
- Benchmark suite for LLMs from Fireworks.ai ☆72 · Updated this week
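
As context for the Roofline Model entry above: the roofline bounds attainable throughput by min(peak compute, memory bandwidth × arithmetic intensity). The sketch below is not code from that repository; the device numbers and intensities are illustrative assumptions. It shows why LLM decode is typically memory-bound while prefill can be compute-bound.

```python
# Minimal roofline sketch for LLM inference. All device numbers and
# arithmetic intensities below are illustrative assumptions, not taken
# from the repository listed above.

def attainable_flops(peak_flops: float, mem_bw: float, intensity: float) -> float:
    """Roofline bound: min(peak compute, bandwidth * arithmetic intensity)."""
    return min(peak_flops, mem_bw * intensity)

PEAK = 300e12      # hypothetical accelerator: 300 TFLOP/s peak compute
BW = 2e12          # hypothetical 2 TB/s memory bandwidth
ridge = PEAK / BW  # FLOPs/byte at which a kernel becomes compute-bound

# Decode: each fp16 weight (2 bytes) is read once per generated token for
# ~2 FLOPs (multiply-add), so intensity is roughly 1 FLOP/byte.
# Prefill: many tokens reuse each weight read, so intensity is far higher.
for phase, intensity in [("decode", 1.0), ("prefill", 300.0)]:
    flops = attainable_flops(PEAK, BW, intensity)
    bound = "compute-bound" if intensity >= ridge else "memory-bound"
    print(f"{phase}: {flops / 1e12:.1f} TFLOP/s attainable ({bound})")
```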
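
And for the two SmoothQuant entries: SmoothQuant migrates quantization difficulty from activation outliers into the weights via a per-channel scale s_j = max|X_j|^α / max|W_j|^(1−α), leaving the layer output unchanged since (X diag(s)⁻¹)(diag(s) W) = XW. Below is a minimal PyTorch sketch of that scaling step, assuming fp32 tensors and α = 0.5; it is not the API of either packaged repo.

```python
# Minimal SmoothQuant-style scale migration (a sketch, not the packaged API).
import torch

def smooth_scales(x: torch.Tensor, w: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    """Per-input-channel scales s_j = max|X_j|^alpha / max|W_j|^(1-alpha)."""
    act_max = x.abs().amax(dim=0).clamp(min=1e-5)  # per-channel activation max
    wgt_max = w.abs().amax(dim=1).clamp(min=1e-5)  # per-input-channel weight max
    return (act_max.pow(alpha) / wgt_max.pow(1.0 - alpha)).clamp(min=1e-5)

torch.manual_seed(0)
x = torch.randn(128, 64) * torch.logspace(-1, 1, 64)  # calibration acts with outlier channels
w = torch.randn(64, 32)                               # (in_features, out_features)

s = smooth_scales(x, w)
x_smooth = x / s               # easier to quantize: outlier channels are damped
w_smooth = w * s.unsqueeze(1)  # absorbs the scales, so the product is unchanged

assert torch.allclose(x @ w, x_smooth @ w_smooth, atol=1e-3)
print("max |activation| before / after:", x.abs().max().item(), x_smooth.abs().max().item())
```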