argonne-lcf / LLM-Inference-Bench
☆38 · Updated 2 months ago
Alternatives and similar repositories for LLM-Inference-Bench:
Users interested in LLM-Inference-Bench are comparing it to the libraries listed below.
- LLM Serving Performance Evaluation Harness ☆73 · Updated last month
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆203 · Updated last year
- PyTorch library for cost-effective, fast and easy serving of MoE models. ☆161 · Updated last week
- Stateful LLM Serving ☆50 · Updated 3 weeks ago
- A resilient distributed training framework ☆93 · Updated 11 months ago
- A low-latency & high-throughput serving engine for LLMs ☆334 · Updated 2 months ago
- LLM Inference analyzer for different hardware platforms ☆55 · Updated 2 weeks ago
- nnScaler: Compiling DNN models for Parallel Training ☆103 · Updated last month
- High-performance Transformer implementation in C++. ☆113 · Updated 2 months ago
- An experimentation platform for LLM inference optimisation ☆29 · Updated 6 months ago
- High-speed GEMV kernels, up to 2.7x speedup over the PyTorch baseline. ☆103 · Updated 8 months ago
- A minimal implementation of vLLM. ☆37 · Updated 8 months ago
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving ☆301 · Updated 9 months ago
- FlexFlow Serve: Low-Latency, High-Performance LLM Serving ☆34 · Updated this week
- Microsoft Collective Communication Library ☆64 · Updated 4 months ago
- [ICLR 2025] TidalDecode: Fast and Accurate LLM Decoding with Position Persistent Sparse Attention ☆31 · Updated this week
- SpotServe: Serving Generative Large Language Models on Preemptible Instances ☆113 · Updated last year
- PyTorch implementation of the paper "Response Length Perception and Sequence Scheduling: An LLM-Empowered LLM Inference Pipeline". ☆85 · Updated last year
- PyTorch bindings for CUTLASS grouped GEMM. ☆77 · Updated 5 months ago
- A large-scale simulation framework for LLM inference ☆356 · Updated 4 months ago
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆333 · Updated last week
- DeepSeek-V3/R1 inference performance simulator ☆89 · Updated last week
- [ICLR'25] Fast Inference of MoE Models with CPU-GPU Orchestration ☆202 · Updated 4 months ago