efeslab / Nanoflow
A throughput-oriented high-performance serving framework for LLMs
☆737 · Updated 5 months ago
Alternatives and similar repositories for Nanoflow:
Users interested in Nanoflow are comparing it to the libraries listed below.
- FP16xINT4 LLM inference kernel that achieves near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. ☆729 · Updated 5 months ago
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM. ☆982 · Updated this week
- 10x Faster Long-Context LLM By Smart KV Cache Optimizations. ☆469 · Updated this week
- Dynamic Memory Management for Serving LLMs without PagedAttention. ☆290 · Updated this week
- FlashInfer: Kernel Library for LLM Serving. ☆2,078 · Updated this week
- QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving. ☆496 · Updated this week
- A low-latency & high-throughput serving engine for LLMs. ☆312 · Updated 3 weeks ago
- Disaggregated serving system for Large Language Models (LLMs). ☆466 · Updated 6 months ago
- [NeurIPS'24 Spotlight, ICLR'25] Speeds up long-context LLM inference by computing attention approximately and with dynamic sparsity. ☆917 · Updated last week
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization. ☆332 · Updated 6 months ago
- Fast, Flexible and Portable Structured Generation. ☆704 · Updated this week
- Official Implementation of EAGLE-1 (ICML'24) and EAGLE-2 (EMNLP'24). ☆968 · Updated this week
- Materials for learning SGLang. ☆265 · Updated 2 weeks ago
- Ring attention implementation with flash attention. ☆674 · Updated 2 months ago
- Analyze the inference of Large Language Models (LLMs) across aspects like computation, storage, transmission, and the hardware roofline model. ☆392 · Updated 5 months ago
- Serving multiple LoRA-finetuned LLMs as one. ☆1,028 · Updated 9 months ago
- Production-ready LLM compression/quantization toolkit with accelerated inference support on both CPU and GPU via HF, vLLM, and SGLang. ☆284 · Updated this week
- Latency and Memory Analysis of Transformer Models for Training and Inference. ☆388 · Updated 3 months ago
- The Triton TensorRT-LLM Backend. ☆779 · Updated this week
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long-Context Transformer Model Training and Inference. ☆424 · Updated this week
- LLM KV cache compression made easy. ☆397 · Updated this week
- Efficient and easy multi-instance LLM serving. ☆295 · Updated this week
- ☆314 · Updated 10 months ago
- A large-scale simulation framework for LLM inference. ☆325 · Updated 3 months ago
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment. ☆523 · Updated last week
- A high-throughput and memory-efficient inference and serving engine for LLMs. ☆259 · Updated 4 months ago
- Serverless LLM Serving for Everyone. ☆420 · Updated this week
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving. ☆295 · Updated 7 months ago
- Zero Bubble Pipeline Parallelism. ☆336 · Updated last week
- ☆172 · Updated 4 months ago