☆126 · Updated Mar 17, 2024
Alternatives and similar repositories for llm-continuous-batching-benchmarks
Users interested in llm-continuous-batching-benchmarks are comparing it to the libraries listed below.
- ☆12 · Updated Sep 1, 2023
- A low-latency & high-throughput serving engine for LLMs · ☆482 · Updated Jan 8, 2026
- Benchmark for machine learning model online serving (LLM, embedding, Stable-Diffusion, Whisper) · ☆28 · Updated Jun 28, 2023
- ☆17 · Updated Mar 28, 2022
- Official GitHub repository for the paper "Towards timeout-less transport in commodity datacenter networks" · ☆16 · Updated Oct 12, 2021
- ☆42 · Updated Sep 8, 2023
- A large-scale simulation framework for LLM inference · ☆547 · Updated Jul 25, 2025
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization · ☆713 · Updated Aug 13, 2024
- Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training · ☆1,864 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs · ☆12 · Updated Nov 14, 2025
- Some microbenchmarks and design docs before commencement · ☆12 · Updated Feb 1, 2021
- ☆39 · Updated Oct 3, 2022
- ☆87 · Updated Jun 2, 2022
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalabili… · ☆3,945 · Updated this week
- Torch Distributed Experimental · ☆117 · Updated Aug 5, 2024
- [COLM 2024] SKVQ: Sliding-window Key and Value Cache Quantization for Large Language Models · ☆24 · Updated Oct 5, 2024
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models · ☆1,621 · Updated Jul 12, 2024
- [NeurIPS'23] H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models · ☆506 · Updated Aug 1, 2024
- Implementation of TSM2L and TSM2R: High-Performance Tall-and-Skinny Matrix-Matrix Multiplication Algorithms for CUDA · ☆35 · Updated Jul 28, 2020
- GPTQ inference Triton kernel · ☆321 · Updated May 18, 2023
- Manually implemented quantization-aware training · ☆23 · Updated Oct 12, 2022
- Seldon Core Operator for Kubernetes · ☆13 · Updated Nov 5, 2019
- ☆10 · Updated Oct 7, 2019
- Various test models in WNNX format; they can be viewed with `pip install wnetron && wnetron` · ☆12 · Updated Jun 22, 2022
- ☆11 · Updated Apr 3, 2023
- FPGA-based HyperLogLog Accelerator · ☆12 · Updated Jul 13, 2020
- Serving multiple LoRA-finetuned LLMs as one · ☆1,145 · Updated May 8, 2024
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models · ☆23 · Updated Mar 15, 2024
- RayLLM: LLMs on Ray (archived; read the README for more info) · ☆1,267 · Updated Mar 13, 2025
- Demonstration that finetuning a RoPE model on longer sequences than it was pre-trained on extends the model's context limit · ☆63 · Updated Jun 21, 2023
- Transformer-related optimization, including BERT and GPT · ☆6,397 · Updated Mar 27, 2024
- Large Context Attention · ☆769 · Updated Oct 13, 2025
- Elastic deep learning training on Kubernetes, leveraging EDL and Volcano · ☆32 · Updated May 19, 2023
- Research and development for optimizing transformers · ☆131 · Updated Feb 16, 2021
- We present a set of all-reduce-compatible gradient compression algorithms which significantly reduce the communication overhead while mai… · ☆10 · Updated Nov 14, 2021
- Studying GPU multi-tenancy · ☆11 · Updated Jan 11, 2019
- ☆11 · Updated Jun 29, 2021
- ☆12 · Updated Aug 13, 2022
- Lightning In-Memory Object Store · ☆46 · Updated Jan 22, 2022