bentoml / BentoLMDeploy
Self-host LLMs with LMDeploy and BentoML
☆22 · Updated 3 weeks ago
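For context, a minimal sketch of what serving a model this way might look like, assuming BentoML's 1.2-style service API and LMDeploy's `pipeline` entry point; the service name and model ID are illustrative placeholders, not taken from this repository:

```python
# Minimal sketch: an LMDeploy pipeline wrapped in a BentoML service.
# The model ID below is a placeholder, not necessarily what
# BentoLMDeploy ships with.
import bentoml
from lmdeploy import pipeline


@bentoml.service(resources={"gpu": 1})
class LMDeployService:
    def __init__(self) -> None:
        # Load the model once per worker process.
        self.pipe = pipeline("internlm/internlm2-chat-7b")

    @bentoml.api
    def generate(self, prompt: str) -> str:
        # LMDeploy's pipeline takes a batch of prompts and returns
        # Response objects exposing the generated text.
        return self.pipe([prompt])[0].text
```

Served locally with `bentoml serve`, this would expose `generate` as an HTTP endpoint.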
Alternatives and similar repositories for BentoLMDeploy
Users interested in BentoLMDeploy are comparing it to the libraries listed below.
- ☆64 · Updated 8 months ago
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆137 · Updated last year
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆118 · Updated last year
- Data preparation code for CrystalCoder 7B LLM ☆45 · Updated last year
- FuseAI Project ☆88 · Updated last year
- [ICLR 2024] Skeleton-of-Thought: Prompting LLMs for Efficient Parallel Generation ☆184 · Updated last year
- [NeurIPS 2025] A simple extension to vLLM that speeds up reasoning models without training. ☆218 · Updated 7 months ago
- [ICML 2025] From Low Rank Gradient Subspace Stabilization to Low-Rank Weights: Observations, Theories and Applications ☆52 · Updated 2 months ago
- LLM Serving Performance Evaluation Harness ☆82 · Updated 11 months ago
- ☆102 · Updated last year
- ☆47 · Updated 8 months ago
- Repository for CPU Kernel Generation for LLM Inference ☆27 · Updated 2 years ago
- Code for the paper "SirLLM: Streaming Infinite Retentive LLM" ☆60 · Updated last year
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆163 · Updated 9 months ago
- Cascade Speculative Drafting ☆32 · Updated last year
- KV cache compression for high-throughput LLM inference ☆150 · Updated 11 months ago
- Easy, Fast, and Scalable Multimodal AI ☆97 · Updated last week
- ☆92 · Updated last year
- ☆204 · Updated last year
- Repo hosting code and materials on speeding up LLM inference with token merging. ☆37 · Updated 3 months ago
- ☆54 · Updated last year
- ☆38 · Updated last year
- ☆85 · Updated 2 months ago
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … ☆60 · Updated last year
- Data preparation code for Amber 7B LLM ☆94 · Updated last year
- QuIP quantization ☆61 · Updated last year
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆176 · Updated last year
- The code for the paper "ROUTERBENCH: A Benchmark for Multi-LLM Routing System" ☆153 · Updated last year
- A pipeline for LLM knowledge distillation ☆112 · Updated 9 months ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆138 · Updated last year