bentoml / BentoLMDeploy
Self-host LLMs with LMDeploy and BentoML
☆21 · Updated 4 months ago
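The repo's description suggests the basic pattern: wrap an LMDeploy inference pipeline inside a BentoML service so the model can be served over HTTP. Below is a minimal sketch, assuming BentoML's 1.2+ service API and LMDeploy's high-level `pipeline`; the model ID and the `LMDeployLLM`/`generate` names are illustrative assumptions, not this repository's actual code:

```python
# Minimal sketch: an LMDeploy pipeline behind a BentoML service.
# Assumptions: BentoML >= 1.2 service API, LMDeploy's `pipeline` helper;
# model ID and class/method names are illustrative, not this repo's code.
import bentoml
from lmdeploy import pipeline


@bentoml.service
class LMDeployLLM:
    def __init__(self) -> None:
        # Load the model once per worker with LMDeploy's pipeline API.
        self.pipe = pipeline("internlm/internlm2-chat-7b")  # hypothetical model choice

    @bentoml.api
    def generate(self, prompt: str) -> str:
        # LMDeploy pipelines take a batch of prompts and return Response
        # objects; return the generated text for the single prompt.
        return self.pipe([prompt])[0].text
```

Served with `bentoml serve`, this exposes `generate` as an HTTP endpoint; the repository itself may differ in model choice and API surface.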
Alternatives and similar repositories for BentoLMDeploy
Users interested in BentoLMDeploy are comparing it to the libraries listed below.
- This repository contains the code for the paper "SirLLM: Streaming Infinite Retentive LLM" ☆60 · Updated last year
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆118 · Updated last year
- ☆60 · Updated 6 months ago
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆131 · Updated 11 months ago
- FuseAI Project ☆87 · Updated 9 months ago
- Repository for sparse fine-tuning of LLMs via a modified version of the MosaicML llmfoundry ☆42 · Updated last year
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆137 · Updated last year
- [ICML 2025] From Low Rank Gradient Subspace Stabilization to Low-Rank Weights: Observations, Theories and Applications ☆51 · Updated 3 weeks ago
- Data preparation code for the Amber 7B LLM ☆93 · Updated last year
- QuIP quantization ☆61 · Updated last year
- ☆38 · Updated last year
- Data preparation code for the CrystalCoder 7B LLM ☆45 · Updated last year
- Easy, Fast, and Scalable Multimodal AI ☆47 · Updated this week
- ☆51 · Updated last year
- ☆46 · Updated 6 months ago
- Repo hosting code and materials on speeding up LLM inference using token merging ☆37 · Updated last month
- Benchmark suite for LLMs from Fireworks.ai ☆83 · Updated last week
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆162 · Updated 7 months ago
- A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM ☆70 · Updated this week
- [NeurIPS 2025] Simple extension on vLLM to help you speed up reasoning models without training ☆206 · Updated 5 months ago
- DPO, but faster 🚀 ☆46 · Updated 11 months ago
- [ICLR 2024] Skeleton-of-Thought: Prompting LLMs for Efficient Parallel Generation ☆180 · Updated last year
- ☆52 · Updated last year
- Repository for CPU Kernel Generation for LLM Inference ☆27 · Updated 2 years ago
- Cascade Speculative Drafting ☆32 · Updated last year
- Layer-Condensed KV cache with 10× larger batch size, fewer parameters, and less computation; dramatic speedup with better task performance… ☆156 · Updated 7 months ago
- Code repo for "CritiPrefill: A Segment-wise Criticality-based Approach for Prefilling Acceleration in LLMs" ☆16 · Updated last year
- ☆100 · Updated last year
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts ☆40 · Updated last year
- KV cache compression for high-throughput LLM inference ☆143 · Updated 9 months ago