bentoml / BentoLMDeploy
Self-host LLMs with LMDeploy and BentoML
☆ 18 · Updated last month
Alternatives and similar repositories for BentoLMDeploy
Users interested in BentoLMDeploy are comparing it to the repositories listed below.
- Repository for CPU Kernel Generation for LLM Inference ☆ 26 · Updated last year
- A toolkit for fine-tuning, inferencing, and evaluating GreenBitAI's LLMs. ☆ 83 · Updated 2 months ago
- [ICLR2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆ 116 · Updated 5 months ago
- ☆ 91 · Updated 7 months ago
- Data preparation code for CrystalCoder 7B LLM ☆ 44 · Updated last year
- ☆ 37 · Updated 7 months ago
- From GaLore to WeLore: How Low-Rank Weights Non-uniformly Emerge from Low-Rank Gradients. Ajay Jaiswal, Lu Yin, Zhenyu Zhang, Shiwei Liu,… ☆ 47 · Updated 3 weeks ago
- ☆ 54 · Updated this week
- Repo hosting codes and materials related to speeding LLMs' inference using token merging. ☆ 36 · Updated last year
- Cascade Speculative Drafting ☆ 29 · Updated last year
- ☆ 27 · Updated 2 weeks ago
- ☆ 43 · Updated last year
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … ☆ 59 · Updated 7 months ago
- Work in progress. ☆ 61 · Updated last month
- ☆ 48 · Updated last year
- Repository for Sparse Finetuning of LLMs via modified version of the MosaicML llmfoundry ☆ 41 · Updated last year
- QuIP quantization ☆ 52 · Updated last year
- ☆ 45 · Updated 2 months ago
- Benchmark suite for LLMs from Fireworks.ai ☆ 72 · Updated this week
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆ 116 · Updated 10 months ago
- A repository for research on medium sized language models. ☆ 76 · Updated 11 months ago
- Verifiers for LLM Reinforcement Learning ☆ 18 · Updated 3 weeks ago
- Official implementation of the ICML 2024 paper RoSA (Robust Adaptation) ☆ 41 · Updated last year
- Simple extension on vLLM to help you speed up reasoning models without training. ☆ 149 · Updated last week
- This repository contains the code for the paper: SirLLM: Streaming Infinite Retentive LLM ☆ 57 · Updated 11 months ago
- vLLM adapter for a TGIS-compatible gRPC server. ☆ 27 · Updated this week
- Code repo for "CritiPrefill: A Segment-wise Criticality-based Approach for Prefilling Acceleration in LLMs". ☆ 14 · Updated 7 months ago
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆ 41 · Updated 2 weeks ago
- ☆ 72 · Updated 3 weeks ago
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts ☆ 39 · Updated last year