bentoml / BentoLMDeploy
Self-host LLMs with LMDeploy and BentoML
☆18 · Updated 2 weeks ago
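BentoLMDeploy packages an LMDeploy inference engine behind a BentoML service. As a rough illustration of what that pairing looks like, here is a minimal sketch, not the repo's actual code: it assumes BentoML's 1.2+ `@bentoml.service` API and LMDeploy's `pipeline` interface, and the model ID and service name are placeholders.

```python
# Minimal sketch (assumptions noted above): an LMDeploy pipeline
# wrapped in a BentoML 1.2+ service. The model ID is a placeholder.
import bentoml
from lmdeploy import pipeline


@bentoml.service(resources={"gpu": 1})
class LMDeployService:
    def __init__(self) -> None:
        # Load the model once per worker; LMDeploy selects a backend
        # engine (TurboMind or PyTorch) based on the model format.
        self.pipe = pipeline("internlm/internlm2-chat-7b")  # placeholder

    @bentoml.api
    def generate(self, prompt: str) -> str:
        # The pipeline accepts a batch of prompts and returns Response
        # objects; .text holds the generated completion.
        return self.pipe([prompt])[0].text
```

Saved as `service.py`, this could be served locally with `bentoml serve service:LMDeployService`; see the repo itself for the maintained implementation.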
Alternatives and similar repositories for BentoLMDeploy:
Users interested in BentoLMDeploy are comparing it to the libraries listed below.
- From GaLore to WeLore: How Low-Rank Weights Non-uniformly Emerge from Low-Rank Gradients. Ajay Jaiswal, Lu Yin, Zhenyu Zhang, Shiwei Liu, … ☆44 · Updated 8 months ago
- A toolkit for fine-tuning, running inference with, and evaluating GreenBitAI's LLMs. ☆79 · Updated 2 weeks ago
- Cascade Speculative Drafting ☆29 · Updated 11 months ago
- Code and materials on speeding up LLM inference via token merging. ☆35 · Updated 11 months ago
- Sparse fine-tuning of LLMs via a modified version of MosaicML's llm-foundry. ☆40 · Updated last year
- [ICLR 2025] Breaking the Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆111 · Updated 3 months ago
- CPU kernel generation for LLM inference. ☆25 · Updated last year
- Data preparation code for the CrystalCoder 7B LLM. ☆44 · Updated 10 months ago
- ☆37 · Updated 5 months ago
- ☆50 · Updated 5 months ago
- ☆80 · Updated last month
- Train, tune, and run inference with the Bamba model. ☆87 · Updated 2 months ago
- Work in progress. ☆50 · Updated 2 weeks ago
- ☆53 · Updated 10 months ago
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆116 · Updated 9 months ago
- QuIP quantization ☆52 · Updated last year
- Data preparation code for the Amber 7B LLM. ☆86 · Updated 10 months ago
- ☆67 · Updated last week
- A repository for research on medium-sized language models. ☆76 · Updated 10 months ago
- ☆46 · Updated last year
- DPO, but faster 🚀 ☆40 · Updated 3 months ago
- ☆39 · Updated last month
- Pre-training code for the CrystalCoder 7B LLM. ☆54 · Updated 10 months ago
- A simple extension on top of vLLM that speeds up reasoning models without training. ☆139 · Updated 3 weeks ago
- ☆45 · Updated 9 months ago
- ☆20 · Updated 9 months ago
- My implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated ☆31 · Updated 7 months ago
- FuseAI Project ☆84 · Updated 2 months ago
- Code for the paper "SirLLM: Streaming Infinite Retentive LLM" ☆57 · Updated 10 months ago
- Ouroboros: Speculative Decoding with Large Model Enhanced Drafting (EMNLP 2024 main) ☆90 · Updated last week