bentoml / BentoLMDeploy
Self-host LLMs with LMDeploy and BentoML
☆20 · Updated 2 weeks ago
Alternatives and similar repositories for BentoLMDeploy
Users interested in BentoLMDeploy are comparing it to the repositories listed below.
- Data preparation code for the CrystalCoder 7B LLM ☆45 · Updated last year
- Repository for CPU Kernel Generation for LLM Inference ☆26 · Updated last year
- Repo hosting code and materials on speeding up LLM inference using token merging ☆36 · Updated last year
- A repository for research on medium-sized language models ☆76 · Updated last year
- ☆34 · Updated last month
- From GaLore to WeLore: How Low-Rank Weights Non-uniformly Emerge from Low-Rank Gradients. Ajay Jaiswal, Lu Yin, Zhenyu Zhang, Shiwei Liu,… ☆47 · Updated 2 months ago
- Optimizing causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆53 · Updated 4 months ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · Updated last year
- ☆51 · Updated 7 months ago
- [ACL 2025 Findings] SWE-Dev is an SWE agent with a scalable test-case construction pipeline ☆40 · Updated last week
- ☆47 · Updated 2 weeks ago
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts ☆40 · Updated last year
- Nexusflow function-call, tool-use, and agent benchmarks ☆20 · Updated 6 months ago
- ☆50 · Updated last year
- DPO, but faster 🚀 ☆43 · Updated 6 months ago
- QuIP quantization ☆54 · Updated last year
- A public implementation of the ReLoRA pretraining method, built on Lightning AI's PyTorch Lightning suite ☆33 · Updated last year
- ☆41 · Updated 6 months ago
- Repository for sparse fine-tuning of LLMs via a modified version of the MosaicML llmfoundry ☆42 · Updated last year
- ☆37 · Updated 8 months ago
- Cascade Speculative Drafting ☆29 · Updated last year
- ☆20 · Updated last year
- [ICLR 2025] Breaking the Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆116 · Updated 6 months ago
- Verifiers for LLM reinforcement learning ☆60 · Updated 2 months ago
- ☆16 · Updated 3 months ago
- ☆53 · Updated last year
- Query-agnostic KV cache eviction: 3–4× reduction in memory and 2× decrease in latency (Qwen3/2.5, Gemma3, LLaMA3) ☆85 · Updated 2 weeks ago
- ☆97 · Updated last month
- ☆36 · Updated 2 years ago
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google, in pyTO… ☆55 · Updated 3 weeks ago