vllm-project / vllm-omni
A framework for efficient model inference with omni-modality models
☆466 · Updated this week
Alternatives and similar repositories for vllm-omni
Users interested in vllm-omni are comparing it to the libraries listed below.
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving. ☆498 · Updated this week
- ☆439 · Updated 3 months ago
- Efficient LLM Inference over Long Sequences ☆392 · Updated 5 months ago
- ☆205 · Updated 6 months ago
- A quantization algorithm for LLMs ☆146 · Updated last year
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long-Context Transformer Model Training and Inference ☆605 · Updated last month
- ByteCheckpoint: A Unified Checkpointing Library for LFMs ☆254 · Updated this week
- Model compression toolkit engineered for enhanced usability, comprehensiveness, and efficiency. ☆212 · Updated this week
- KV cache compression for high-throughput LLM inference ☆145 · Updated 9 months ago
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆507 · Updated 9 months ago
- QQQ is an innovative and hardware-optimized W4A8 quantization solution for LLMs. ☆148 · Updated 3 months ago
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLM ☆196 · Updated 2 months ago
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆210 · Updated 2 weeks ago
- FlagScale is a large-model toolkit built on open-source projects. ☆416 · Updated last week
- ☆152 · Updated 8 months ago
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆267 · Updated 3 months ago
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆327 · Updated this week
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… ☆786 · Updated 8 months ago
- ☆97 · Updated 8 months ago
- An industrial extension library for PyTorch to accelerate large-scale model training ☆54 · Updated 3 months ago
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆132 · Updated last year
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization ☆391 · Updated last year
- HuggingFace conversion and training library for Megatron-based models ☆228 · Updated this week
- A throughput-oriented high-performance serving framework for LLMs ☆918 · Updated last month
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training ☆250 · Updated 3 months ago
- ☆324 · Updated 3 weeks ago
- Materials for learning SGLang ☆658 · Updated last week
- A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM ☆132 · Updated last week
- Code for data-aware compression of DeepSeek models ☆64 · Updated 3 weeks ago
- CPM.cu is a lightweight, high-performance CUDA implementation for LLMs, optimized for end-device inference and featuring cutting-edge tec… ☆205 · Updated last month