omni-ai-npu / omni-infer
Omni_Infer is a suite of inference accelerators designed for the Ascend NPU platform, offering native support and an expanding feature set.
☆73 · Updated last week
Alternatives and similar repositories for omni-infer
Users interested in omni-infer are comparing it to the libraries listed below.
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including …☆265 · Updated 2 months ago
- FlagScale is a large model toolkit based on open-source projects.☆358 · Updated last week
- ☆503 · Updated last month
- Compare different hardware platforms via the Roofline Model for LLM inference tasks (see the roofline sketch after this list).☆115 · Updated last year
- Triton documentation in Simplified Chinese / Triton 中文文档☆85 · Updated 5 months ago
- 青稞Talk☆148 · Updated 3 weeks ago
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving (see the speculative-decoding sketch after this list).☆417 · Updated this week
- SGLang kernel library for NPU☆59 · Updated 2 weeks ago
- CPM.cu is a lightweight, high-performance CUDA implementation for LLMs, optimized for end-device inference and featuring cutting-edge tec…☆197 · Updated 3 weeks ago
- ☆137 · Updated 3 months ago
- LLM Inference benchmark☆426 · Updated last year
- GLake: optimizing GPU memory management and IO transmission.☆479 · Updated 6 months ago
- Inferflow is an efficient and highly configurable inference engine for large language models (LLMs).☆248 · Updated last year
- Efficient, Flexible, and Highly Fault-Tolerant Model Service Management Based on SGLang☆58 · Updated 11 months ago
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications.☆874 · Updated last week
- [DAC'25] Official implementation of "HybriMoE: Hybrid CPU-GPU Scheduling and Cache Management for Efficient MoE Inference"☆72 · Updated 4 months ago
- Materials for learning SGLang☆597 · Updated last week
- Accelerate inference without tears☆333 · Updated 2 weeks ago
- ☆148 · Updated 7 months ago
- Fast and memory-efficient exact attention☆94 · Updated this week
- ☆430 · Updated 3 weeks ago
- PyTorch implementation of DeepSeek's Native Sparse Attention☆100 · Updated 2 months ago
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation.☆111 · Updated 4 months ago
- Chinese translation of llm-numbers☆124 · Updated last year
- ☆79 · Updated last year
- Efficient and easy multi-instance LLM serving☆494 · Updated last month
- ☆429 · Updated 2 months ago
- Theoretical LLM performance analysis tool supporting parameter-count, FLOPs, memory, and latency analysis (see the estimator sketch after this list).☆108 · Updated 3 months ago
- Disaggregated serving system for Large Language Models (LLMs).☆700 · Updated 6 months ago
- Repo for SpecEE: Accelerating Large Language Model Inference with Speculative Early Exiting (ISCA'25)☆60 · Updated 5 months ago
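
For the Roofline Model entry above, here is a minimal sketch of the bound it applies to LLM inference. The `attainable_tflops` helper and the hardware numbers are illustrative assumptions, not taken from the linked repo:

```python
def attainable_tflops(peak_tflops, peak_bw_tbps, intensity_flops_per_byte):
    """Roofline bound: min(compute roof, memory roof)."""
    return min(peak_tflops, peak_bw_tbps * intensity_flops_per_byte)

# Batch-1 decode of a dense fp16 model touches every weight once per token:
# ~2 FLOPs per parameter, 2 bytes per parameter -> intensity ~ 1 FLOP/byte.
intensity = 2.0 / 2.0

# Illustrative (made-up) hardware points: (name, peak TFLOP/s, peak TB/s).
for name, tflops, bw in [("accel_a", 312.0, 2.0), ("accel_b", 256.0, 1.6)]:
    bound = attainable_tflops(tflops, bw, intensity)
    print(f"{name}: ≈{bound:.1f} attainable TFLOP/s "
          f"({'memory' if bound < tflops else 'compute'}-bound)")
```

Both sample points land far below their compute roofs, which is the usual roofline conclusion for small-batch decode: throughput is set by memory bandwidth, not peak FLOPs.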
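For the speculative decoding entry above, a toy greedy draft-and-verify loop. The `draft`/`target` placeholder functions and the sequential verification are simplifying assumptions (real systems verify all drafted tokens in one target forward pass), since the repo's actual training and serving APIs are not shown in this list:

```python
import random

VOCAB = list(range(50))

def draft(ctx):
    """Cheap proposal model (placeholder): deterministic per context."""
    return random.Random(sum(ctx)).choice(VOCAB)

def target(ctx):
    """Expensive reference model (placeholder): agrees with the draft ~80%."""
    r = random.Random(sum(ctx))
    tok = r.choice(VOCAB)
    return tok if r.random() < 0.8 else (tok + 1) % len(VOCAB)

def speculative_step(ctx, k=4):
    """Draft k tokens, keep the longest prefix the target agrees with,
    plus one token from the target (so progress is always >= 1)."""
    proposal, cur = [], list(ctx)
    for _ in range(k):
        t = draft(cur)
        proposal.append(t)
        cur.append(t)
    accepted, cur = [], list(ctx)
    for t in proposal:
        if target(cur) == t:              # target would emit the same token
            accepted.append(t)
            cur.append(t)
        else:
            accepted.append(target(cur))  # take the target's token and stop
            break
    else:
        accepted.append(target(cur))      # bonus token when all k accepted
    return accepted

ctx = [1, 2, 3]
for _ in range(3):
    step = speculative_step(ctx)
    ctx += step
    print(f"accepted {len(step)} token(s): {step}")
```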
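For the theoretical performance analysis entry above, a back-of-envelope sketch of the quantities such tools estimate. The `DenseConfig` layout, the 2-FLOPs-per-parameter rule of thumb, and the 2 TB/s bandwidth figure are assumptions for illustration, not tied to any real model or device:

```python
from dataclasses import dataclass

@dataclass
class DenseConfig:
    layers: int
    d_model: int
    vocab: int
    ffn_mult: int = 4

    def params(self) -> int:
        attn = 4 * self.d_model ** 2                 # Q, K, V, O projections
        ffn = 2 * self.ffn_mult * self.d_model ** 2  # up + down projections
        return self.layers * (attn + ffn) + self.vocab * self.d_model

cfg = DenseConfig(layers=32, d_model=4096, vocab=128_000)
p = cfg.params()
flops_per_token = 2 * p                      # ~2 FLOPs per parameter per token
weight_bytes = 2 * p                         # fp16 weights
bw = 2.0e12                                  # assumed 2 TB/s memory bandwidth
latency_floor_ms = weight_bytes / bw * 1e3   # batch-1 decode, memory-bound

print(f"params: {p / 1e9:.1f} B")
print(f"decode FLOPs/token: {flops_per_token / 1e9:.1f} GFLOPs")
print(f"weights: {weight_bytes / 1e9:.0f} GB")
print(f"latency floor: {latency_floor_ms:.2f} ms/token at 2 TB/s")
```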