omni-ai-npu / omni-infer
Omni_Infer is a suite of inference accelerators designed for the Ascend NPU platform, offering native support and an expanding feature set.
☆101 · Updated last week
Alternatives and similar repositories for omni-infer
Users interested in omni-infer are comparing it to the libraries listed below
- ☆522 · Updated last week
- FlagScale is a large model toolkit based on open-sourced projects. ☆468 · Updated last week
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆274 · Updated 5 months ago
- Materials for learning SGLang ☆728 · Updated 3 weeks ago
- ☆449 · Updated 5 months ago
- Compare different hardware platforms via the Roofline Model for LLM inference tasks (see the roofline sketch after this list). ☆120 · Updated last year
- Efficient and easy multi-instance LLM serving ☆523 · Updated 4 months ago
- GLake: optimizing GPU memory management and IO transmission. ☆497 · Updated 10 months ago
- ☆340 · Updated 3 weeks ago
- [NeurIPS 2024] Efficient LLM Scheduling by Learning to Rank ☆67 · Updated last year
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving. ☆659 · Updated this week
- Disaggregated serving system for Large Language Models (LLMs). ☆772 · Updated 9 months ago
- DeepSeek Native Sparse Attention PyTorch implementation ☆111 · Updated last month
- 青稞Talk ☆189 · Updated last week
- LLM Inference with Deep Learning Accelerator. ☆58 · Updated last year
- ☆155 · Updated 10 months ago
- Efficient, Flexible, and Highly Fault-Tolerant Model Service Management Based on SGLang ☆61 · Updated last year
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆1,031 · Updated last week
- SGLang kernel library for NPU ☆95 · Updated last week
- Stateful LLM Serving ☆95 · Updated 10 months ago
- ☆166 · Updated last month
- Inferflow is an efficient and highly configurable inference engine for large language models (LLMs). ☆251 · Updated last year
- ☆147 · Updated last year
- ByteCheckpoint: A Unified Checkpointing Library for LFMs ☆264 · Updated last month
- A flexible and efficient training framework for large-scale alignment tasks ☆447 · Updated 3 months ago
- A high-performance RL training-inference weight synchronization framework, designed to enable second-level parameter updates from trainin… ☆129 · Updated last month
- Fast and memory-efficient exact attention ☆110 · Updated last week
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆123 · Updated last month
- Accelerate inference without tears ☆372 · Updated last week
- CPM.cu is a lightweight, high-performance CUDA implementation for LLMs, optimized for end-device inference and featuring cutting-edge tec… ☆223 · Updated 2 weeks ago
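
One entry above compares hardware platforms via the Roofline Model for LLM inference. For reference, here is a minimal roofline sketch in plain Python: it estimates whether a single-batch decode step is compute- or memory-bound from a device's peak compute and memory bandwidth. It is not taken from any repository listed here, and the hardware numbers are illustrative placeholders, not vendor specifications.

```python
# Minimal roofline-model sketch: attainable throughput is the lesser of the
# device's peak compute rate and (memory bandwidth x arithmetic intensity).
# All hardware figures below are hypothetical.

def roofline_attainable_tflops(peak_tflops: float, bandwidth_tb_s: float,
                               arithmetic_intensity: float) -> float:
    """Attainable TFLOP/s = min(peak compute, bandwidth * FLOPs-per-byte)."""
    return min(peak_tflops, bandwidth_tb_s * arithmetic_intensity)

def decode_arithmetic_intensity(params_billion: float, bytes_per_param: float,
                                batch_size: int) -> float:
    """Rough FLOPs per byte for one decode step: ~2 * params FLOPs per token,
    while every weight (params * bytes_per_param) is read once per step.
    KV-cache traffic is ignored to keep the sketch short."""
    flops = 2.0 * params_billion * 1e9 * batch_size
    bytes_moved = params_billion * 1e9 * bytes_per_param
    return flops / bytes_moved

if __name__ == "__main__":
    # Hypothetical accelerator: 300 TFLOP/s peak, 1.6 TB/s memory bandwidth.
    peak, bw = 300.0, 1.6
    ai = decode_arithmetic_intensity(params_billion=7, bytes_per_param=2, batch_size=1)
    ridge = peak / bw  # intensity at which the workload becomes compute-bound
    attainable = roofline_attainable_tflops(peak, bw, ai)
    print(f"arithmetic intensity: {ai:.2f} FLOP/byte (ridge point: {ridge:.1f})")
    print(f"attainable: {attainable:.2f} TFLOP/s -> "
          f"{'compute-bound' if ai >= ridge else 'memory-bound'}")
```

With these placeholder numbers, batch-1 decode sits far below the ridge point and lands on the bandwidth roof, which is why the listed comparison tools focus on memory bandwidth as much as peak FLOPs.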