modelscope / dash-infer
DashInfer is a native LLM inference engine that aims to deliver industry-leading performance across hardware architectures, including CUDA, x86, and ARMv9.
☆253 · Updated this week
Alternatives and similar repositories for dash-infer
Users interested in dash-infer are comparing it to the libraries listed below.
- FlagScale is a large-model toolkit built on open-source projects. ☆280 · Updated this week
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆777 · Updated 2 weeks ago
- Inferflow is an efficient and highly configurable inference engine for large language models (LLMs). ☆242 · Updated last year
- LLM inference benchmark. ☆419 · Updated 10 months ago
- InternEvo is an open-source, lightweight training framework that aims to support model pre-training without extensive dependencies. ☆389 · Updated this week
- Compare different hardware platforms via the roofline model for LLM inference tasks (see the roofline sketch after this list). ☆100 · Updated last year
- Mixture-of-Experts (MoE) language model. ☆188 · Updated 8 months ago
- A flexible and efficient training framework for large-scale alignment tasks. ☆364 · Updated this week
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆473 · Updated last year
- Transformer-related optimizations, including BERT and GPT. ☆39 · Updated 2 years ago
- Export LLaMA to ONNX. ☆124 · Updated 5 months ago
- USP: Unified (a.k.a. Hybrid, 2D) sequence-parallel attention for long-context transformer model training and inference. ☆506 · Updated this week
- [EMNLP 2024 Industry Track] The official PyTorch implementation of "LLMC: Benchmarking Large Language Model Quantization with a Versatile Compression Toolkit". ☆476 · Updated this week
- Transformer-related optimizations, including BERT and GPT. ☆59 · Updated last year
- A quantization algorithm for LLMs. ☆141 · Updated 11 months ago
- Ascend PyTorch adapter (torch_npu). Mirror of https://gitee.com/ascend/pytorch ☆369 · Updated this week
- llm-export can export LLM models to ONNX. ☆292 · Updated 4 months ago
- Efficient AI inference & serving. ☆469 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs. ☆136 · Updated 5 months ago
- A collection of memory-efficient attention operators implemented in the Triton language. ☆270 · Updated 11 months ago
- ☆85Updated 2 months ago
- QQQ is an innovative and hardware-optimized W4A8 quantization solution for LLMs (core idea sketched just below). ☆124 · Updated last month
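For the W4A8 entry above, the following minimal NumPy sketch shows the core arithmetic behind 4-bit weights with 8-bit activations: quantize both operands symmetrically, run the matmul in integers, and apply the scales once at the end. This is generic quantization math, not QQQ's actual algorithm; the tensor shapes and per-channel granularity are illustrative assumptions.

```python
import numpy as np

def quantize_sym(x: np.ndarray, n_bits: int, axis=None):
    """Symmetric quantization: q = round(x / scale), scale taken from max |x|."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.abs(x).max(axis=axis, keepdims=True) / qmax
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int32)
    return q, scale

rng = np.random.default_rng(0)
w = rng.standard_normal((128, 256)).astype(np.float32)  # weights     [in, out]
a = rng.standard_normal((4, 128)).astype(np.float32)    # activations [batch, in]

qw, sw = quantize_sym(w, n_bits=4, axis=0)  # W4: per-output-channel scales
qa, sa = quantize_sym(a, n_bits=8)          # A8: one per-tensor scale

# The matmul accumulates entirely in int32; scales are applied once at the end.
y_int = qa @ qw
y_hat = y_int.astype(np.float32) * sa * sw
err = np.abs(y_hat - a @ w).max()
print(f"max abs error vs. float matmul: {err:.3f}")
```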
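As a quick illustration of the roofline comparison mentioned in the list, the sketch below bounds attainable throughput by min(peak FLOP/s, bandwidth × arithmetic intensity) for single-batch LLM decode. It is not code from that repository; the A100 figures are public datasheet numbers, while the model size, dtype, and CPU figures are assumptions for illustration.

```python
def attainable_flops(peak_flops: float, bandwidth: float, intensity: float) -> float:
    """Roofline: attainable FLOP/s = min(peak compute, bandwidth * arithmetic intensity)."""
    return min(peak_flops, bandwidth * intensity)

def decode_tokens_per_sec(n_params: float, bytes_per_param: float,
                          peak_flops: float, bandwidth: float) -> float:
    # At batch size 1, decoding one token does ~2 FLOPs per parameter
    # (one multiply + one add) and streams every weight from memory once,
    # so arithmetic intensity is roughly 2 / bytes_per_param FLOPs per byte.
    flops_per_token = 2.0 * n_params
    intensity = 2.0 / bytes_per_param
    return attainable_flops(peak_flops, bandwidth, intensity) / flops_per_token

# (peak FLOP/s, memory bytes/s); the CPU row is a rough placeholder assumption.
platforms = {
    "NVIDIA A100-80GB (FP16)": (312e12, 2.0e12),
    "Generic server CPU (BF16)": (60e12, 0.3e12),
}
for name, (flops, bw) in platforms.items():
    tps = decode_tokens_per_sec(n_params=7e9, bytes_per_param=2,
                                peak_flops=flops, bandwidth=bw)
    print(f"{name}: upper bound ~{tps:.0f} tokens/s at batch 1")
```

On these numbers the memory roof, not the compute roof, sets the bound on both platforms, which is why single-batch decode throughput tracks memory bandwidth rather than peak FLOP/s.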