modelscope/dash-infer
DashInfer is a native LLM inference engine that aims to deliver industry-leading performance across hardware architectures, including CUDA GPUs, x86, and ARMv9 CPUs.
☆ 237 · Updated 2 weeks ago
Alternatives and similar repositories for dash-infer:
Users interested in dash-infer are comparing it to the libraries listed below.
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆ 667 · Updated 2 months ago
- LLM inference benchmark. ☆ 404 · Updated 8 months ago
- Compare different hardware platforms via the roofline model for LLM inference tasks. ☆ 93 · Updated last year
- Inferflow is an efficient and highly configurable inference engine for large language models (LLMs). ☆ 238 · Updated last year
- FlagScale is a large-model toolkit built on open-source projects. ☆ 250 · Updated this week
- InternEvo is an open-source, lightweight training framework that aims to support model pre-training without the need for extensive dependencie… ☆ 368 · Updated this week
- Export LLaMA to ONNX. ☆ 115 · Updated 2 months ago
- Mixture-of-Experts (MoE) language model. ☆ 185 · Updated 6 months ago
- Efficient AI inference & serving. ☆ 468 · Updated last year
- [EMNLP 2024 Industry Track] The official PyTorch implementation of "LLMC: Benchmarking Large Language Model Quantization with a V… ☆ 434 · Updated last week
- Transformer-related optimization, including BERT and GPT. ☆ 59 · Updated last year
- llm-export can export LLM models to ONNX. ☆ 271 · Updated 2 months ago
- A flexible and efficient training framework for large-scale alignment tasks. ☆ 333 · Updated last month
- A high-throughput and memory-efficient inference and serving engine for LLMs. ☆ 132 · Updated 3 months ago
- LLM inference service performance testing. ☆ 37 · Updated last year
- Transformer-related optimization, including BERT and GPT. ☆ 39 · Updated 2 years ago
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆ 471 · Updated last year
- A quantization algorithm for LLMs. ☆ 136 · Updated 9 months ago
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for long-context Transformer model training and inference. ☆ 447 · Updated last month
- QQQ is an innovative, hardware-optimized W4A8 quantization solution for LLMs. ☆ 108 · Updated last week
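Several entries above (the roofline-model comparison tool, the inference benchmarks) rely on the roofline model, which bounds attainable throughput by the lesser of peak compute and bandwidth times arithmetic intensity. A minimal sketch with hypothetical accelerator numbers (the 312 TFLOP/s and 2 TB/s figures are illustrative, not tied to any repo above):

```python
def roofline_bound(peak_flops, mem_bandwidth, arithmetic_intensity):
    """Attainable FLOP/s under the roofline model.

    arithmetic_intensity is FLOPs performed per byte moved from memory;
    below the ridge point a kernel is memory-bound, above it compute-bound.
    """
    return min(peak_flops, arithmetic_intensity * mem_bandwidth)

# Hypothetical accelerator: 312 TFLOP/s peak, 2 TB/s memory bandwidth.
PEAK, BW = 312e12, 2e12
ridge = PEAK / BW  # 156 FLOPs/byte needed to saturate compute

# Single-batch LLM decode does roughly one multiply-add per weight read
# (fp16: ~2 FLOPs per 2 bytes ≈ 1 FLOP/byte), far below the ridge point,
# so decode is bandwidth-bound; large-batch prefill can be compute-bound.
decode = roofline_bound(PEAK, BW, 1.0)     # limited by bandwidth
prefill = roofline_bound(PEAK, BW, 300.0)  # limited by peak compute
```

This is why the benchmark repos above report decode throughput scaling with memory bandwidth rather than raw FLOP/s.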
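Several list entries concern LLM quantization (LLMC, QQQ's W4A8 scheme). As a rough illustration of the "W4" half of such schemes, here is a minimal sketch of per-channel symmetric 4-bit weight quantization; the function names and the [-8, 7] integer range are illustrative only, not QQQ's or LLMC's actual algorithm:

```python
def quantize_w4(row):
    """Symmetrically quantize one weight channel to the int4 range [-8, 7]."""
    scale = max(abs(x) for x in row) / 7.0 or 1e-8  # guard all-zero rows
    q = [max(-8, min(7, round(x / scale))) for x in row]
    return q, scale

def dequantize(q, scale):
    """Recover approximate fp weights from int4 codes and the channel scale."""
    return [v * scale for v in q]

row = [0.7, -0.35, 0.14]
q, s = quantize_w4(row)          # q = [7, -4, 1], s ≈ 0.1
approx = dequantize(q, s)        # each value within scale/2 of the original
```

Real W4A8 pipelines add activation quantization to int8 and calibration to pick scales, but the round-and-clip core is the same idea.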