hpcaitech / SwiftInfer
Efficient AI Inference & Serving
☆458 · Updated 10 months ago
Related projects
Alternatives and complementary repositories for SwiftInfer
- Inferflow is an efficient and highly configurable inference engine for large language models (LLMs). ☆236 · Updated 8 months ago
- The official repo of the Aquila2 series proposed by BAAI, including pretrained & chat large language models. ☆438 · Updated last month
- C++ implementation of Qwen-LM ☆554 · Updated 10 months ago
- LLM Inference benchmark ☆350 · Updated 4 months ago
- Mixture-of-Experts (MoE) Language Model ☆180 · Updated 2 months ago
- A flexible and efficient training framework for large-scale alignment tasks ☆211 · Updated this week
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆137 · Updated 2 months ago
- ☆291 · Updated 4 months ago
- DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models ☆1,015 · Updated 10 months ago
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆547 · Updated last month
- A high-performance inference system for large language models, designed for production environments. ☆394 · Updated this week
- ☆290 · Updated last week
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI. ☆1,127 · Updated 3 months ago
- FlagScale is a large model toolkit based on open-sourced projects. ☆178 · Updated this week
- Yuan 2.0 Large Language Model ☆681 · Updated 4 months ago
- InternEvo is an open-sourced lightweight training framework that aims to support model pre-training without the need for extensive dependencie… ☆310 · Updated this week
- Chinese-Mixtral-8x7B ☆641 · Updated 3 months ago
- ☆216 · Updated last year
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆457 · Updated 8 months ago
- The official codes for "Aurora: Activating Chinese chat capability for Mixtral-8x7B sparse Mixture-of-Experts through Instruction-Tuning" ☆257 · Updated 6 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆124 · Updated 11 months ago
- ☆213 · Updated 6 months ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆126 · Updated 5 months ago
- FlagEval is an evaluation toolkit for AI large foundation models. ☆302 · Updated 4 months ago
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long Context Transformers Model Training and Inference ☆364 · Updated this week
- Official repository for LongChat and LongEval ☆512 · Updated 5 months ago
- BiLLa: A Bilingual LLaMA with Enhanced Reasoning Ability ☆421 · Updated last year
- A throughput-oriented high-performance serving framework for LLMs ☆640 · Updated 2 months ago
- ☆145 · Updated this week
- [EMNLP 2024 Industry Track] This is the official PyTorch implementation of "LLMC: Benchmarking Large Language Model Quantization with a V… ☆324 · Updated this week