HFAiLab / hai-platform
A high-performance deep learning training platform with task-level time-sharing scheduling of GPU compute
☆611 · Updated last year
Alternatives and similar repositories for hai-platform:
Users interested in hai-platform are comparing it to the libraries listed below
- Community maintained hardware plugin for vLLM on Ascend ☆393 · Updated this week
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆678 · Updated 2 months ago
- FlagScale is a large model toolkit based on open-sourced projects. ☆257 · Updated this week
- The official repo of Pai-Megatron-Patch for LLM & VLM large scale training developed by Alibaba Cloud. ☆964 · Updated this week
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training. ☆267 · Updated 2 years ago
- A flexible and efficient training framework for large-scale alignment tasks ☆333 · Updated last month
- ☆324 · Updated 2 months ago
- Distributed RL System for LLM Reasoning ☆201 · Updated 3 weeks ago
- ☆214 · Updated last year
- A streamlined and customizable framework for efficient large model evaluation and performance benchmarking ☆726 · Updated this week
- GLake: optimizing GPU memory management and IO transmission. ☆451 · Updated last week
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆240 · Updated 3 weeks ago
- DLRover: An Automatic Distributed Deep Learning System ☆1,397 · Updated this week
- High-performance inference framework for large language models, focusing on efficiency, flexibility, and availability. ☆1,053 · Updated this week
- Integrated user interface for use with the HAI Platform ☆48 · Updated last year
- Best practice for training LLaMA models in Megatron-LM ☆646 · Updated last year
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI. ☆2,948 · Updated this week
- A PyTorch Native LLM Training Framework ☆763 · Updated 3 months ago
- ☆159 · Updated this week
- HFAI deep learning models ☆148 · Updated last year
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. ☆811 · Updated 2 weeks ago
- LLM Inference benchmark ☆405 · Updated 8 months ago
- ☆274 · Updated last year
- Disaggregated serving system for Large Language Models (LLMs). ☆529 · Updated 7 months ago
- ☆46 · Updated this week
- Optimized BERT transformer inference on NVIDIA GPU. https://arxiv.org/abs/2210.03052 ☆471 · Updated last year
- ☆31 · Updated 2 years ago
- A highly optimized LLM inference acceleration engine for Llama and its variants. ☆881 · Updated this week
- ☆107 · Updated last year
- InternEvo is an open-sourced lightweight training framework that aims to support model pre-training without the need for extensive dependencie… ☆370 · Updated last week