volcengine / veScale
A PyTorch Native LLM Training Framework
☆783 · Updated 3 months ago
Alternatives and similar repositories for veScale:
Users who are interested in veScale are comparing it to the libraries listed below.
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. ☆864 · Updated 2 weeks ago
- A throughput-oriented high-performance serving framework for LLMs ☆794 · Updated 6 months ago
- Disaggregated serving system for Large Language Models (LLMs). ☆550 · Updated last week
- Zero Bubble Pipeline Parallelism ☆382 · Updated last week
- Ring attention implementation with flash attention ☆734 · Updated last week
- Distributed Triton for Parallel Systems