volcengine / veScale
A PyTorch Native LLM Training Framework
☆732 · Updated last month
Alternatives and similar repositories for veScale:
Users interested in veScale are comparing it to the libraries listed below.
- Zero Bubble Pipeline Parallelism ☆336 · Updated last week
- Disaggregated serving system for Large Language Models (LLMs). ☆466 · Updated 6 months ago
- Ring attention implementation with flash attention ☆674 · Updated 2 months ago
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long-Context Transformer Model Training and Inference ☆424 · Updated this week
- GLake: optimizing GPU memory management and IO transmission. ☆431 · Updated 2 months ago
- A throughput-oriented high-performance serving framework for LLMs ☆737 · Updated 5 months ago
- FlashInfer: Kernel Library for LLM Serving ☆2,078 · Updated this week
- A fast communication-overlapping library for tensor parallelism on GPUs. ☆296 · Updated 3 months ago
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆290 · Updated this week
- Efficient and easy multi-instance LLM serving ☆295 · Updated this week
- Microsoft Automatic Mixed Precision Library ☆567 · Updated 4 months ago
- A low-latency & high-throughput serving engine for LLMs ☆312 · Updated 3 weeks ago
- Analyze the inference of Large Language Models (LLMs) in terms of computation, storage, transmission, and the hardware roofline model. ☆392 · Updated 5 months ago
- A large-scale simulation framework for LLM inference ☆325 · Updated 3 months ago
- Latency and Memory Analysis of Transformer Models for Training and Inference ☆388 · Updated 3 months ago
- ☆314 · Updated 10 months ago
- FlagScale is a large-model toolkit built on open-source projects. ☆223 · Updated this week
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆469 · Updated 11 months ago
- FP16xINT4 LLM inference kernel that achieves near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. ☆729 · Updated 5 months ago
- Large Language Model (LLM) Systems Paper List ☆778 · Updated this week
- ☆538 · Updated 5 months ago
- A collection of memory-efficient attention operators implemented in the Triton language. ☆237 · Updated 8 months ago
- Best practice for training LLaMA models in Megatron-LM ☆644 · Updated last year
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆629 · Updated last month
- Materials for learning SGLang ☆265 · Updated 2 weeks ago
- 10x Faster Long-Context LLM by Smart KV Cache Optimizations ☆469 · Updated this week
- Puzzles for learning Triton; play them with minimal environment configuration! ☆229 · Updated 2 months ago
- FlagGems is an operator library for large language models, implemented in the Triton language. ☆420 · Updated this week
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆297 · Updated this week
- QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving ☆496 · Updated this week