BBuf / megatron-lm-parallel-group-playground
☆16 · Updated last year
Alternatives and similar repositories for megatron-lm-parallel-group-playground
Users interested in megatron-lm-parallel-group-playground are comparing it to the libraries listed below.
- Odysseus: Playground of LLM Sequence Parallelism ☆78 · Updated last year
- ☆79 · Updated 2 years ago
- ☆97 · Updated 8 months ago
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆51 · Updated 5 months ago
- A simple calculation for LLM MFU (see the MFU sketch after this list) ☆50 · Updated 3 months ago
- A Suite for Parallel Inference of Diffusion Transformers (DiTs) on multi-GPU Clusters ☆52 · Updated last year
- Decoding Attention is optimized for MHA, MQA, GQA, and MLA, using CUDA cores for the decoding stage of LLM inference ☆46 · Updated 6 months ago
- Summary of system papers/frameworks/codes/tools on training or serving large models ☆57 · Updated last year
- Quantized Attention on GPU ☆44 · Updated last year
- OneFlow Serving ☆20 · Updated 8 months ago
- ☆114 · Updated 6 months ago
- ☆52 · Updated 6 months ago
- Efficient, flexible, and highly fault-tolerant model service management based on SGLang ☆61 · Updated last year
- Performance of the C++ interfaces of FlashAttention and FlashAttention-2 in large language model (LLM) inference scenarios ☆43 · Updated 9 months ago
- ☆153 · Updated 9 months ago
- An easy-to-use package for implementing SmoothQuant for LLMs (see the SmoothQuant sketch after this list) ☆110 · Updated 8 months ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆137 · Updated last year
- ☆12 · Updated 2 years ago
- ☆82 · Updated 7 months ago
- Patches for Hugging Face Transformers to save memory ☆32 · Updated 6 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆16 · Updated last year
- Distributed IO-aware Attention algorithm ☆22 · Updated 2 months ago
- ☆132 · Updated 6 months ago
- ☆65 · Updated 7 months ago
- Kernel Library Wheel for SGLang ☆16 · Updated this week
- GPTQ inference TVM kernel ☆40 · Updated last year
- PyTorch bindings for CUTLASS grouped GEMM ☆132 · Updated 6 months ago
- 🤖 FFPA: Extend FlashAttention-2 with Split-D, ~O(1) SRAM complexity for large headdim, 1.8x~3x↑ 🎉 vs SDPA EA ☆235 · Updated 3 weeks ago
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training ☆251 · Updated 4 months ago
- Estimate MFU for DeepSeekV3 ☆26 · Updated 11 months ago
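
For the two MFU entries above, a minimal sketch of what such a calculator computes, assuming the common ~6·N FLOPs-per-parameter-per-token training approximation (forward + backward, attention FLOPs ignored). This is not the code of either listed repo; all numbers in the usage line are illustrative assumptions.

```python
def training_mfu(n_params: float, tokens_per_iter: float,
                 iter_time_s: float, peak_flops_per_s: float) -> float:
    """Model FLOPs Utilization: achieved model FLOPs/s over hardware peak.

    Uses the standard ~6 * n_params FLOPs per token estimate for one
    training step (forward + backward), ignoring attention FLOPs.
    """
    model_flops = 6.0 * n_params * tokens_per_iter
    return model_flops / iter_time_s / peak_flops_per_s

# Illustrative (assumed) numbers: a 7B-parameter model, 4M tokens per
# iteration, 12 s per iteration, on 64 GPUs at ~989 TFLOPs BF16 peak each.
print(training_mfu(7e9, 4e6, 12.0, 64 * 989e12))  # ~0.22
```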
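
For the SmoothQuant entry above, a minimal sketch of the core idea from the SmoothQuant paper: per-input-channel scales s_j = max|X_j|^α / max|W_j|^(1−α) migrate quantization difficulty from activations to weights before INT8 quantization. This is not the listed package's API; the function name and signature here are assumptions.

```python
import torch

def smoothquant_scales(act_absmax: torch.Tensor, weight: torch.Tensor,
                       alpha: float = 0.5) -> torch.Tensor:
    """Per-input-channel smoothing scales s_j = max|X_j|^a / max|W_j|^(1-a).

    act_absmax: per-channel max |activation| from calibration, shape [in_features].
    weight: linear-layer weight, shape [out_features, in_features].
    """
    w_absmax = weight.abs().amax(dim=0).clamp(min=1e-5)  # per input channel
    s = act_absmax.pow(alpha) / w_absmax.pow(1.0 - alpha)
    return s.clamp(min=1e-5)

# Folding: divide activations by s (absorbed into the preceding LayerNorm's
# parameters) and multiply weight columns by s, so that
# (X / s) @ (W * s).T == X @ W.T, while both factors become easier to quantize.
```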