BBuf / megatron-lm-parallel-group-playground
☆16 · Updated last year
Alternatives and similar repositories for megatron-lm-parallel-group-playground
Users interested in megatron-lm-parallel-group-playground are comparing it to the libraries listed below.
- Odysseus: Playground of LLM Sequence Parallelism ☆78 · Updated last year
- ☆79 · Updated last year
- Summary of system papers/frameworks/codes/tools on training or serving large models ☆57 · Updated last year
- ☆97 · Updated 7 months ago
- OneFlow Serving ☆20 · Updated 7 months ago
- ☆12 · Updated 2 years ago
- A Suite for Parallel Inference of Diffusion Transformers (DiTs) on multi-GPU Clusters ☆52 · Updated last year
- A simple calculation for LLM MFU. ☆50 · Updated 2 months ago
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference. ☆45 · Updated 5 months ago
- GPTQ inference TVM kernel ☆39 · Updated last year
- Efficient, Flexible, and Highly Fault-Tolerant Model Service Management Based on SGLang ☆60 · Updated last year
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆137 · Updated last year
- An easy-to-use package for implementing SmoothQuant for LLMs ☆107 · Updated 7 months ago
- Quantized Attention on GPU ☆44 · Updated last year
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆51 · Updated 4 months ago
- ☆50 · Updated 6 months ago
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models ☆23 · Updated last year
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training ☆249 · Updated 3 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆130 · Updated 5 months ago
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios. ☆42 · Updated 8 months ago
- Low-bit optimizers for PyTorch ☆132 · Updated 2 years ago
- ☆109 · Updated 6 months ago
- QQQ is an innovative and hardware-optimized W4A8 quantization solution for LLMs. ☆146 · Updated 3 months ago
- Distributed IO-aware Attention algorithm ☆22 · Updated last month
- A lightweight reinforcement learning framework that integrates seamlessly into your codebase, empowering developers to focus on algorithm… ☆81 · Updated 2 months ago
- ☆130 · Updated 5 months ago
- Triton implementation of Flash Attention 2.0 ☆43 · Updated 2 years ago
- ☆151 · Updated 8 months ago
- Standalone Flash Attention v2 kernel without libtorch dependency ☆112 · Updated last year
- Multiple GEMM operators are constructed with CUTLASS to support LLM inference. ☆20 · Updated 3 months ago