ROCm / Megatron-LM
Ongoing research training transformer models at scale
☆35 · Updated last week
Alternatives and similar repositories for Megatron-LM
Users interested in Megatron-LM are comparing it to the libraries listed below.
- Microsoft Collective Communication Library ☆66 · Updated last year
- AMD RAD's multi-GPU Triton-based framework for seamless multi-GPU programming ☆164 · Updated last week
- ☆158 · Updated last year
- ☆59 · Updated this week
- extensible collectives library in triton ☆93 · Updated 10 months ago
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU(XPU) device. Note… ☆63 · Updated 7 months ago
- MAD (Model Automation and Dashboarding) ☆31 · Updated 2 weeks ago
- ☆102 · Updated last year
- QuickReduce is a performant all-reduce library designed for AMD ROCm that supports inline compression. ☆36 · Updated 5 months ago
- torchcomms: a modern PyTorch communications API ☆323 · Updated this week
- Autonomous GPU Kernel Generation via Deep Agents ☆223 · Updated this week
- PyTorch bindings for CUTLASS grouped GEMM. ☆141 · Updated 8 months ago
- Evaluating Large Language Models for CUDA Code Generation. ComputeEval is a framework designed to generate and evaluate CUDA code from Lar… ☆94 · Updated 3 weeks ago
- An efficient GPU support for LLM inference with x-bit quantization (e.g. FP6, FP5). ☆276 · Updated 6 months ago
- Accelerating MoE with IO and Tile-aware Optimizations ☆563 · Updated last week
- ☆83 · Updated 3 months ago
- Building the Virtuous Cycle for AI-driven LLM Systems ☆140 · Updated this week
- ☆47 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆113 · Updated this week
- nnScaler: Compiling DNN models for Parallel Training ☆124 · Updated 4 months ago
- AI Tensor Engine for ROCm ☆344 · Updated last week
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning ☆163 · Updated 2 months ago
- A lightweight design for computation-communication overlap. ☆213 · Updated last week
- Best practices for training DeepSeek, Mixtral, Qwen and other MoE models using Megatron Core. ☆158 · Updated last week
- Fast low-bit matmul kernels in Triton ☆424 · Updated this week
- Fast and memory-efficient exact attention ☆213 · Updated this week
- Applied AI experiments and examples for PyTorch ☆314 · Updated 5 months ago
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆222 · Updated last year
- Automated Parallelization System and Infrastructure for Multiple Ecosystems ☆82 · Updated last year
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters. ☆44 · Updated 3 years ago