ROCm / Megatron-LM
Ongoing research training transformer models at scale
☆36 · Updated this week
Alternatives and similar repositories for Megatron-LM
Users interested in Megatron-LM are comparing it to the libraries listed below.
- Microsoft Collective Communication Library ☆66 · Updated last year
- MAD (Model Automation and Dashboarding) ☆31 · Updated this week
- ☆159 · Updated last year
- QuickReduce is a performant all-reduce library designed for AMD ROCm that supports inline compression. ☆36 · Updated 5 months ago
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) devices. Note… ☆64 · Updated 7 months ago
- AMD RAD's Triton-based framework for seamless multi-GPU programming ☆168 · Updated this week
- ☆104 · Updated last year
- Autonomous GPU Kernel Generation via Deep Agents ☆228 · Updated this week
- nnScaler: Compiling DNN models for Parallel Training ☆124 · Updated 4 months ago
- ☆85 · Updated 3 months ago
- Accelerating MoE with IO and Tile-aware Optimizations ☆569 · Updated 3 weeks ago
- ☆60 · Updated last week
- ☆47 · Updated last year
- AI Tensor Engine for ROCm ☆351 · Updated this week
- torchcomms: a modern PyTorch communications API ☆327 · Updated this week
- PyTorch bindings for CUTLASS grouped GEMM. ☆142 · Updated 8 months ago
- Building the Virtuous Cycle for AI-driven LLM Systems ☆151 · Updated this week
- Extensible collectives library in Triton ☆95 · Updated 10 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆114 · Updated this week
- Fast and memory-efficient exact attention ☆214 · Updated this week
- DeepSeek-V3/R1 inference performance simulator ☆176 · Updated 10 months ago
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5). ☆276 · Updated 6 months ago
- A lightweight design for computation-communication overlap. ☆219 · Updated 3 weeks ago
- Thunder Research Group's Collective Communication Library ☆47 · Updated 7 months ago
- Applied AI experiments and examples for PyTorch ☆315 · Updated 5 months ago
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… ☆263 · Updated this week
- An NCCL extension library designed to efficiently offload GPU memory allocated by the NCCL communication library. ☆90 · Updated last month
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning ☆165 · Updated 2 months ago
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆93 · Updated last year
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models. ☆70 · Updated 10 months ago