ROCm / Megatron-LM
Ongoing research training transformer models at scale
☆18 · Updated last week
Alternatives and similar repositories for Megatron-LM:
Users interested in Megatron-LM are comparing it to the libraries listed below.
- RCCL Performance Benchmark Tests ☆60 · Updated 3 weeks ago
- ☆26 · Updated this week
- Microsoft Collective Communication Library ☆64 · Updated 4 months ago
- ☆16 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆70 · Updated this week
- A hierarchical collective communications library with portable optimizations ☆32 · Updated 3 months ago
- DeepSeek-V3/R1 inference performance simulator ☆89 · Updated last week
- NCCL Profiling Kit ☆128 · Updated 9 months ago
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆322 · Updated this week
- ☆60 · Updated 3 months ago
- LLM Inference analyzer for different hardware platforms ☆55 · Updated last week
- AI Tensor Engine for ROCm ☆142 · Updated this week
- ☆90 · Updated 3 weeks ago
- ☆36 · Updated 3 months ago
- LLM-Inference-Bench ☆38 · Updated 2 months ago
- ☆76 · Updated 4 months ago
- nnScaler: Compiling DNN models for Parallel Training ☆103 · Updated last month
- ☆21 · Updated last month
- Ahead-of-Time (AOT) Triton Math Library ☆56 · Updated 2 weeks ago
- Multi-GPU communication profiler and visualizer ☆28 · Updated 9 months ago
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5) ☆243 · Updated 5 months ago
- ☆92 · Updated 11 months ago
- This repository contains the results and code for the MLPerf™ Training v2.0 benchmark ☆28 · Updated last year
- High-speed GEMV kernels with up to 2.7x speedup over the PyTorch baseline ☆103 · Updated 8 months ago
- ☆20 · Updated last week
- Fast and memory-efficient exact attention ☆163 · Updated this week
- CUDA Templates for Linear Algebra Subroutines ☆16 · Updated this week
- Thunder Research Group's Collective Communication Library ☆34 · Updated 11 months ago
- ☆193 · Updated 8 months ago
- A tool for generating information about the matrix multiplication instructions in AMD Radeon™ and AMD Instinct™ accelerators ☆80 · Updated this week