ROCm / Megatron-LM
Ongoing research training transformer models at scale
☆20, updated this week
Alternatives and similar repositories for Megatron-LM
Users interested in Megatron-LM are comparing it to the libraries listed below.
- RCCL Performance Benchmark Tests (☆64, updated this week)
- A high-throughput and memory-efficient inference and serving engine for LLMs (☆76, updated this week)
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) devices. Note… (☆61, updated 2 months ago)
- Microsoft Collective Communication Library (☆65, updated 5 months ago)
- LLM-Inference-Bench (☆40, updated 4 months ago)
- nnScaler: Compiling DNN models for Parallel Training (☆110, updated 2 weeks ago)
- PyTorch bindings for CUTLASS grouped GEMM (☆89, updated 2 weeks ago)
- A hierarchical collective communications library with portable optimizations (☆35, updated 5 months ago)
- A CUTLASS implementation using SYCL (☆22, updated this week)
- A high-throughput and memory-efficient inference and serving engine for LLMs (☆70, updated this week)
- High-speed GEMV kernels, with up to 2.7x speedup over the PyTorch baseline (☆109, updated 10 months ago)
- A PyTorch extension: tools for easy mixed-precision and distributed training in PyTorch (☆22, updated this week)
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5) (☆248, updated 6 months ago)
- PyTorch bindings for CUTLASS grouped GEMM (☆121, updated 4 months ago)
- Fast and memory-efficient exact attention (☆174, updated this week)
- Optimize GEMM with Tensor Cores, step by step (☆26, updated last year)
- LLM inference analyzer for different hardware platforms (☆66, updated 2 weeks ago)
- A lightweight design for computation-communication overlap (☆113, updated last week)
- Applied AI experiments and examples for PyTorch (☆267, updated this week)
- Anatomy of High-Performance GEMM with Online Fault Tolerance on GPUs (☆12, updated last month)