ROCm / Megatron-LM
Ongoing research training transformer models at scale
☆13 · Updated this week
Alternatives and similar repositories for Megatron-LM:
Users who are interested in Megatron-LM are comparing it to the libraries listed below.
- LLM Inference analyzer for different hardware platforms ☆47 · Updated this week
- ☆15 · Updated this week
- RCCL Performance Benchmark Tests ☆55 · Updated 2 weeks ago
- ☆36 · Updated last month
- Microsoft Collective Communication Library ☆61 · Updated 2 months ago
- ☆67 · Updated last month
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆57 · Updated this week
- ☆73 · Updated 2 years ago
- ☆48 · Updated 7 months ago
- Automated Parallelization System and Infrastructure for Multiple Ecosystems ☆77 · Updated 2 months ago
- ☆180 · Updated 6 months ago
- ☆36 · Updated last month
- High-speed GEMV kernels, up to 2.7x speedup over the PyTorch baseline ☆94 · Updated 6 months ago
- ☆18 · Updated 2 months ago
- FlexFlow Serve: Low-Latency, High-Performance LLM Serving ☆17 · Updated this week
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5) ☆232 · Updated 3 months ago
- ☆84 · Updated 9 months ago
- PyTorch bindings for CUTLASS grouped GEMM ☆86 · Updated 3 weeks ago
- PyTorch bindings for CUTLASS grouped GEMM ☆61 · Updated 2 months ago
- A tool for generating information about the matrix multiplication instructions in AMD Radeon™ and AMD Instinct™ accelerators ☆73 · Updated last year
- ☆79 · Updated 4 months ago
- ☆64 · Updated 2 months ago
- Test suite for probing the numerical behavior of NVIDIA tensor cores ☆37 · Updated 6 months ago
- Results and code for the MLPerf™ Training v2.0 benchmark ☆27 · Updated 11 months ago
- nnScaler: Compiling DNN models for Parallel Training ☆87 · Updated 3 weeks ago
- Standalone Flash Attention v2 kernel without libtorch dependency ☆99 · Updated 4 months ago
- TransferBench, a utility for benchmarking simultaneous copies between user-specified devices (CPUs/GPUs) ☆38 · Updated this week
- ☆82 · Updated 2 months ago
- An experimental CPU backend for Triton ☆81 · Updated last week
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters ☆37 · Updated 2 years ago