ROCm / Megatron-LM
Ongoing research training transformer models at scale
☆34 · Updated 2 weeks ago
Alternatives and similar repositories for Megatron-LM
Users interested in Megatron-LM are comparing it to the libraries listed below.
- Microsoft Collective Communication Library ☆66 · Updated last year
- nnScaler: Compiling DNN models for Parallel Training ☆121 · Updated 3 months ago
- AMD RAD's multi-GPU Triton-based framework for seamless multi-GPU programming ☆143 · Updated this week
- ☆99 · Updated last year
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) devices. Note… ☆63 · Updated 6 months ago
- MAD (Model Automation and Dashboarding) ☆30 · Updated last week
- ☆54 · Updated this week
- ☆152 · Updated last year
- ☆80 · Updated 2 months ago
- Autonomous GPU Kernel Generation via Deep Agents ☆192 · Updated last week
- Extensible collectives library in Triton ☆91 · Updated 8 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆135 · Updated 7 months ago
- Accelerating MoE with IO and Tile-aware Optimizations ☆469 · Updated this week
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆230 · Updated 2 years ago
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5). ☆277 · Updated 5 months ago
- torchcomms: a modern PyTorch communications API ☆314 · Updated this week
- Evaluating Large Language Models for CUDA Code Generation. ComputeEval is a framework designed to generate and evaluate CUDA code from Lar… ☆87 · Updated last month
- Fast and memory-efficient exact attention ☆205 · Updated this week
- A NCCL extension library, designed to efficiently offload GPU memory allocated by the NCCL communication library. ☆74 · Updated last week
- QuickReduce is a performant all-reduce library designed for AMD ROCm that supports inline compression. ☆36 · Updated 4 months ago
- GitHub mirror of the triton-lang/triton repo. ☆111 · Updated this week
- RCCL Performance Benchmark Tests ☆84 · Updated 2 weeks ago
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models. ☆69 · Updated 9 months ago
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆90 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆85 · Updated this week
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters. ☆43 · Updated 3 years ago
- LLM-Inference-Bench ☆56 · Updated 5 months ago
- TritonBench: Benchmarking Large Language Model Capabilities for Generating Triton Operators ☆99 · Updated 6 months ago
- FlexFlow Serve: Low-Latency, High-Performance LLM Serving ☆66 · Updated 3 months ago
- ☆77 · Updated 4 years ago