argonne-lcf / Megatron-DeepSpeed
Ongoing research training transformer language models at scale, including: BERT & GPT-2
☆17 · Updated last week
Alternatives and similar repositories for Megatron-DeepSpeed
Users interested in Megatron-DeepSpeed are comparing it to the libraries listed below.
- Cosmic Tagging Network for Neutrino Physics ☆13 · Updated last year
- Material for the SC22 Deep Learning at Scale Tutorial ☆41 · Updated 2 years ago
- A repository with examples for running inference endpoints on various ALCF clusters ☆26 · Updated last week
- Parallel framework for training and fine-tuning deep neural networks ☆65 · Updated last week
- SC24 Deep Learning at Scale Tutorial Material ☆33 · Updated 8 months ago
- COCCL: Compression and precision co-aware collective communication library ☆27 · Updated 7 months ago
- Sparsity support for PyTorch ☆37 · Updated 7 months ago
- ALCF Computational Performance Workshop ☆38 · Updated 3 years ago
- Anatomy of High-Performance GEMM with Online Fault Tolerance on GPUs ☆12 · Updated 6 months ago
- LLM checkpointing for DeepSpeed/Megatron ☆21 · Updated 2 weeks ago
- ☆21 · Updated 4 years ago
- ☆51 · Updated 5 months ago
- Benchmark implementation of CosmoFlow in TensorFlow Keras ☆21 · Updated last year
- Collection of small examples for running on ALCF resources ☆19 · Updated 3 months ago
- ☆47 · Updated 3 months ago
- MLPerf™ logging library ☆37 · Updated 2 weeks ago
- ☆16 · Updated last year
- Guidelines on using Weights and Biases logging for deep learning applications on NERSC machines ☆13 · Updated 2 years ago
- A hands-on introduction to tuning GPU kernels using Kernel Tuner (https://github.com/KernelTuner/kernel_tuner/) ☆35 · Updated 6 months ago
- ☆28 · Updated 9 months ago
- AI Training Series Material ☆38 · Updated 3 weeks ago
- Reference implementations of MLPerf™ HPC training benchmarks ☆49 · Updated 8 months ago
- An MPI wrapper for the PyTorch tensor library that is automatically differentiable ☆10 · Updated 2 years ago
- Companion software for the Colfax Research paper "Categorical Foundations for CuTe Layouts" ☆71 · Updated last month
- An I/O benchmark representing scientific deep learning workloads ☆23 · Updated 2 years ago
- A tracing infrastructure for heterogeneous computing applications ☆36 · Updated this week
- ☆14 · Updated 2 years ago
- The open-source version of HPL-MXP; performance has been verified on Frontier ☆17 · Updated 3 months ago
- Extensible collectives library in Triton ☆90 · Updated 7 months ago
- AMD RAD's multi-GPU Triton-based framework for seamless multi-GPU programming ☆93 · Updated this week