argonne-lcf / Megatron-DeepSpeed
Ongoing research training transformer language models at scale, including: BERT & GPT-2
☆16, updated 2 weeks ago
Alternatives and similar repositories for Megatron-DeepSpeed
Users interested in Megatron-DeepSpeed are comparing it to the libraries listed below.
- Material for the SC22 Deep Learning at Scale Tutorial (☆41, updated 2 years ago)
- Cosmic Tagging Network for Neutrino Physics (☆13, updated last year)
- Examples for running inference endpoints on various ALCF clusters (☆24, updated last week)
- (☆21, updated 4 years ago)
- SC24 Deep Learning at Scale Tutorial Material (☆33, updated 6 months ago)
- (☆49, updated 2 months ago)
- Collection of small examples for running on ALCF resources (☆19, updated 2 weeks ago)
- (☆43, updated 3 weeks ago)
- ALCF Computational Performance Workshop (☆37, updated 2 years ago)
- "wow, that is really fast." - Kyle Gerard Felker (☆9, updated 3 years ago)
- AI Training Series Material (☆37, updated 10 months ago)
- Benchmark implementation of CosmoFlow in TensorFlow Keras (☆21, updated last year)
- A parallel framework for training deep neural networks (☆63, updated 4 months ago)
- (☆50, updated this week)
- A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate (☆212, updated this week)
- COCCL: Compression and precision co-aware collective communication library (☆24, updated 4 months ago)
- Sparsity support for PyTorch (☆36, updated 4 months ago)
- Guidelines on using Weights and Biases logging for deep learning applications on NERSC machines (☆13, updated 2 years ago)
- LLM checkpointing for DeepSpeed/Megatron (☆19, updated 3 weeks ago)
- Collection of kernels written in the Triton language (☆142, updated 4 months ago)
- Extensible collectives library in Triton (☆88, updated 4 months ago)
- (☆28, updated 6 months ago)
- A repository for an I/O benchmark representing scientific deep learning workloads (☆23, updated 2 years ago)
- JaxPP: a library for JAX that enables flexible MPMD pipeline parallelism for large-scale LLM training (☆52, updated last month)
- Yaksa: High-performance noncontiguous data management (☆13, updated 10 months ago)
- Train across all your devices, ezpz 🍋 (☆23, updated this week)
- The ALCF hosts a regular simulation, data, and learning workshop to help users scale their applications. This repository contains the exa… (☆64, updated 9 months ago)
- Fastest kernels written from scratch (☆310, updated 4 months ago)
- High-performance SGEMM on CUDA devices (☆98, updated 6 months ago)
- (☆131, updated 3 weeks ago)