NVIDIA / jaxpp
JaxPP is a library for JAX that enables flexible MPMD (multiple-program, multiple-data) pipeline parallelism for large-scale LLM training.
☆61 · Updated last week
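For context, MPMD pipeline parallelism splits a model into stages and streams microbatches through them so that different stages can process different microbatches concurrently. Below is a minimal conceptual sketch in plain JAX; it illustrates only the staging idea and does not use the jaxpp API (all function names here are illustrative assumptions).

```python
# Conceptual sketch of pipeline staging in plain JAX. This is NOT the jaxpp
# API; stage1/stage2/pipeline are illustrative names. A real MPMD pipeline
# places each stage on a different device and overlaps microbatches
# (e.g., with 1F1B scheduling); here the stages simply run in sequence.
import jax
import jax.numpy as jnp

def stage1(params, x):
    # First pipeline stage: one linear layer with a tanh nonlinearity.
    return jnp.tanh(x @ params)

def stage2(params, x):
    # Second pipeline stage: output projection.
    return x @ params

def pipeline(p1, p2, microbatches):
    # Feed each microbatch through both stages; with MPMD scheduling,
    # stage2 on microbatch i would overlap with stage1 on microbatch i+1
    # on another device.
    outs = [stage2(p2, stage1(p1, mb)) for mb in microbatches]
    return jnp.concatenate(outs)

key = jax.random.PRNGKey(0)
p1 = jax.random.normal(key, (8, 16))
p2 = jax.random.normal(key, (16, 4))
batch = jax.random.normal(key, (32, 8))
microbatches = jnp.split(batch, 4)  # 4 microbatches of 8 examples each
out = jax.jit(pipeline)(p1, p2, microbatches)
print(out.shape)  # (32, 4)
```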
Alternatives and similar repositories for jaxpp
Users interested in jaxpp are comparing it to the libraries listed below.
- An extensible collectives library in Triton ☆91 · Updated 8 months ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆306 · Updated this week
- ☆99 · Updated last year
- Autonomous GPU Kernel Generation via Deep Agents ☆192 · Updated last week
- ☆268 · Updated last week
- A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate. ☆697 · Updated this week
- A bunch of kernels that might make stuff slower 😉 ☆69 · Updated last week
- PyTorch bindings for CUTLASS grouped GEMM. ☆135 · Updated 7 months ago
- Ship correct and fast LLM kernels to PyTorch ☆127 · Updated last week
- Triton-based Symmetric Memory operators and examples ☆70 · Updated 2 months ago
- TritonParse: A Compiler Tracer, Visualizer, and Reproducer for Triton Kernels ☆179 · Updated this week
- Applied AI experiments and examples for PyTorch ☆311 · Updated 4 months ago
- Helpful kernel tutorials and examples for tile-based GPU programming ☆501 · Updated this week
- TileFusion is an experimental C++ macro kernel template library that raises the level of abstraction in CUDA C for tile processing. ☆104 · Updated 6 months ago
- Ring-attention experiments ☆160 · Updated last year
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning ☆148 · Updated last month
- A collection of kernels written in the Triton language ☆173 · Updated 8 months ago
- Accelerating MoE with IO- and tile-aware optimizations ☆469 · Updated this week
- This repository contains the experimental PyTorch-native float8 training UX ☆227 · Updated last year
- ☆254 · Updated last year
- Nsight Python is a Python kernel-profiling interface based on NVIDIA Nsight Tools ☆77 · Updated last week
- ☆127 · Updated 2 months ago
- 🚀 Collection of components for development, training, tuning, and inference of foundation models, leveraging PyTorch-native components. ☆217 · Updated 2 weeks ago
- A framework that reduces autotuning overhead to zero for well-known deployments. ☆91 · Updated 3 months ago
- ☆152 · Updated last year
- AMD RAD's Triton-based framework for seamless multi-GPU programming ☆143 · Updated this week
- Cataloging released Triton kernels. ☆280 · Updated 3 months ago
- A TORCH_LOGS parser for PT2 ☆70 · Updated last month
- ☆115 · Updated 7 months ago
- ☆115 · Updated last year