axonn-ai / axonn
A parallel framework for training deep neural networks (☆57, updated last week)

Alternatives and similar repositories for axonn
Users interested in axonn are comparing it to the libraries listed below:
- Extensible collectives library in Triton (☆84, updated 6 months ago)
- Sparsity support for PyTorch (☆35, updated this week)
- Experiment of using Tangent to autodiff Triton (☆78, updated last year)
- LLM training in simple, raw C/CUDA (☆92, updated 10 months ago)
- Write a fast kernel and run it on Discord. See how you compare against the best! (☆34, updated this week)
- Collection of kernels written in the Triton language (☆114, updated last month; a minimal example of such a kernel appears after this list)
- A bunch of kernels that might make stuff slower 😉 (☆28, updated last week)
- A library for unit scaling in PyTorch (☆124, updated 4 months ago)
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8 (☆45, updated 8 months ago)
- Applied AI experiments and examples for PyTorch (☆250, updated last week)
- PyTorch bindings for CUTLASS grouped GEMM (☆77, updated 4 months ago)
- Experimental PyTorch native float8 training UX (☆222, updated 7 months ago)
- Make Triton easier (☆47, updated 9 months ago)
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance (☆104, updated this week)
- A minimal implementation of vllm (☆37, updated 8 months ago)
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. By pro… (☆70, updated this week)
- Hydragen: High-Throughput LLM Inference with Shared Prefixes (☆35, updated 10 months ago)
- Personal solutions to the Triton Puzzles (☆18, updated 8 months ago)
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference (☆58, updated 2 months ago)
- Small-scale distributed training of sequential deep learning models, built on NumPy and MPI (☆127, updated last year; a data-parallel training sketch appears after this list)
- Explore training for quantized models (☆17, updated 2 months ago)
- CUDA and Triton implementations of Flash Attention with SoftmaxN (☆68, updated 10 months ago)
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS (☆153, updated this week)
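
Several entries above are collections of Triton kernels. As a point of reference for what such collections contain, here is a minimal vector-add kernel in the style of the official Triton tutorials. It is an illustrative sketch, not code from any repository listed here, and assumes `triton` and `torch` are installed with a CUDA device available.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one contiguous block of elements.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements          # guard the final partial block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

x = torch.rand(4096, device="cuda")
y = torch.rand(4096, device="cuda")
out = torch.empty_like(x)
grid = (triton.cdiv(x.numel(), 1024),)   # one program per 1024-element block
add_kernel[grid](x, y, out, x.numel(), BLOCK_SIZE=1024)
assert torch.allclose(out, x + y)
```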
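For the NumPy + MPI entry, the underlying idea is data-parallel training: each rank computes a gradient on its own shard of the data, and gradients are averaged with an allreduce before the optimizer step. The sketch below illustrates that pattern with `mpi4py` on a toy least-squares problem; it is an assumption-laden example, not that repository's actual API.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each rank generates its own shard of a synthetic regression problem.
rng = np.random.default_rng(seed=rank)
w = np.zeros(8)
X = rng.normal(size=(64, 8))
y = X @ np.arange(8.0) + 0.1 * rng.normal(size=64)

for step in range(100):
    grad = 2.0 * X.T @ (X @ w - y) / len(y)   # local least-squares gradient
    avg = np.empty_like(grad)
    comm.Allreduce(grad, avg, op=MPI.SUM)     # sum gradients across ranks
    w -= 0.01 * (avg / size)                  # average, then take an SGD step

if rank == 0:
    print("learned weights:", np.round(w, 2))

# Run with, e.g.: mpirun -n 4 python train.py
```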