axonn-ai / axonn
Parallel framework for training and fine-tuning deep neural networks
☆71 · Updated last month
Alternatives and similar repositories for axonn
Users interested in axonn are comparing it to the libraries listed below.
- Extensible collectives library in Triton ☆91 · Updated 9 months ago
- Ship correct and fast LLM kernels to PyTorch ☆127 · Updated 2 weeks ago
- Triton-based Symmetric Memory operators and examples ☆72 · Updated 2 months ago
- ring-attention experiments ☆160 · Updated last year
- ☆115 · Updated last year
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆217 · Updated 3 weeks ago
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆65 · Updated last week
- A bunch of kernels that might make stuff slower 😉 ☆72 · Updated this week
- Hand-Rolled GPU communications library ☆76 · Updated last month
- ☆71 · Updated 9 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆135 · Updated 7 months ago
- Applied AI experiments and examples for PyTorch ☆312 · Updated 4 months ago
- This repository contains the experimental PyTorch native float8 training UX ☆227 · Updated last year
- The evaluation framework for training-free sparse attention in LLMs ☆108 · Updated 2 months ago
- Hydragen: High-Throughput LLM Inference with Shared Prefixes ☆45 · Updated last year
- Triton-based implementation of Sparse Mixture of Experts. ☆259 · Updated 3 months ago
- Collection of kernels written in the Triton language ☆174 · Updated 8 months ago
- Small-scale distributed training of sequential deep learning models, built on NumPy and MPI. ☆153 · Updated 2 years ago
- ☆99 · Updated last year
- Experiment of using Tangent to autodiff Triton ☆81 · Updated last year
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆131 · Updated last year
- A Python library that transfers PyTorch tensors between CPU and NVMe ☆123 · Updated last year
- Framework to reduce autotune overhead to zero for well-known deployments. ☆91 · Updated 3 months ago
- JaxPP is a library for JAX that enables flexible MPMD pipeline parallelism for large-scale LLM training ☆61 · Updated 2 weeks ago
- ☆15 · Updated 5 months ago
- LM engine is a library for pretraining/fine-tuning LLMs ☆102 · Updated this week
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind… ☆162 · Updated last week
- ☆133 · Updated 7 months ago
- ☆269 · Updated this week
- train with kittens! ☆63 · Updated last year