graphcore-research / unit-scaling
A library for unit scaling in PyTorch
☆125 · Updated 6 months ago
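Unit scaling keeps activations, weights, and gradients close to unit variance by giving each op separate forward- and backward-pass scale factors, which is what lets models train in low precision (e.g. FP8) out of the box. Below is a minimal PyTorch sketch of that core mechanism only; it assumes nothing about the library's actual API, and `_Scaled` and `unit_scaled_linear` are hypothetical names used for illustration.

```python
import torch

class _Scaled(torch.autograd.Function):
    # Multiply by `fwd` in the forward pass and by `bwd` in the backward
    # pass. Separate forward/backward scales are the core unit-scaling
    # trick; this class is illustrative, not the library's API.
    @staticmethod
    def forward(ctx, x, fwd, bwd):
        ctx.bwd = bwd
        return x * fwd

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out * ctx.bwd, None, None

def unit_scaled_linear(x, w):
    # A plain matmul grows activation variance by fan_in (forward) and
    # input-gradient variance by fan_out (backward); dividing by sqrt()
    # of each keeps both near 1.
    fan_in, fan_out = w.shape
    return _Scaled.apply(x @ w, fan_in ** -0.5, fan_out ** -0.5)

x = torch.randn(1024, 512)                     # ~unit-variance input
w = torch.randn(512, 512, requires_grad=True)  # ~unit-variance weight
y = unit_scaled_linear(x, w)
print(y.std())  # ~1.0, instead of ~sqrt(512) for an unscaled matmul
```

The library itself applies this kind of scaling across whole models; see its README for the modules it actually provides.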
Alternatives and similar repositories for unit-scaling
Users interested in unit-scaling are comparing it to the libraries listed below.
- The experimental PyTorch-native float8 training UX ☆222 · Updated 10 months ago
- An experiment in using Tangent to autodiff Triton ☆78 · Updated last year
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8 (a hedged sketch of this workflow follows the list) ☆44 · Updated 10 months ago
- ☆108 · Updated last year
- ☆78 · Updated 10 months ago
- ☆156 · Updated last year
- Accelerated First Order Parallel Associative Scan ☆180 · Updated 9 months ago
- seqax = sequence modeling + JAX ☆155 · Updated last month
- Triton-based implementation of Sparse Mixture of Experts ☆216 · Updated 6 months ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance ☆127 · Updated this week
- Supporting PyTorch FSDP for optimizers ☆79 · Updated 5 months ago
- Extensible collectives library in Triton ☆87 · Updated 2 months ago
- ☆228 · Updated 3 months ago
- Applied AI experiments and examples for PyTorch ☆270 · Updated this week
- Load compute kernels from the Hub ☆139 · Updated this week
- Implementation of a Transformer, but completely in Triton ☆265 · Updated 3 years ago
- ☆144 · Updated 2 years ago
- JAX bindings for Flash Attention v2 ☆88 · Updated 10 months ago
- A simple library for scaling up JAX programs ☆136 · Updated 7 months ago
- LoRA for arbitrary JAX models and functions ☆135 · Updated last year
- ☆53 · Updated this week
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference ☆62 · Updated 4 months ago
- Implementation of Flash Attention in Jax ☆212 · Updated last year
- A MAD laboratory to improve AI architecture designs 🧪 ☆116 · Updated 5 months ago
- A bunch of kernels that might make stuff slower 😉 ☆46 · Updated this week
- The simplest, fastest repository for training/finetuning medium-sized GPTs ☆126 · Updated 3 weeks ago
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆115 · Updated this week
- ☆210 · Updated this week
- ring-attention experiments
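The FP8 demo entry above describes the unit_scaling library's intended workflow: take an ordinary PyTorch model, rewrite it to be unit-scaled, then run it under simulated FP8 numerics. A hedged sketch of that flow, assuming graph-transform helpers named `unit_scale` and `simulate_fp8` live in `unit_scaling.transforms` (names inferred from the demo's description, not verified; consult the demo for the actual API):

```python
import torch
import torch.nn as nn

# Assumed import path and names -- verify against the unit_scaling repo.
from unit_scaling.transforms import simulate_fp8, unit_scale

model = nn.Sequential(nn.Linear(256, 256), nn.GELU(), nn.Linear(256, 16))

model = unit_scale(model)    # rewrite ops to be unit-scaled (assumed helper)
model = simulate_fp8(model)  # simulate FP8 numerics for matmuls (assumed helper)

out = model(torch.randn(32, 256))
out.sum().backward()  # gradients flow through the transformed graph
```

The appeal of this two-step design is that unit scaling is a one-time graph rewrite rather than a per-model retuning of loss scales, which is what "easily adapted to train in FP8" refers to in the demo's description.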