graphcore-research / unit-scaling
A library for unit scaling in PyTorch
☆129 · Updated last month
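Unit scaling, briefly: design the model so that activations, weights, and gradients all have roughly unit variance at initialisation, which lets low-precision formats such as FP8 be used out of the box. Below is a minimal, illustrative sketch of the idea in plain PyTorch; `UnitLinear` and `_ScaledGrad` are hypothetical names, not the library's API, and the real library covers many more ops (including weight-gradient scaling).

```python
import math
import torch

class _ScaledGrad(torch.autograd.Function):
    """Applies one fixed scale in the forward pass and another to the gradient."""

    @staticmethod
    def forward(ctx, x, fwd_scale, bwd_scale):
        ctx.bwd_scale = bwd_scale
        return x * fwd_scale

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out * ctx.bwd_scale, None, None

class UnitLinear(torch.nn.Module):
    """Hypothetical unit-scaled linear layer (illustrative, not the library's API)."""

    def __init__(self, fan_in: int, fan_out: int):
        super().__init__()
        # Unit-variance initialisation: the usual 1/sqrt(fan_in) factor is moved
        # out of the weights and applied as a fixed constant instead.
        self.weight = torch.nn.Parameter(torch.randn(fan_out, fan_in))
        self.fan_in, self.fan_out = fan_in, fan_out

    def forward(self, x):
        y = x @ self.weight.T
        # 1/sqrt(fan_in) keeps activations near unit variance in the forward pass;
        # 1/sqrt(fan_out) does the same for the gradient flowing back to x.
        return _ScaledGrad.apply(y, 1 / math.sqrt(self.fan_in), 1 / math.sqrt(self.fan_out))
```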
Alternatives and similar repositories for unit-scaling
Users interested in unit-scaling are comparing it to the libraries listed below:
- Experiment using Tangent to autodiff Triton ☆80 · Updated last year
- ☆118 · Updated last year
- This repository contains the experimental PyTorch native float8 training UX ☆224 · Updated last year
- Accelerated First Order Parallel Associative Scan ☆187 · Updated last year
- ☆87 · Updated last year
- Supports PyTorch FSDP for optimizers ☆84 · Updated 8 months ago
- JAX bindings for Flash Attention v2 ☆90 · Updated 3 weeks ago
- A simple library for scaling up JAX programs ☆143 · Updated 9 months ago
- ☆148 · Updated 2 years ago
- Implementation of a Transformer, but completely in Triton ☆273 · Updated 3 years ago
- ☆233 · Updated 6 months ago
- 🧱 Modula software package ☆222 · Updated 3 weeks ago
- Implementation of Flash Attention in Jax ☆216 · Updated last year
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆240 · Updated 2 months ago
- A MAD laboratory to improve AI architecture designs 🧪 ☆124 · Updated 8 months ago
- Efficient optimizers ☆254 · Updated 2 weeks ago
- seqax = sequence modeling + JAX ☆166 · Updated last month
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8 (a minimal FP8 round-trip sketch follows this list). ☆46 · Updated last year
- LoRA for arbitrary JAX models and functions ☆141 · Updated last year
- ☆324 · Updated 3 weeks ago
- Minimal (400 LOC) implementation, maximum (multi-node, FSDP) GPT training ☆130 · Updated last year
- ☆188 · Updated 3 weeks ago
- JAX implementation of the Llama 2 model ☆219 · Updated last year
- FlashFFTConv: Efficient Convolutions for Long Sequences with Tensor Cores ☆327 · Updated 7 months ago
- If it quacks like a tensor... ☆58 · Updated 9 months ago
- ☆56 · Updated 10 months ago
- Triton-based implementation of Sparse Mixture of Experts. ☆233 · Updated 8 months ago
- Understand and test language model architectures on synthetic tasks. ☆221 · Updated last month
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆152 · Updated last month
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆260 · Updated 3 weeks ago
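Several entries above (the PyTorch native float8 training UX and the unit_scaling FP8 demo) share one core mechanic: round-tripping tensors through an 8-bit float format. Below is a minimal "fake quantisation" sketch using PyTorch's native `torch.float8_e4m3fn` dtype (available since PyTorch 2.1); `quantize_fp8` is an illustrative helper, not an API from any repository listed here.

```python
import torch

def quantize_fp8(x: torch.Tensor) -> torch.Tensor:
    """Round-trip a tensor through float8_e4m3fn to simulate FP8 precision."""
    # Scale into e4m3's representable range (max magnitude ~448) before casting
    # down, then cast back up and undo the scaling.
    scale = x.abs().max().clamp(min=1e-12) / 448.0
    x8 = (x / scale).to(torch.float8_e4m3fn)
    return x8.to(x.dtype) * scale
```

FP8 training then largely amounts to inserting such casts around matmuls while keeping master weights and optimiser state in higher precision.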