proger / accelerated-scan
Accelerated First Order Parallel Associative Scan
☆ 163 · Updated 3 months ago
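The repository's title refers to a first-order parallel associative scan: the linear recurrence h_t = a_t * h_{t-1} + x_t, which can be parallelized because the per-step updates compose associatively. As a hedged illustration (a plain-Python sketch of the idea, not the repository's actual Triton kernels), the recurrence can be expressed as a reduction with an associative combine over (a, x) pairs:

```python
from functools import reduce

def combine(left, right):
    # Associative operator for the recurrence h_t = a_t * h_{t-1} + x_t.
    # Applying step (a1, x1) and then (a2, x2) composes to (a2*a1, a2*x1 + x2).
    a1, x1 = left
    a2, x2 = right
    return a2 * a1, a2 * x1 + x2

def sequential_scan(a, x, h0=0.0):
    # Reference: plain left-to-right evaluation of the recurrence.
    out, h = [], h0
    for at, xt in zip(a, x):
        h = at * h + xt
        out.append(h)
    return out

def prefix_scan(a, x):
    # Because `combine` is associative, each prefix can be reduced in any
    # grouping; this is what lets a GPU implementation evaluate the scan
    # as a balanced tree in O(log T) depth instead of T sequential steps.
    pairs = list(zip(a, x))
    return [reduce(combine, pairs[: t + 1])[1] for t in range(len(pairs))]
```

For example, with a = [0.5, 0.9, 0.1] and x = [1.0, 2.0, 3.0], both functions produce the same hidden states; a real accelerated implementation would apply `combine` blockwise and tree-wise rather than re-reducing each prefix.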
Related projects
Alternatives and complementary repositories for accelerated-scan
- Understand and test language model architectures on synthetic tasks. ☆ 162 · Updated 6 months ago
- A library for unit scaling in PyTorch ☆ 105 · Updated 2 weeks ago
- Experiment of using Tangent to autodiff triton ☆ 72 · Updated 9 months ago
- seqax = sequence modeling + JAX ☆ 133 · Updated 4 months ago
- A MAD laboratory to improve AI architecture designs 🧪 ☆ 95 · Updated 6 months ago
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆ 214 · Updated 3 months ago
- JAX bindings for Flash Attention v2 ☆ 79 · Updated 4 months ago
- Efficient optimizers ☆ 79 · Updated this week
- This repository contains the experimental PyTorch native float8 training UX ☆ 211 · Updated 3 months ago
- Triton-based implementation of Sparse Mixture of Experts ☆ 185 · Updated last month
- A simple library for scaling up JAX programs ☆ 127 · Updated 2 weeks ago
- LoRA for arbitrary JAX models and functions ☆ 132 · Updated 8 months ago
- Scalable neural net training via automatic normalization in the modular norm ☆ 121 · Updated 3 months ago
- FlashFFTConv: Efficient Convolutions for Long Sequences with Tensor Cores ☆ 281 · Updated last month
- The simplest, fastest repository for training/finetuning medium-sized GPTs ☆ 84 · Updated last week
- Griffin MQA + Hawk Linear RNN Hybrid ☆ 85 · Updated 6 months ago
- Minimal (400 LOC) implementation of maximal (multi-node, FSDP) GPT training ☆ 113 · Updated 7 months ago
- Scalable and performant data loading ☆ 66 · Updated this week
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton ☆ 483 · Updated 3 weeks ago
- Some preliminary explorations of Mamba's context scaling ☆ 191 · Updated 9 months ago
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆ 112 · Updated 2 months ago