CLAIRE-Labo / flash_attention
A basic pure-PyTorch implementation of FlashAttention
☆16 · Updated 3 weeks ago
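The core idea behind FlashAttention is to compute softmax attention block by block with an online (streaming) softmax, so the full seq_len × seq_len score matrix is never materialized at once. Below is a minimal pure-PyTorch sketch of that idea; the function name, single-head (seq_len, head_dim) layout, and block size are illustrative assumptions, not this repository's actual API.

```python
import torch

def flash_attention_sketch(q, k, v, block_size=64):
    """Tiled attention with an online softmax (illustrative sketch).

    q, k, v: (seq_len, head_dim) tensors for a single attention head.
    """
    seq_len, head_dim = q.shape
    scale = head_dim ** -0.5

    out = torch.zeros_like(q)
    # Running row-wise max and softmax normalizer, updated block by block.
    row_max = torch.full((seq_len, 1), float("-inf"), dtype=q.dtype, device=q.device)
    row_sum = torch.zeros((seq_len, 1), dtype=q.dtype, device=q.device)

    for start in range(0, seq_len, block_size):
        k_blk = k[start:start + block_size]            # (B, head_dim)
        v_blk = v[start:start + block_size]            # (B, head_dim)
        scores = (q @ k_blk.T) * scale                 # (seq_len, B)

        blk_max = scores.max(dim=-1, keepdim=True).values
        new_max = torch.maximum(row_max, blk_max)
        # Rescale everything accumulated so far to the new running max,
        # then fold in this block's contribution.
        correction = torch.exp(row_max - new_max)
        p = torch.exp(scores - new_max)
        row_sum = row_sum * correction + p.sum(dim=-1, keepdim=True)
        out = out * correction + p @ v_blk
        row_max = new_max

    return out / row_sum
```

On small random inputs this should agree with `torch.nn.functional.scaled_dot_product_attention(q[None, None], k[None, None], v[None, None])` up to floating-point error.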
Related projects
Alternatives and complementary repositories for flash_attention
- Minimal (400 LOC) implementation, maximal (multi-node, FSDP) GPT training ☆113 · Updated 7 months ago
- Triton implementation of the HyperAttention algorithm ☆46 · Updated 11 months ago
- Collection of autoregressive model implementations ☆67 · Updated this week
- A MAD laboratory to improve AI architecture designs 🧪 ☆95 · Updated 6 months ago
- One Initialization to Rule them All: Fine-tuning via Explained Variance Adaptation ☆29 · Updated last month
- Minimal but scalable implementation of large language models in JAX ☆26 · Updated 2 weeks ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs ☆84 · Updated this week
- LL3M: Large Language and Multi-Modal Model in Jax ☆65 · Updated 6 months ago
- Unofficial but Efficient Implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆79 · Updated 9 months ago
- GoldFinch and other hybrid transformer components ☆39 · Updated 4 months ago
- Explorations into the recently proposed Taylor Series Linear Attention ☆90 · Updated 3 months ago
- Exploration into the proposed "Self Reasoning Tokens" by Felipe Bonetto ☆53 · Updated 6 months ago
- Transformer with Mu-Parameterization, implemented in Jax/Flax. Supports FSDP on TPU pods. ☆29 · Updated 2 weeks ago
- Experiment of using Tangent to autodiff Triton ☆72 · Updated 9 months ago
- σ-GPT: A New Approach to Autoregressive Models ☆59 · Updated 3 months ago
- Using FlexAttention to compute attention with different masking patterns (see the sketch after this list) ☆40 · Updated last month
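The FlexAttention entry above refers to PyTorch's `torch.nn.attention.flex_attention` API (available in recent releases, roughly 2.5+). Below is a minimal sketch of expressing a masking pattern as a `mask_mod`; the causal example and tensor shapes are illustrative assumptions, not taken from that repository.

```python
import torch
from torch.nn.attention.flex_attention import flex_attention, create_block_mask

# A mask_mod returns True where a query position may attend to a key position.
def causal(b, h, q_idx, kv_idx):
    return q_idx >= kv_idx

B, H, S, D = 2, 4, 128, 64
q, k, v = (torch.randn(B, H, S, D) for _ in range(3))

# Turn the masking pattern into a block-sparse mask, then run attention with it.
block_mask = create_block_mask(causal, B=None, H=None, Q_LEN=S, KV_LEN=S)
out = flex_attention(q, k, v, block_mask=block_mask)  # (B, H, S, D)
```

Swapping `causal` for, say, a sliding-window predicate changes the masking pattern without rewriting the attention computation itself.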