amorehead / jvp_flash_attention
Flash Attention Triton kernel with support for second-order derivatives
☆121 · Updated this week
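The headline repository provides a Triton flash-attention kernel whose forward pass also propagates Jacobian-vector products (JVPs), the forward-mode primitive needed for second-order derivatives. As a reference-level illustration only (not the repo's Triton kernel, and with hypothetical names), forward-mode differentiation through a naive attention function can be sketched in JAX:

```python
import jax
import jax.numpy as jnp


def attention(q, k, v):
    # Naive (non-flash) scaled dot-product attention for reference.
    scores = q @ k.T / jnp.sqrt(q.shape[-1])
    return jax.nn.softmax(scores, axis=-1) @ v


key = jax.random.PRNGKey(0)
q, k, v = (jax.random.normal(jax.random.fold_in(key, i), (4, 8)) for i in range(3))

# Push a tangent vector through the forward pass: jax.jvp returns both the
# primal output and its directional derivative with respect to q.
tangent_q = jnp.ones_like(q)
out, jvp_out = jax.jvp(lambda q_: attention(q_, k, v), (q,), (tangent_q,))
```

A fused kernel computes this JVP alongside the primal pass instead of re-materializing the attention matrix, which is what makes second-order training objectives practical at flash-attention memory cost.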
Alternatives and similar repositories for jvp_flash_attention
Users interested in jvp_flash_attention are comparing it to the repositories listed below.
- Official implementation of the paper: "ZClip: Adaptive Spike Mitigation for LLM Pre-Training". ☆141 · Updated last month
- Official Jax implementation of MD4 Masked Diffusion Models ☆149 · Updated 9 months ago
- Code accompanying the paper "Generalized Interpolating Discrete Diffusion" ☆110 · Updated 6 months ago
- ☆259 · Updated 6 months ago
- Implementation of a multimodal diffusion transformer in PyTorch ☆107 · Updated last year
- [ICLR 2025 & COLM 2025] Official PyTorch implementation of the Forgetting Transformer and Adaptive Computation Pruning ☆134 · Updated last month
- A general framework for inference-time scaling and steering of diffusion models with arbitrary rewards. ☆201 · Updated 5 months ago
- Tiny re-implementation of MDM in the style of LLaDA and the nano-gpt speedrun ☆56 · Updated 9 months ago
- Explorations into the recently proposed Taylor Series Linear Attention ☆100 · Updated last year
- Attempt to make the multiple residual streams from ByteDance's Hyper-Connections paper accessible to the public ☆94 · Updated 6 months ago
- Implementation of the dynamic chunking mechanism in H-Net by Hwang et al. of Carnegie Mellon ☆65 · Updated 4 months ago
- Official PyTorch implementation and models for the paper "Diffusion Beats Autoregressive in Data-Constrained Settings". We find diffusion mod… ☆115 · Updated last month
- Implementation of the proposed MaskBit from ByteDance AI ☆83 · Updated last year
- Official code for the paper "Think While You Generate: Discrete Diffusion with Planned Denoising" [ICLR 2025] ☆84 · Updated 7 months ago
- Implementation of 2-simplicial attention proposed by Clift et al. (2019) and the recent attempt to make it practical in Fast and Simplex, Ro… ☆46 · Updated 3 months ago
- ☆33 · Updated 11 months ago
- The official GitHub repo for "Diffusion Language Models are Super Data Learners". ☆212 · Updated last month
- Quick implementation of nGPT, learning entirely on the hypersphere, from NVIDIA AI ☆294 · Updated 6 months ago
- ☆170 · Updated 2 months ago
- Explorations into whether a transformer with RL can direct a genetic algorithm to converge faster ☆71 · Updated 7 months ago
- Implementation of the proposed Spline-Based Transformer from Disney Research ☆105 · Updated last year
- Self-contained PyTorch implementation of a Sinkhorn-based router, for mixture of experts or otherwise ☆39 · Updated last year
- Exploration into the proposed "Self Reasoning Tokens" by Felipe Bonetto ☆57 · Updated last year
- Mixture-of-Transformers: A Sparse and Scalable Architecture for Multi-Modal Foundation Models. TMLR 2025. ☆129 · Updated 3 months ago
- [ICML 2025] Roll the dice & look before you leap: Going beyond the creative limits of next-token prediction ☆80 · Updated 6 months ago
- Supporting code for the blog post on modular manifolds. ☆105 · Updated 2 months ago
- Focused on fast experimentation and simplicity ☆76 · Updated 11 months ago
- The Gaussian Histogram Loss (HL-Gauss) proposed by Imani et al., with a few convenient wrappers for regression, in PyTorch ☆68 · Updated last month
- Explorations into improving ViTArc with Slot Attention ☆43 · Updated last year
- FlashRNN - Fast RNN Kernels with I/O Awareness ☆173 · Updated 2 months ago