kyutai-labs / jax-flash-attn3
JAX bindings for the flash-attention3 kernels
☆19 · Updated 2 weeks ago
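For orientation only: the quantity any FlashAttention-3 binding computes is standard scaled dot-product attention. The sketch below is a naive, unfused JAX reference (the name `reference_attention` is illustrative, not this library's API), useful as a correctness baseline when comparing against fused kernels.

```python
import jax
import jax.numpy as jnp

def reference_attention(q, k, v):
    """Naive scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.

    q, k, v: arrays of shape [batch, heads, seq_len, head_dim].
    Materializes the full [seq, seq] score matrix, which is exactly
    the memory traffic that flash-attention kernels avoid.
    """
    d = q.shape[-1]
    scores = jnp.einsum("bhqd,bhkd->bhqk", q, k) / jnp.sqrt(jnp.asarray(d, q.dtype))
    weights = jax.nn.softmax(scores, axis=-1)
    return jnp.einsum("bhqk,bhkd->bhqd", weights, v)
```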
Alternatives and similar repositories for jax-flash-attn3
Users interested in jax-flash-attn3 are comparing it to the libraries listed below.
- FlexAttention w/ FlashAttention3 Support ☆27 · Updated last year
- Open deep learning compiler stack for CPU, GPU and specialized accelerators ☆19 · Updated this week
- ☆32 · Updated last year
- vLLM adapter for a TGIS-compatible gRPC server. ☆47 · Updated this week
- An open source replication of the strawberry method that leverages Monte Carlo Search with PPO and/or DPO ☆29 · Updated this week
- ☆78 · Updated last year
- Implementation of the LDP module block in PyTorch and Zeta from the paper: "MobileVLM: A Fast, Strong and Open Vision Language Assistant …☆15 · Updated last year
- DPO, but faster 🚀 ☆46 · Updated last year
- APPy (Annotated Parallelism for Python) enables users to annotate loops and tensor expressions in Python with compiler directives akin to…☆29 · Updated 2 weeks ago
- Make triton easier ☆50 · Updated last year
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling ☆21 · Updated last month
- Code and data for paper "(How) do Language Models Track State?" ☆21 · Updated 9 months ago
- [WIP] Better (FP8) attention for Hopper ☆32 · Updated 10 months ago
- Multi-Layer Key-Value sharing experiments on Pythia models ☆34 · Updated last year
- My Implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated ☆33 · Updated last year
- mHC kernels implemented in CUDA ☆217 · Updated this week
- ☆16 · Updated last year
- ☆41 · Updated 3 months ago
- JAX Scalify: end-to-end scaled arithmetics ☆18 · Updated last year
- ☆61 · Updated 2 years ago
- PyTorch implementation of the Flash Spectral Transform Unit. ☆21 · Updated last year
- ☆117 · Updated 8 months ago
- Implementation of Hyena Hierarchy in JAX ☆10 · Updated 2 years ago
- Repository for CPU Kernel Generation for LLM Inference ☆27 · Updated 2 years ago
- GoldFinch and other hybrid transformer components ☆45 · Updated last year
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆51 · Updated 6 months ago
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA, using CUDA cores for the decoding stage of LLM inference. ☆46 · Updated 7 months ago
- Experimental scripts for researching data adaptive learning rate scheduling. ☆22 · Updated 2 years ago
- Implementation of 2-simplicial attention proposed by Clift et al. (2019) and the recent attempt to make practical in Fast and Simplex, Ro…☆46 · Updated 4 months ago
- A CUDA kernel for NHWC GroupNorm for PyTorch ☆22 · Updated last year