kyutai-labs / jax-flash-attn3
JAX bindings for the flash-attention3 kernels
☆16 · Updated last month
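FlashAttention-3 style kernels are normally consumed as a drop-in replacement for scaled dot-product attention. As an illustration of where such bindings slot into JAX code, the sketch below uses the standard `jax.nn.dot_product_attention` (available in recent JAX releases) as a stand-in; the actual `jax_flash_attn3` entry points are not documented here and may differ in name and signature, and all shapes are arbitrary example values.

```python
import jax
import jax.numpy as jnp

# Arbitrary example shapes: (batch, sequence length, heads, head dimension).
batch, seq_len, num_heads, head_dim = 2, 1024, 8, 64
kq, kk, kv = jax.random.split(jax.random.PRNGKey(0), 3)

q = jax.random.normal(kq, (batch, seq_len, num_heads, head_dim), jnp.bfloat16)
k = jax.random.normal(kk, (batch, seq_len, num_heads, head_dim), jnp.bfloat16)
v = jax.random.normal(kv, (batch, seq_len, num_heads, head_dim), jnp.bfloat16)

# Fused causal attention. A FlashAttention-3 binding would typically replace
# this call with its own kernel while keeping the same semantics; this is a
# stand-in, not the jax-flash-attn3 API itself.
out = jax.nn.dot_product_attention(q, k, v, is_causal=True)
print(out.shape)  # (2, 1024, 8, 64)
```

On recent JAX releases the same function also accepts `implementation="cudnn"` to request a fused GPU attention kernel where supported, which plays the same role as dedicated FlashAttention bindings.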
Alternatives and similar repositories for jax-flash-attn3
Users interested in jax-flash-attn3 are comparing it to the libraries listed below:
- Open deep learning compiler stack for CPU, GPU and specialized accelerators · ☆19 · Updated last week
- FlexAttention w/ FlashAttention3 Support · ☆27 · Updated last year
- ☆32 · Updated last year
- DPO, but faster 🚀 · ☆46 · Updated 11 months ago
- TensorRT LLM Benchmark Configuration · ☆13 · Updated last year
- ☆57 · Updated last year
- Repository for CPU Kernel Generation for LLM Inference · ☆26 · Updated 2 years ago
- Make Triton easier · ☆48 · Updated last year
- Code and data for the paper "(How) do Language Models Track State?" · ☆20 · Updated 7 months ago
- vLLM adapter for a TGIS-compatible gRPC server · ☆44 · Updated this week
- CUDA and Triton implementations of Flash Attention with SoftmaxN · ☆73 · Updated last year
- Standalone Flash Attention v2 kernel without libtorch dependency · ☆112 · Updated last year
- ☆109 · Updated 6 months ago
- Decoding Attention is specially optimized for MHA, MQA, GQA and MLA using CUDA cores for the decoding stage of LLM inference · ☆45 · Updated 5 months ago
- ☆71 · Updated 7 months ago
- No-GIL Python environment featuring NVIDIA Deep Learning libraries · ☆68 · Updated 7 months ago
- ☆21 · Updated 8 months ago
- Implementation of Hyena Hierarchy in JAX · ☆10 · Updated 2 years ago
- Implementation of IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs (ICLR 2024) · ☆25 · Updated 4 months ago
- ☆78 · Updated 11 months ago
- ☆50 · Updated 6 months ago
- A CUDA kernel for NHWC GroupNorm for PyTorch · ☆21 · Updated last year
- Official repository for the paper Local Linear Attention: An Optimal Interpolation of Linear and Softmax Attention For Test-Time Regressi… · ☆23 · Updated last month
- My Implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated · ☆33 · Updated last year
- Implementation of 2-simplicial attention proposed by Clift et al. (2019) and the recent attempt to make it practical in Fast and Simplex, Ro… · ☆47 · Updated 2 months ago
- Framework to reduce autotune overhead to zero for well-known deployments · ☆85 · Updated 2 months ago
- GPTQ inference TVM kernel · ☆39 · Updated last year
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry · ☆42 · Updated last year
- Benchmark tests supporting the TiledCUDA library · ☆17 · Updated 11 months ago
- APPy (Annotated Parallelism for Python) enables users to annotate loops and tensor expressions in Python with compiler directives akin to… · ☆25 · Updated this week