kyutai-labs / jax-flash-attn3
JAX bindings for the flash-attention3 kernels
☆16 · Updated 2 months ago
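For context, flash-attention kernels are fused, memory-efficient implementations of standard scaled dot-product attention. The sketch below is a plain-JAX reference of the computation such kernels accelerate; it is not the jax-flash-attn3 API (which this listing does not document), and the function name and tensor shapes are illustrative assumptions only.

```python
import jax
import jax.numpy as jnp

def reference_attention(q, k, v):
    # Plain scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.
    # A flash-attention kernel computes the same result in a single fused,
    # tiled pass instead of materializing the full [q_len, k_len] score matrix.
    d = q.shape[-1]
    scores = jnp.einsum("bhqd,bhkd->bhqk", q, k) / jnp.sqrt(d)
    weights = jax.nn.softmax(scores, axis=-1)
    return jnp.einsum("bhqk,bhkd->bhqd", weights, v)

# Illustrative shapes: [batch, heads, seq_len, head_dim].
key = jax.random.PRNGKey(0)
q, k, v = (jax.random.normal(k_, (1, 8, 128, 64)) for k_ in jax.random.split(key, 3))
out = reference_attention(q, k, v)  # [1, 8, 128, 64]
```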
Alternatives and similar repositories for jax-flash-attn3
Users interested in jax-flash-attn3 are also comparing it to the libraries listed below.
- Open deep learning compiler stack for cpu, gpu and specialized accelerators ☆19 · Updated last week
- FlexAttention w/ FlashAttention3 Support ☆27 · Updated last year
- ☆32 · Updated last year
- APPy (Annotated Parallelism for Python) enables users to annotate loops and tensor expressions in Python with compiler directives akin to… ☆27 · Updated this week
- CUDA and Triton implementations of Flash Attention with SoftmaxN. ☆73 · Updated last year
- Repository for CPU Kernel Generation for LLM Inference ☆27 · Updated 2 years ago
- ☆78 · Updated last year
- Persistent dense gemm for Hopper in `CuTeDSL` ☆15 · Updated 4 months ago
- My Implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated ☆33 · Updated last year
- Implementation of Hyena Hierarchy in JAX ☆10 · Updated 2 years ago
- ☆114 · Updated 6 months ago
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling ☆21 · Updated last week
- JAX Scalify: end-to-end scaled arithmetics ☆17 · Updated last year
- vLLM adapter for a TGIS-compatible gRPC server. ☆45 · Updated this week
- Make triton easier ☆49 · Updated last year
- [WIP] Better (FP8) attention for Hopper ☆32 · Updated 9 months ago
- GPTQ inference TVM kernel ☆40 · Updated last year
- Quantized Attention on GPU ☆44 · Updated last year
- An open-source replication of the strawberry method that leverages Monte Carlo Search with PPO and/or DPO ☆29 · Updated last week
- ☆58 · Updated 2 years ago
- DPO, but faster 🚀 ☆46 · Updated last year
- Standalone Flash Attention v2 kernel without libtorch dependency ☆112 · Updated last year
- Implementation of the LDP module block in PyTorch and Zeta from the paper: "MobileVLM: A Fast, Strong and Open Vision Language Assistant … ☆15 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆16 · Updated last year
- Benchmark tests supporting the TiledCUDA library. ☆18 · Updated last year
- Implementation of IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs (ICLR 2024). ☆25 · Updated 4 months ago
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference. ☆45 · Updated 5 months ago
- Official repository for the paper Local Linear Attention: An Optimal Interpolation of Linear and Softmax Attention For Test-Time Regressi… ☆23 · Updated 2 months ago
- An easily extensible framework for understanding and optimizing CUDA operators, intended for learning purposes only ☆18 · Updated last year
- High Performance FP8 GEMM Kernels for SM89 and later GPUs. ☆20 · Updated 10 months ago