alexzhang13 / flashattention2-custom-mask
Triton implementation of FlashAttention2 that adds support for custom masks.
☆110 · Updated 8 months ago
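For readers unfamiliar with what "custom masks" add over stock FlashAttention2, the sketch below shows the dense reference computation in plain PyTorch: an arbitrary boolean pattern, not just the built-in causal mask, decides which query/key pairs take part in the softmax. This is only an illustration under assumed shapes and names, not the repository's Triton kernel or its API.

```python
import torch
import torch.nn.functional as F

def masked_attention(q, k, v, mask=None):
    """q, k, v: (batch, heads, seq, dim). mask: boolean tensor broadcastable
    to (batch, heads, seq_q, seq_k); True = attend, False = block."""
    scale = q.shape[-1] ** -0.5
    scores = torch.matmul(q, k.transpose(-2, -1)) * scale   # (B, H, M, N)
    if mask is not None:
        scores = scores.masked_fill(~mask, float("-inf"))   # block disallowed pairs
    return torch.matmul(F.softmax(scores, dim=-1), v)       # (B, H, M, D)

# Hypothetical usage: a causal mask with extra, arbitrary pairs removed.
B, H, M, D = 2, 4, 128, 64
q, k, v = (torch.randn(B, H, M, D) for _ in range(3))
causal = torch.tril(torch.ones(M, M)).bool()                # standard lower-triangular mask
keep_diag = torch.eye(M, dtype=torch.bool)                  # never mask a token from itself
custom = causal & ((torch.rand(M, M) > 0.1) | keep_diag)    # drop ~10% of allowed pairs
out = masked_attention(q, k, v, mask=custom)                # (2, 4, 128, 64)
```

The kernels in this repository and several listed below fuse this same masked softmax into tiled, memory-efficient passes instead of materializing the full score matrix.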
Alternatives and similar repositories for flashattention2-custom-mask:
Users who are interested in flashattention2-custom-mask are comparing it to the libraries listed below.
- 🔥 A minimal training framework for scaling FLA models ☆117 · Updated this week
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training ☆189 · Updated 2 weeks ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆87 · Updated this week
- Triton-based implementation of Sparse Mixture of Experts. ☆212 · Updated 5 months ago
- Efficient Triton implementation of Native Sparse Attention. ☆142 · Updated 3 weeks ago
- ☆143 · Updated last year
- Odysseus: Playground of LLM Sequence Parallelism ☆69 · Updated 10 months ago
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆209 · Updated 8 months ago
- Repository of the paper "Accelerating Transformer Inference for Translation via Parallel Decoding" ☆116 · Updated last year
- A sparse attention kernel supporting mixed sparse patterns ☆200 · Updated 2 months ago
- XAttention: Block Sparse Attention with Antidiagonal Scoring ☆142 · Updated last month
- Fast and memory-efficient exact attention ☆68 · Updated 2 months ago
- ☆126 · Updated 2 months ago
- ☆69 · Updated 2 months ago
- [ICLR 2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM. ☆70 · Updated 4 months ago
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ☆274 · Updated 5 months ago
- Code for paper: [ICLR 2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference ☆100 · Updated 2 weeks ago
- Efficient 2:4 sparse training algorithms and implementations ☆54 · Updated 4 months ago
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆91 · Updated this week
- SeerAttention: Learning Intrinsic Sparse Attention in Your LLMs ☆97 · Updated this week
- ☆238 · Updated last year
- VeOmni: Scaling any Modality Model Training to any Accelerators with PyTorch native Training Framework ☆306 · Updated 3 weeks ago
- ☆103 · Updated 11 months ago
- An unofficial implementation of "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆35 · Updated 10 months ago
- ☆20 · Updated last month
- 16-fold memory access reduction with nearly no loss ☆91 · Updated last month
- ☆77 · Updated 2 weeks ago
- The official implementation of the paper "MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression" ☆127 · Updated 5 months ago
- ☆80 · Updated 3 weeks ago
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆40 · Updated this week