☆139 · May 29, 2025 · Updated 10 months ago
Alternatives and similar repositories for grouped-latent-attention
Users interested in grouped-latent-attention are comparing it to the libraries listed below.
- ☆22 · May 5, 2025 · Updated 11 months ago
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring (see the scoring sketch after this list) ☆277 · Jul 6, 2025 · Updated 9 months ago
- ☆51 · May 19, 2025 · Updated 11 months ago
- An efficient implementation of the NSA (Native Sparse Attention) kernel ☆133 · Jun 24, 2025 · Updated 9 months ago
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling ☆42 · Dec 29, 2025 · Updated 3 months ago
- Quantized Attention on GPU ☆44 · Nov 22, 2024 · Updated last year
- ☆241 · Nov 19, 2025 · Updated 5 months ago
- ☆16 · Sep 22, 2024 · Updated last year
- The evaluation framework for training-free sparse attention in LLMs ☆122 · Jan 27, 2026 · Updated 2 months ago
- Odysseus: Playground of LLM Sequence Parallelism ☆78 · Jun 17, 2024 · Updated last year
- study of cutlass ☆22 · Nov 10, 2024 · Updated last year
- ☆279 · Jun 6, 2025 · Updated 10 months ago
- Distributed Compiler based on Triton for Parallel Systems ☆1,411 · Updated this week
- ☆27 · Feb 26, 2026 · Updated last month
- Helpful tools and examples for working with flex-attention (see the usage sketch after this list) ☆1,174 · Apr 13, 2026 · Updated last week
- ☆54 · May 20, 2024 · Updated last year
- [ICCV 2025] Dynamic-VLM ☆28 · Dec 16, 2024 · Updated last year
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" (see the gating sketch after this list) ☆984 · Feb 5, 2026 · Updated 2 months ago
- User-friendly implementation of the Mixture-of-Sparse-Attention (MoSA). MoSA selects distinct tokens for each head with expert choice routing ☆28 · May 3, 2025 · Updated 11 months ago
- PyTorch bindings for CUTLASS grouped GEMM (see the reference semantics after this list) ☆186 · Apr 8, 2026 · Updated last week
- Benchmark tests supporting the TiledCUDA library. ☆18 · Nov 19, 2024 · Updated last year
- ☆64 · Nov 27, 2023 · Updated 2 years ago
- Code for the paper "Function-Space Learning Rates" ☆25 · Jun 3, 2025 · Updated 10 months ago
- ☆40 · Dec 14, 2025 · Updated 4 months ago
- Flash-Muon: An Efficient Implementation of Muon Optimizer (see the Newton-Schulz sketch after this list) ☆248 · Jun 15, 2025 · Updated 10 months ago
- ☆595 · Sep 23, 2025 · Updated 6 months ago
- A lightweight design for computation-communication overlap. ☆226 · Jan 20, 2026 · Updated 2 months ago
- ☆15 · Mar 2, 2025 · Updated last year
- [NeurIPS 2025 Spotlight] TPA: Tensor ProducT ATTenTion Transformer (T6) (https://arxiv.org/abs/2501.06425) ☆448 · Jan 26, 2026 · Updated 2 months ago
- ☆309 · Jul 10, 2025 · Updated 9 months ago
- A Quirky Assortment of CuTe Kernels ☆924 · Apr 13, 2026 · Updated last week
- ☆36 · Feb 26, 2024 · Updated 2 years ago
- Linear Attention Sequence Parallelism (LASP) (see the recurrence sketch after this list) ☆88 · Jun 4, 2024 · Updated last year
- [ICML 2025] LaCache: Ladder-Shaped KV Caching for Efficient Long-Context Modeling of Large Language Models ☆18 · Nov 4, 2025 · Updated 5 months ago
- ☆38 · Aug 7, 2025 · Updated 8 months ago
- VideoNSA: Native Sparse Attention Scales Video Understanding ☆83 · Nov 16, 2025 · Updated 5 months ago
- ☆130 · Feb 4, 2026 · Updated 2 months ago
- Experiments Notebook of "Understanding the Skill Gap in Recurrent Language Models: The Role of the Gather-and-Aggregate Mechanism" ☆15 · Apr 30, 2025 · Updated 11 months ago
- An experimental communicating attention kernel based on DeepEP. ☆35 · Jul 29, 2025 · Updated 8 months ago
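A few of the techniques above are concrete enough that a short, hedged sketch helps. All sketches below are illustrative PyTorch, not the repos' actual APIs. First, XAttention's antidiagonal scoring: the importance of each tile of the pre-softmax attention map is estimated from sums of its antidiagonal entries, and only high-scoring tiles are attended. The sketch restricts itself to each tile's main antidiagonal, a simplification of the paper's strided antidiagonal sums; `antidiagonal_block_scores` and its shapes are assumptions.

```python
import torch

def antidiagonal_block_scores(scores: torch.Tensor, block: int) -> torch.Tensor:
    """Estimate the importance of each (block x block) tile of a pre-softmax
    attention map by summing each tile's main antidiagonal (a simplification
    of XAttention's strided antidiagonal sums)."""
    S = scores.size(0)
    assert S % block == 0
    nb = S // block
    tiles = scores.reshape(nb, block, nb, block).permute(0, 2, 1, 3)  # [nb, nb, blk, blk]
    antidiag = torch.flip(tiles, dims=(-1,)).diagonal(dim1=-2, dim2=-1)
    return antidiag.sum(-1)  # [nb, nb] tile scores; keep the top-k tiles

q = torch.randn(256, 64)
k = torch.randn(256, 64)
scores = q @ k.T / 64 ** 0.5
tile_scores = antidiagonal_block_scores(scores, block=32)  # [8, 8]
```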
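Second, the flex-attention helpers build on PyTorch's `torch.nn.attention.flex_attention` API (PyTorch ≥ 2.5). A minimal causal example; the mask function and tensor sizes here are illustrative.

```python
import torch
from torch.nn.attention.flex_attention import flex_attention, create_block_mask

def causal(b, h, q_idx, kv_idx):
    # mask_mod: return True where attention is allowed
    return q_idx >= kv_idx

B, H, S, D = 2, 8, 1024, 64
device = "cuda"  # flex_attention is primarily exercised on GPU
q, k, v = (torch.randn(B, H, S, D, device=device, dtype=torch.float16) for _ in range(3))

# Precompute a block-sparse mask so fully masked tiles are skipped entirely
block_mask = create_block_mask(causal, B=None, H=None, Q_LEN=S, KV_LEN=S, device=device)
out = flex_attention(q, k, v, block_mask=block_mask)  # [B, H, S, D]
```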
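Third, the NSA implementations above run three branches (compressed, selected, and sliding-window attention) and combine them with per-branch learned gates. The Triton kernels fuse the branches; this sketch shows only the gating arithmetic, with stub branch outputs and an assumed gate-projection shape.

```python
import torch

def nsa_combine(q, branch_outs, gate_proj):
    # q and each branch output: [B, H, S, D]; gate_proj: nn.Linear(D, len(branch_outs))
    gates = torch.sigmoid(gate_proj(q))              # [B, H, S, n_branches]
    return sum(g.unsqueeze(-1) * o                   # gate each branch, then sum
               for g, o in zip(gates.unbind(-1), branch_outs))

B, H, S, D = 1, 4, 128, 64
q = torch.randn(B, H, S, D)
branch_outs = [torch.randn(B, H, S, D) for _ in range(3)]  # cmp / slc / swa stubs
gate_proj = torch.nn.Linear(D, 3)
out = nsa_combine(q, branch_outs, gate_proj)               # [B, H, S, D]
```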
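Fourth, grouped GEMM: the CUTLASS binding launches one fused kernel over many independent matmuls, each group with its own shape (typical of MoE expert layers). The loop below is only the reference semantics that the fused kernel replaces; the binding's actual entry points differ.

```python
import torch

def grouped_gemm_reference(xs, ws):
    # One matmul per group; group i may have its own token count.
    # A fused grouped-GEMM kernel computes all of these in a single launch.
    return [x @ w for x, w in zip(xs, ws)]

# MoE-style example: three experts with uneven token counts
xs = [torch.randn(n, 512) for n in (7, 128, 3)]
ws = [torch.randn(512, 1024) for _ in range(3)]
outs = grouped_gemm_reference(xs, ws)  # shapes: [7,1024], [128,1024], [3,1024]
```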
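Fifth, Flash-Muon accelerates the Newton-Schulz iteration at the heart of the Muon optimizer. Below is a plain-PyTorch version of that iteration, using the coefficients from the widely circulated Muon reference code; Flash-Muon's fused GPU kernels are what make this fast, not this loop.

```python
import torch

def newton_schulz_orthogonalize(G: torch.Tensor, steps: int = 5, eps: float = 1e-7):
    # Quintic Newton-Schulz iteration used by Muon to approximately
    # orthogonalize the momentum matrix.
    a, b, c = 3.4445, -4.7750, 2.0315
    X = G.bfloat16()
    X = X / (X.norm() + eps)          # scale so the spectral norm is <= 1
    if G.size(0) > G.size(1):
        X = X.T                       # iterate on the wide orientation
    for _ in range(steps):
        A = X @ X.T
        B = b * A + c * A @ A
        X = a * X + B @ X
    if G.size(0) > G.size(1):
        X = X.T
    return X.to(G.dtype)

G = torch.randn(256, 128)             # e.g. a momentum buffer for a weight matrix
update = newton_schulz_orthogonalize(G)
```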
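Finally, LASP parallelizes causal linear attention across devices by passing the running KV state along the sequence dimension. The sketch below is the single-device recurrence it distributes, with feature maps and normalization omitted for brevity:

```python
import torch

def linear_attention_reference(q, k, v):
    # Causal linear attention as a running-state recurrence:
    #   S_t = S_{t-1} + k_t v_t^T,   o_t = q_t S_t
    # LASP shards the sequence and passes this state between devices;
    # here it is the (slow) single-device reference.
    B, H, S, D = q.shape
    state = q.new_zeros(B, H, D, D)
    outs = []
    for t in range(S):
        state = state + k[:, :, t, :, None] * v[:, :, t, None, :]  # outer product k v^T
        outs.append(torch.einsum("bhd,bhde->bhe", q[:, :, t], state))
    return torch.stack(outs, dim=2)  # [B, H, S, D]

q, k, v = (torch.randn(1, 4, 64, 32) for _ in range(3))
out = linear_attention_reference(q, k, v)
```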