☆279 · Jun 6, 2025 · Updated 11 months ago
Alternatives and similar repositories for log-linear-attention
Users interested in log-linear-attention are comparing it to the libraries listed below.
- ☆244 · Nov 19, 2025 · Updated 5 months ago
- An efficient implementation of the NSA (Native Sparse Attention) kernel · ☆133 · Jun 24, 2025 · Updated 10 months ago
- ☆22 · May 5, 2025 · Updated last year
- ☆139 · May 29, 2025 · Updated 11 months ago
- Official repository for Flash Local Linear Attention · ☆23 · Apr 23, 2026 · Updated last week
- ☆140 · Aug 18, 2025 · Updated 8 months ago
- 🚀 Efficient implementations for emerging model architectures · ☆5,032 · Updated this week
- ☆19 · Dec 4, 2025 · Updated 5 months ago
- ☆36 · Mar 7, 2025 · Updated last year
- Experiment notebooks for "Understanding the Skill Gap in Recurrent Language Models: The Role of the Gather-and-Aggregate Mechanism" · ☆15 · Apr 30, 2025 · Updated last year
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" · ☆993 · Feb 5, 2026 · Updated 3 months ago
- ☆45 · Nov 1, 2025 · Updated 6 months ago
- FlexAttention w/ FlashAttention3 support · ☆27 · Oct 5, 2024 · Updated last year
- Flash-Muon: An Efficient Implementation of the Muon Optimizer · ☆248 · Jun 15, 2025 · Updated 10 months ago
- ☆119 · May 19, 2025 · Updated 11 months ago
- Distributed attention for linear scalability with ultra-long-context, heterogeneous-data training · ☆795 · Apr 21, 2026 · Updated 2 weeks ago
- Stick-breaking attention · ☆63 · Jul 1, 2025 · Updated 10 months ago
- Code for the paper "Function-Space Learning Rates" · ☆24 · Jun 3, 2025 · Updated 11 months ago
- Code for the NeurIPS 2024 Spotlight "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" · ☆92 · Oct 30, 2024 · Updated last year
- Distributed compiler based on Triton for parallel systems · ☆1,420 · Apr 22, 2026 · Updated 2 weeks ago
- Helpful tools and examples for working with flex-attention · ☆1,182 · Apr 13, 2026 · Updated 3 weeks ago
- Combining SOAP and Muon · ☆20 · Feb 11, 2025 · Updated last year
- FFPA: extends FlashAttention-2 with Split-D, ~O(1) SRAM complexity for large headdim, 1.8x~3x speedup vs. SDPA 🎉 · ☆276 · Apr 29, 2026 · Updated last week
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) · ☆25 · Jun 6, 2024 · Updated last year
- ☆54 · May 20, 2024 · Updated last year
- TileGraph, an experimental DNN compiler that uses static code generation and kernel-fusion techniques · ☆11 · Sep 18, 2024 · Updated last year
- An attempt to improve the speed of the Newton-Schulz iteration, starting from the Dion implementation · ☆36 · Updated this week
- Official implementation of "Hydra: Bidirectional State Space Models Through Generalized Matrix Mixers" · ☆171 · Jan 30, 2025 · Updated last year
- A sparse attention kernel supporting mixed sparse patterns · ☆503 · Jan 18, 2026 · Updated 3 months ago
- ☆265 · Jul 11, 2024 · Updated last year
- Code release for the paper "Test-Time Training Done Right" · ☆462 · Jan 5, 2026 · Updated 4 months ago
- A bunch of kernels that might make stuff slower 😉 · ☆88 · Apr 24, 2026 · Updated last week
- ☆73 · Feb 27, 2026 · Updated 2 months ago
- Kernels, of the mega variety :) · ☆717 · Updated this week
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" · ☆112 · Oct 11, 2025 · Updated 6 months ago
- A domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels · ☆5,928 · Updated this week
- ☆66 · Apr 26, 2025 · Updated last year
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2-bit Quantization for KV Cache · ☆390 · Nov 20, 2025 · Updated 5 months ago
- [NeurIPS 2025] Official implementation of "Scaling Diffusion Transformers Efficiently via μP" · ☆98 · Nov 2, 2025 · Updated 6 months ago