yifanzhang-pro / HLA
Official Project Page for HLA: Higher-order Linear Attention (https://arxiv.org/abs/2510.27258)
☆44 · Updated Jan 6, 2026
Alternatives and similar repositories for HLA
Users that are interested in HLA are comparing it to the libraries listed below
- Implementation of 2-simplicial attention proposed by Clift et al. (2019) and the recent attempt to make practical in Fast and Simplex, Ro… · ☆46 · Updated Sep 2, 2025
- coded with and corrected by Google Anti-Gravity · ☆13 · Updated Nov 23, 2025
- ☆42 · Updated Jan 24, 2026
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling · ☆42 · Updated Dec 29, 2025
- Official code for the NeurIPS 2025 paper "RAT: Bridging RNN Efficiency and Attention Accuracy in Language Modeling" (https://arxiv.org/abs/25…) · ☆23 · Updated Dec 10, 2025
- Fluid Language Model Benchmarking · ☆26 · Updated Sep 16, 2025
- Parallel Associative Scan for Language Models · ☆18 · Updated Jan 8, 2024
- ☆19 · Updated Dec 4, 2025
- Efficient PScan implementation in PyTorch · ☆17 · Updated Jan 2, 2024
- ☆20 · Updated Dec 24, 2024
- Xmixers: A collection of SOTA efficient token/channel mixers · ☆28 · Updated Sep 4, 2025
- FlashTile is a CUDA Tile IR compiler that is compatible with NVIDIA's tileiras, targeting SM70 through SM121 NVIDIA GPUs. · ☆48 · Updated Feb 6, 2026
- ☆32 · Updated May 31, 2025
- A bunch of kernels that might make stuff slower 😉 · ☆75 · Updated this week
- An experimental communicating attention kernel based on DeepEP. · ☆35 · Updated Jul 29, 2025
- Official repo for "Error-Free Linear Attention is a Free Lunch: Exact Solution from Continuous-Time Dynamics" · ☆71 · Updated Jan 13, 2026
- FlashRNN - Fast RNN Kernels with I/O Awareness · ☆174 · Updated Oct 20, 2025
- DeciMamba: Exploring the Length Extrapolation Potential of Mamba (ICLR 2025) · ☆32 · Updated Apr 9, 2025
- ☆35 · Updated Feb 26, 2024
- manipulating cointegrated pairs to achieve a market-neutral strategy that outperforms indices · ☆12 · Updated Jan 12, 2021
- Awesome Triton Resources · ☆39 · Updated Apr 27, 2025
- ☆26 · Updated Dec 3, 2025
- RL significantly improves the reasoning capability of Qwen2.5-1.5B-Instruct · ☆31 · Updated Feb 23, 2025
- ☆86 · Updated Feb 10, 2026
- CUDA implementation of hidden Markov model training and classification · ☆31 · Updated May 3, 2025
- ☆44 · Updated Nov 1, 2025
- JAX/Flax implementation of the Hyena Hierarchy · ☆34 · Updated Apr 27, 2023
- An efficient implementation of the NSA (Native Sparse Attention) kernel · ☆129 · Updated Jun 24, 2025
- ☆10 · Updated Apr 26, 2023
- Well-annotated word2vec source code with detailed bilingual (Chinese/English) comments · ☆10 · Updated Oct 3, 2021
- Implementation of MaskBit, proposed by ByteDance AI · ☆83 · Updated Nov 12, 2024
- Official repository for the paper "Local Linear Attention: An Optimal Interpolation of Linear and Softmax Attention For Test-Time Regressi…" · ☆23 · Updated Oct 1, 2025
- ☆14 · Updated May 14, 2019
- New version of mpMap · ☆12 · Updated Jul 19, 2020
- my first ever browser game · ☆10 · Updated Jun 21, 2025
- ☆12 · Updated Oct 29, 2024
- Scrapes Baidu Index data · ☆12 · Updated Dec 8, 2022
- ☆20 · Updated Sep 11, 2025
- [ACL 2025] Squeezed Attention: Accelerating Long Prompt LLM Inference · ☆56 · Updated Nov 20, 2024
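Several entries above (HLA itself, the error-free linear attention repo, and the local linear attention paper) revolve around the same core idea: causal linear attention replaces the softmax with a feature map so attention can be maintained as a running key-value state in O(1) memory per step. A minimal NumPy sketch of that recurrence, for orientation only; the function name and the ELU+1 feature map are illustrative choices, not taken from any of the listed repositories, which each use their own formulations:

```python
import numpy as np

def causal_linear_attention(Q, K, V):
    """Causal linear attention via the running state S_t = S_{t-1} + k_t v_t^T.

    Q, K, V: (T, d) query/key and (T, dv) value arrays.
    Feature map phi(x) = ELU(x) + 1 (a common positivity choice; illustrative).
    """
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))  # ELU(x) + 1 > 0
    Qf, Kf = phi(Q), phi(K)
    T, d = Q.shape
    S = np.zeros((d, V.shape[1]))   # running sum of outer products k v^T
    z = np.zeros(d)                 # running sum of keys, for normalization
    out = np.empty_like(V)
    for t in range(T):
        S += np.outer(Kf[t], V[t])
        z += Kf[t]
        out[t] = (Qf[t] @ S) / (Qf[t] @ z + 1e-9)
    return out
```

Because the state `(S, z)` summarizes the whole prefix, the same computation can also be expressed as a chunked or fully parallel scan, which is what the kernel-oriented repositories in the list optimize.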
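Two of the entries (the parallel associative scan and PScan repositories) exploit the same trick: the first-order linear recurrence h_t = a_t * h_{t-1} + b_t has an associative combine, ((a1, b1), (a2, b2)) -> (a1*a2, a2*b1 + b2), so it can be parallelized as a prefix scan instead of a sequential loop. A pure-NumPy sketch under that assumption; the function names are mine and the actual APIs in those repositories differ:

```python
import numpy as np

def combine(left, right):
    """Associative combine: applying h -> a1*h + b1, then h -> a2*h + b2,
    is the single map h -> (a1*a2)*h + (a2*b1 + b2)."""
    a1, b1 = left
    a2, b2 = right
    return a1 * a2, a2 * b1 + b2

def pscan(a, b):
    """Inclusive scan computing h_t = a_t * h_{t-1} + b_t with h_0 = 0.

    Recursive halving to illustrate that only associativity is needed;
    real implementations use work-efficient GPU scans over chunks.
    """
    def scan(xs):
        if len(xs) == 1:
            return xs
        mid = len(xs) // 2
        left, right = scan(xs[:mid]), scan(xs[mid:])
        carry = left[-1]  # cumulative map over the entire left half
        return left + [combine(carry, r) for r in right]
    pairs = scan(list(zip(a, b)))
    return np.array([h for _, h in pairs])  # the b-component is h_t when h_0 = 0
```

The same structure underlies scan-based linear-attention and state-space kernels: each element's update is folded into an associative operator, so the recurrence parallelizes across the sequence dimension.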