Doraemonzzz / xmixers
Xmixers: A collection of SOTA efficient token/channel mixers
☆28 · Updated Sep 4, 2025
Alternatives and similar repositories for xmixers
Users interested in xmixers are comparing it to the libraries listed below.
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) · ☆24 · Updated Jun 6, 2024
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" · ☆27 · Updated Apr 17, 2024
- Official Code Repository for the paper "Key-value memory in the brain" · ☆31 · Updated Feb 25, 2025
- HGRN2: Gated Linear RNNs with State Expansion · ☆56 · Updated Aug 20, 2024
- Efficient PScan implementation in PyTorch · ☆17 · Updated Jan 2, 2024
- Here we will test various linear attention designs. · ☆62 · Updated Apr 25, 2024
- RWKV-X is a Linear Complexity Hybrid Language Model based on the RWKV architecture, integrating Sparse Attention to improve the model's l… · ☆54 · Updated Jan 12, 2026
- Official repository for the paper Local Linear Attention: An Optimal Interpolation of Linear and Softmax Attention For Test-Time Regressi… · ☆23 · Updated Oct 1, 2025
- ☆24 · Updated Sep 25, 2024
- Official Implementation of ACL2023: Don't Parse, Choose Spans! Continuous and Discontinuous Constituency Parsing via Autoregressive Span … · ☆14 · Updated Aug 25, 2023
- ☆13 · Updated Jan 7, 2025
- PyTorch implementation for PaLM: A Hybrid Parser and Language Model. · ☆10 · Updated Jan 7, 2020
- Advanced Formal Language Theory (263-5352-00L; Spring 2023) · ☆10 · Updated Feb 21, 2023
- ☆11 · Updated Oct 11, 2023
- High-performance RMSNorm implementation using SM core storage (registers and shared memory) · ☆26 · Updated Jan 22, 2026
- Code for the paper "Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns"☆18Mar 15, 2024Updated last year
- ☆14Jul 12, 2022Updated 3 years ago
- ☆52May 19, 2025Updated 8 months ago
- ☆16Dec 9, 2023Updated 2 years ago
- Flash-Muon: An Efficient Implementation of Muon Optimizer☆235Jun 15, 2025Updated 8 months ago
- ☆31Jul 2, 2023Updated 2 years ago
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling☆42Dec 29, 2025Updated last month
- [WIP] Better (FP8) attention for Hopper☆32Feb 24, 2025Updated 11 months ago
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling☆22Feb 9, 2026Updated last week
- ☆86Feb 10, 2026Updated last week
- ☆223Nov 19, 2025Updated 2 months ago
- Source-to-Source Debuggable Derivatives in Pure Python☆15Jan 23, 2024Updated 2 years ago
- Parallel Associative Scan for Language Models☆18Jan 8, 2024Updated 2 years ago
- Code for the paper "Cottention: Linear Transformers With Cosine Attention"☆20Nov 15, 2025Updated 3 months ago
- Stabilizing Gradients for Deep Neural Networks via Efficient SVD Parameterization · ☆16 · Updated Jun 5, 2018
- ☆14 · Updated Nov 20, 2022
- ☆106 · Updated Mar 9, 2024
- ☆35 · Updated Nov 22, 2024
- Accelerate LLM preference tuning via prefix sharing with a single line of code · ☆51 · Updated Jul 4, 2025
- ☆20 · Updated May 30, 2024
- ☆39 · Updated Apr 5, 2024
- Official Project Page for HLA: Higher-order Linear Attention (https://arxiv.org/abs/2510.27258) · ☆44 · Updated Jan 6, 2026
- ☆28 · Updated Oct 2, 2025
- [ACL 2025] Squeezed Attention: Accelerating Long Prompt LLM Inference · ☆56 · Updated Nov 20, 2024