[NeurIPS 2025 Spotlight] TPA: Tensor ProducT ATTenTion Transformer (T6) (https://arxiv.org/abs/2501.06425)
☆455 · Jan 26, 2026 · Updated 3 months ago
Alternatives and similar repositories for TPA
Users interested in TPA are comparing it to the libraries listed below.
- ☆139 · May 29, 2025 · Updated 11 months ago
- [ICLR 2025 & COLM 2025] Official PyTorch implementation of the Forgetting Transformer and Adaptive Computation Pruning · ☆150 · Feb 25, 2026 · Updated 2 months ago
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling · ☆42 · Dec 29, 2025 · Updated 4 months ago
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models · ☆344 · Feb 23, 2025 · Updated last year
- Implementation of SmoothCache, a project aimed at speeding up Diffusion Transformer (DiT)-based GenAI models with error-guided caching. · ☆48 · Jul 17, 2025 · Updated 9 months ago
- 🚀 Efficient implementations for emerging model architectures · ☆5,032 · May 1, 2026 · Updated last week
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… · ☆376 · Dec 12, 2024 · Updated last year
- Linear Attention for Efficient Bidirectional Sequence Modeling · ☆16 · May 13, 2025 · Updated 11 months ago
- ☆20 · Aug 14, 2025 · Updated 8 months ago
- ☆70 · Jul 8, 2025 · Updated 10 months ago
- Muon is Scalable for LLM Training · ☆1,473 · Aug 3, 2025 · Updated 9 months ago
- [ICLR 2026] RPG: KL-Regularized Policy Gradient (https://arxiv.org/abs/2505.17508) · ☆74 · Apr 29, 2026 · Updated last week
- [ACL Findings 2026] Official Implementation of "FastKV: Decoupling of Context Reduction and KV Cache Compression for Prefill-Decoding Acc… · ☆31 · Apr 14, 2026 · Updated 3 weeks ago
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads · ☆543 · Feb 10, 2025 · Updated last year
- Layer-Condensed KV cache w/ 10 times larger batch size, fewer params and less computation. Dramatic speed up with better task performance… · ☆156 · Apr 7, 2025 · Updated last year
- User-friendly implementation of the Mixture-of-Sparse-Attention (MoSA). MoSA selects distinct tokens for each head with expert choice rou… · ☆29 · May 3, 2025 · Updated last year
- ☆63 · Oct 3, 2024 · Updated last year
- Efficient LLM Inference over Long Sequences · ☆394 · Jun 25, 2025 · Updated 10 months ago
- The open-source materials for the paper "Sparsing Law: Towards Large Language Models with Greater Activation Sparsity" · ☆30 · Nov 12, 2024 · Updated last year
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… · ☆835 · Mar 6, 2025 · Updated last year
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning · ☆92 · Feb 14, 2025 · Updated last year
- Source code for the paper "Positional Attention: Expressivity and Learnability of Algorithmic Computation" · ☆14 · May 26, 2025 · Updated 11 months ago
- Ring attention implementation with flash attention · ☆1,015 · Sep 10, 2025 · Updated 7 months ago
- Clustered Compositional Embeddings · ☆13 · Oct 25, 2023 · Updated 2 years ago
- (ACL 2025 oral) SCOPE: Optimizing KV Cache Compression in Long-context Generation · ☆35 · May 28, 2025 · Updated 11 months ago
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring · ☆277 · Jul 6, 2025 · Updated 10 months ago
- Code release for DynamicTanh (DyT) · ☆1,038 · Mar 30, 2025 · Updated last year
- HGRN2: Gated Linear RNNs with State Expansion · ☆57 · Aug 20, 2024 · Updated last year
- LongRoPE is a novel method that extends the context window of pre-trained LLMs to an impressive 2048k tokens. · ☆285 · Oct 28, 2025 · Updated 6 months ago
- ☆130 · Feb 4, 2026 · Updated 3 months ago
- ☆118 · Jul 23, 2025 · Updated 9 months ago
- The official implementation of the paper "SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction" · ☆50 · Oct 18, 2024 · Updated last year
- [ICLR 2025 Oral] Block Diffusion: Interpolating Between Autoregressive and Diffusion Language Models · ☆999 · Jul 10, 2025 · Updated 9 months ago
- [ICLR 2026] When it comes to optimizers, it's always better to be safe than sorry · ☆413 · Sep 26, 2025 · Updated 7 months ago
- [CVPR 2025] Breaking the Low-Rank Dilemma of Linear Attention · ☆41 · Mar 11, 2025 · Updated last year
- ☆19 · Jan 10, 2025 · Updated last year
- The original Shared Recurrent Memory Transformer implementation · ☆36 · Jul 11, 2025 · Updated 9 months ago
- Efficient Triton Kernels for LLM Training · ☆6,331 · Apr 30, 2026 · Updated last week
- Combining SOAP and MUON · ☆20 · Feb 11, 2025 · Updated last year
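One of the listed techniques is compact enough to show directly: DynamicTanh (DyT), from the "Code release for DynamicTanh (DyT)" entry, replaces normalization layers with an element-wise `gamma * tanh(alpha * x) + beta`, where `alpha` is a learnable scalar and `gamma`/`beta` are per-channel learnable vectors. A minimal NumPy sketch of that formula (the function name and parameter defaults here are illustrative assumptions, not the repo's actual API):

```python
import numpy as np

def dyt(x, alpha=0.5, gamma=None, beta=None):
    """Element-wise DyT(x) = gamma * tanh(alpha * x) + beta.

    Proposed as a drop-in replacement for LayerNorm: tanh bounds
    activations instead of normalizing them by batch statistics.
    alpha is a scalar; gamma and beta are per-feature vectors
    (initialized to ones and zeros here for illustration).
    """
    d = x.shape[-1]
    gamma = np.ones(d) if gamma is None else gamma
    beta = np.zeros(d) if beta is None else beta
    return gamma * np.tanh(alpha * x) + beta

# With the identity-like defaults, small inputs pass through almost
# unchanged while large inputs saturate toward +/-1.
out = dyt(np.array([[0.0, 100.0]]))
```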