[NeurIPS 2025 Spotlight] TPA: Tensor ProducT ATTenTion Transformer (T6) (https://arxiv.org/abs/2501.06425)
☆448 · Jan 26, 2026 · Updated 2 months ago
Alternatives and similar repositories for TPA
Users interested in TPA are comparing it to the repositories listed below.
- ☆139 · May 29, 2025 · Updated 10 months ago
- [ICLR 2025 & COLM 2025] Official PyTorch implementation of the Forgetting Transformer and Adaptive Computation Pruning ☆150 · Feb 25, 2026 · Updated last month
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling ☆42 · Dec 29, 2025 · Updated 3 months ago
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆343 · Feb 23, 2025 · Updated last year
- Implementation of SmoothCache, a project aimed at speeding up Diffusion Transformer (DiT) based GenAI models with error-guided caching. ☆48 · Jul 17, 2025 · Updated 8 months ago
- [ICLR 2026] RPG: KL-Regularized Policy Gradient (https://arxiv.org/abs/2505.17508) ☆65 · Mar 30, 2026 · Updated 2 weeks ago
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… ☆375 · Dec 12, 2024 · Updated last year
- Linear Attention for Efficient Bidirectional Sequence Modeling ☆16 · May 13, 2025 · Updated 11 months ago
- 🚀 Efficient implementations for emerging model architectures ☆4,878 · Updated this week
- ☆20 · Aug 14, 2025 · Updated 8 months ago
- ☆69 · Jul 8, 2025 · Updated 9 months ago
- Muon is Scalable for LLM Training ☆1,453 · Aug 3, 2025 · Updated 8 months ago
- Official Implementation of FastKV: Decoupling of Context Reduction and KV Cache Compression for Prefill-Decoding Acceleration ☆30 · Apr 7, 2026 · Updated last week
- User-friendly implementation of the Mixture-of-Sparse-Attention (MoSA). MoSA selects distinct tokens for each head with expert choice rou… ☆28 · May 3, 2025 · Updated 11 months ago
- Layer-Condensed KV cache w/ 10 times larger batch size, fewer params and less computation. Dramatic speed up with better task performance… ☆157 · Apr 7, 2025 · Updated last year
- ☆63 · Oct 3, 2024 · Updated last year
- Efficient LLM Inference over Long Sequences ☆393 · Jun 25, 2025 · Updated 9 months ago
- The open-source materials for the paper "Sparsing Law: Towards Large Language Models with Greater Activation Sparsity". ☆30 · Nov 12, 2024 · Updated last year
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… ☆826 · Mar 6, 2025 · Updated last year
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆92 · Feb 14, 2025 · Updated last year
- Source code for the paper "Positional Attention: Expressivity and Learnability of Algorithmic Computation" ☆14 · May 26, 2025 · Updated 10 months ago
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆536 · Feb 10, 2025 · Updated last year
- Clustered Compositional Embeddings ☆12 · Oct 25, 2023 · Updated 2 years ago
- [ACL 2025 Oral] SCOPE: Optimizing KV Cache Compression in Long-context Generation ☆34 · May 28, 2025 · Updated 10 months ago
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring ☆276 · Jul 6, 2025 · Updated 9 months ago
- ☆129 · Feb 4, 2026 · Updated 2 months ago
- Code release for DynamicTanh (DyT) ☆1,035 · Mar 30, 2025 · Updated last year
- HGRN2: Gated Linear RNNs with State Expansion ☆57 · Aug 20, 2024 · Updated last year
- LongRoPE is a novel method that can extend the context window of pre-trained LLMs to an impressive 2048k tokens. ☆283 · Oct 28, 2025 · Updated 5 months ago
- The original Shared Recurrent Memory Transformer implementation ☆33 · Jul 11, 2025 · Updated 9 months ago
- The official implementation of the paper "SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction" ☆50 · Oct 18, 2024 · Updated last year
- Ring attention implementation with flash attention ☆1,006 · Sep 10, 2025 · Updated 7 months ago
- [ICLR 2025 Oral] Block Diffusion: Interpolating Between Autoregressive and Diffusion Language Models ☆992 · Jul 10, 2025 · Updated 9 months ago
- [ICLR 2026] When it comes to optimizers, it's always better to be safe than sorry ☆412 · Sep 26, 2025 · Updated 6 months ago
- ☆19 · Jan 10, 2025 · Updated last year
- [CVPR 2025] Breaking the Low-Rank Dilemma of Linear Attention ☆41 · Mar 11, 2025 · Updated last year
- Efficient Triton Kernels for LLM Training ☆6,265 · Apr 8, 2026 · Updated last week
- Combining SOAP and MUON ☆19 · Feb 11, 2025 · Updated last year
- Code for the paper "Function-Space Learning Rates" ☆25 · Jun 3, 2025 · Updated 10 months ago
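One entry above describes memory layers: a trainable key-value lookup that adds parameters to a model without increasing FLOPs, since each token only touches the top-k matching table rows. A minimal NumPy sketch of that idea; all sizes and names here are illustrative assumptions, not taken from any of the listed repositories:

```python
import numpy as np

# Illustrative memory-layer lookup (not from any listed repo):
# a large trainable key/value table where each query activates only
# top_k of num_keys rows, so parameter count scales with the table
# while per-token compute stays small.
rng = np.random.default_rng(0)
d, num_keys, top_k = 16, 1024, 4
keys = rng.standard_normal((num_keys, d))    # trainable keys
values = rng.standard_normal((num_keys, d))  # trainable values

def memory_lookup(query):
    scores = keys @ query                           # score every key
    idx = np.argpartition(scores, -top_k)[-top_k:]  # select top-k rows
    w = np.exp(scores[idx] - scores[idx].max())
    w /= w.sum()                                    # softmax over the k winners
    return w @ values[idx]                          # weighted sum of their values

out = memory_lookup(rng.standard_normal(d))
print(out.shape)  # (16,)
```

In a real implementation the selection is typically made differentiable and sharded (e.g. product keys), but the shape of the computation is the same: dense parameters, sparse access.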