☆110 · Feb 25, 2025 · Updated last year
Alternatives and similar repositories for linear-attention-and-beyond-slides
Users interested in linear-attention-and-beyond-slides are comparing it to the libraries listed below.
- PyTorch implementation of the Flash Spectral Transform Unit. ☆22 · Sep 19, 2024 · Updated last year
- ☆36 · Mar 7, 2025 · Updated last year
- Benchmark tests supporting the TiledCUDA library. ☆18 · Nov 19, 2024 · Updated last year
- [ICLR 2025 & COLM 2025] Official PyTorch implementation of the Forgetting Transformer and Adaptive Computation Pruning ☆146 · Feb 25, 2026 · Updated last month
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ☆978 · Feb 5, 2026 · Updated last month
- ☆119 · May 19, 2025 · Updated 10 months ago
- [NeurIPS 2025] Mixing Expert Knowledge: Bring Human Thoughts Back to The Game of Go. Our model is originally named InternThinker-Go, and cal… ☆23 · Jan 26, 2026 · Updated 2 months ago
- Triton implementation of bi-directional (non-causal) linear attention ☆73 · Mar 1, 2026 · Updated 3 weeks ago
- Source-to-Source Debuggable Derivatives in Pure Python ☆15 · Jan 23, 2024 · Updated 2 years ago
- 🚀 Efficient implementations of state-of-the-art linear attention models (a minimal sketch of the core linear-attention computation appears after this list) ☆4,692 · Updated this week
- Welcome to the 'In Context Learning Theory' Reading Group ☆30 · Nov 8, 2024 · Updated last year
- Measuring the Signal to Noise Ratio in Language Model Evaluation ☆29 · Aug 19, 2025 · Updated 7 months ago
- ☆133 · Jun 6, 2025 · Updated 9 months ago
- Awesome Triton Resources ☆39 · Apr 27, 2025 · Updated 11 months ago
- ☆65 · Apr 26, 2025 · Updated 11 months ago
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training ☆262 · Aug 9, 2025 · Updated 7 months ago
- ☆13 · Jun 18, 2024 · Updated last year
- ☆13 · May 12, 2025 · Updated 10 months ago
- [ICLR 2025 Spotlight] Code release for "Sharpness-Aware Minimization Efficiently Selects Flatter Minima Late in Training" ☆18 · Feb 20, 2025 · Updated last year
- ☆19 · Nov 4, 2025 · Updated 4 months ago
- Code accompanying the paper "Toward Guidance-Free AR Visual Generation via Condition Contrastive Alignment" ☆36 · Feb 11, 2025 · Updated last year
- ☆52 · May 19, 2025 · Updated 10 months ago
- ☆68 · Jul 8, 2025 · Updated 8 months ago
- Quantized Attention on GPU ☆44 · Nov 22, 2024 · Updated last year
- Official Implementation of ACL 2023: Don't Parse, Choose Spans! Continuous and Discontinuous Constituency Parsing via Autoregressive Span … ☆14 · Aug 25, 2023 · Updated 2 years ago
- ☆21 · Mar 3, 2025 · Updated last year
- ☆38 · Aug 7, 2025 · Updated 7 months ago
- 🔥 A minimal training framework for scaling FLA models ☆359 · Nov 15, 2025 · Updated 4 months ago
- ☆54 · Jun 6, 2024 · Updated last year
- ☆97 · Mar 26, 2025 · Updated last year
- [ICML 2025] Fast and Low-Cost Genomic Foundation Models via Outlier Removal. ☆18 · Jun 19, 2025 · Updated 9 months ago
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring ☆274 · Jul 6, 2025 · Updated 8 months ago
- ☆32 · Jul 2, 2025 · Updated 8 months ago
- A sparse attention kernel supporting mixed sparse patterns ☆485 · Jan 18, 2026 · Updated 2 months ago
- Flash-Linear-Attention models beyond language ☆21 · Aug 28, 2025 · Updated 7 months ago
- FlexAttention w/ FlashAttention3 Support ☆27 · Oct 5, 2024 · Updated last year
- PyTorch compilation tutorial covering TorchScript, torch.fx, and Slapo ☆17 · Mar 13, 2023 · Updated 3 years ago
- An efficient implementation of the NSA (Native Sparse Attention) kernel (see the block-sparse sketch after this list) ☆132 · Jun 24, 2025 · Updated 9 months ago
- Official repository for the paper Local Linear Attention: An Optimal Interpolation of Linear and Softmax Attention For Test-Time Regressi… ☆23 · Oct 1, 2025 · Updated 5 months ago
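
Many of the repositories above center on linear attention, where replacing the softmax kernel with a positive feature map makes the attention product associative, cutting the cost from quadratic to linear in sequence length. Below is a minimal, illustrative PyTorch sketch of non-causal linear attention, assuming the common elu+1 feature map; it shows the computation only and is not the API of any repository listed here.

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    """Non-causal linear attention. q, k, v: (batch, heads, seq, dim)."""
    # Positive feature map elu(x) + 1 keeps the normalizer well defined.
    q, k = F.elu(q) + 1.0, F.elu(k) + 1.0
    # Associativity: form K^T V first, so cost is O(n * d^2), not O(n^2 * d).
    kv = torch.einsum("bhnd,bhne->bhde", k, v)
    # Normalizer: q_i dot (sum over j of k_j), one scalar per query position.
    z = 1.0 / (torch.einsum("bhnd,bhd->bhn", q, k.sum(dim=2)) + eps)
    return torch.einsum("bhnd,bhde,bhn->bhne", q, kv, z)

if __name__ == "__main__":
    q, k, v = (torch.randn(2, 4, 128, 64) for _ in range(3))
    print(linear_attention(q, k, v).shape)  # torch.Size([2, 4, 128, 64])
```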
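Several other entries (Native Sparse Attention, XAttention, the mixed-pattern sparse kernel) build on block-sparse attention: scores are computed only for selected query-block/key-block pairs. The sketch below emulates the idea with a block-level boolean mask over dense attention; real kernels skip masked blocks entirely rather than materializing the full score matrix, and the mask pattern here is a hypothetical example, not any listed kernel's format.

```python
import torch

def block_sparse_attention(q, k, v, block_mask, block_size=64):
    """q, k, v: (batch, heads, seq, dim). block_mask: bool tensor of shape
    (seq//block_size, seq//block_size); True lets a query block attend to
    a key block."""
    n = q.shape[2]
    # Expand the block-level mask to token resolution.
    token_mask = (block_mask.repeat_interleave(block_size, 0)
                            .repeat_interleave(block_size, 1))[:n, :n]
    scores = (q @ k.transpose(-2, -1)) / q.shape[-1] ** 0.5
    # A real block-sparse kernel never computes the masked blocks at all.
    scores = scores.masked_fill(~token_mask, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v

if __name__ == "__main__":
    q, k, v = (torch.randn(1, 4, 256, 64) for _ in range(3))
    nb = 256 // 64
    # Example pattern: each block attends to itself and to the first block,
    # so every row keeps at least one unmasked key and softmax stays finite.
    mask = torch.eye(nb, dtype=torch.bool)
    mask[:, 0] = True
    print(block_sparse_attention(q, k, v, mask).shape)  # torch.Size([1, 4, 256, 64])
```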