☆54, updated May 20, 2024
Alternatives and similar repositories for seq_icl
Users interested in seq_icl are comparing it to the libraries listed below.
- ☆36, updated Feb 26, 2024
- ☆11, updated Oct 11, 2023
- A MAD laboratory to improve AI architecture designs 🧪 (☆140, updated Dec 17, 2024)
- Understand and test language model architectures on synthetic tasks. (☆262, updated Mar 16, 2026)
- PyTorch implementation for PaLM: A Hybrid Parser and Language Model. (☆10, updated Jan 7, 2020)
- Parallel Associative Scan for Language Models (☆18, updated Jan 8, 2024)
- Code for the paper: https://arxiv.org/pdf/2309.06979.pdf (☆21, updated Jul 29, 2024)
- Here we will test various linear attention designs. (☆62, updated Apr 25, 2024)
- ☆22, updated Dec 15, 2023
- Official PyTorch Implementation of the Longhorn Deep State Space Model (☆57, updated Dec 4, 2024)
- ☆58, updated Jul 9, 2024
- Combining SOAP and MUON (☆19, updated Feb 11, 2025)
- Example of using Epochraft to train HuggingFace transformers models with PyTorch FSDP (☆11, updated Jan 29, 2024)
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" (☆27, updated Apr 17, 2024)
- Official code for the paper "Attention as a Hypernetwork" (☆55, updated Feb 24, 2026)
- HGRN2: Gated Linear RNNs with State Expansion (☆56, updated Aug 20, 2024)
- Blog post (☆17, updated Feb 16, 2024)
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) (☆24, updated Jun 6, 2024)
- Code implementing "Efficient Parallelization of a Ubiquitous Sequential Computation" (Heinsen, 2023) (☆98, updated Dec 5, 2024)
- Code for the paper "Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns" (☆18, updated Mar 15, 2024)
- Accelerated First Order Parallel Associative Scan (☆196, updated Jan 7, 2026)
- Efficient PScan implementation in PyTorch (☆17, updated Jan 2, 2024)
- ☆19, updated Dec 4, 2025
- FlexAttention w/ FlashAttention3 Support (☆27, updated Oct 5, 2024)
- ☆20, updated May 30, 2024
- ☆35, updated Apr 12, 2024
- Official implementation of the paper "Linear Transformers with Learnable Kernel Functions are Better In-Context Models" (☆169, updated Jan 16, 2025)
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear at… (☆103, updated Jun 14, 2024)
- GoldFinch and other hybrid transformer components (☆45, updated Jul 20, 2024)
- [NeurIPS 2023 spotlight] Official implementation of HGRN in our NeurIPS 2023 paper - Hierarchically Gated Recurrent Neural Network for Se… (☆68, updated Apr 24, 2024)
- ☆27, updated Feb 26, 2026
- A fusion of a linear layer and a cross-entropy loss, written for PyTorch in Triton. (☆75, updated Aug 2, 2024)
- Experiment of using Tangent to autodiff Triton (☆82, updated Jan 22, 2024)
- Source-to-Source Debuggable Derivatives in Pure Python (☆15, updated Jan 23, 2024)
- AGaLiTe: Approximate Gated Linear Transformers for Online Reinforcement Learning (Published in TMLR) (☆23, updated Oct 15, 2024)
- ☆15, updated Jul 13, 2025
- RWKV-X is a Linear Complexity Hybrid Language Model based on the RWKV architecture, integrating Sparse Attention to improve the model's l… (☆56, updated this week)
- ☆11, updated Apr 23, 2023
- Experiments Notebook of "Understanding the Skill Gap in Recurrent Language Models: The Role of the Gather-and-Aggregate Mechanism" (☆15, updated Apr 30, 2025)
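Several of the entries above (the parallel associative scan, PScan, and Heinsen repositories) center on the same idea: the first-order recurrence x_t = a_t · x_{t-1} + b_t that underlies linear RNNs can be computed in parallel, because the per-step affine maps compose associatively. Below is a minimal plain-Python sketch of that technique; it is illustrative only and not taken from any of the listed repositories.

```python
def combine(first, second):
    # Compose two affine maps x -> a*x + b. Applying `first` = (a1, b1)
    # then `second` = (a2, b2) gives x -> (a1*a2)*x + (a2*b1 + b2).
    a1, b1 = first
    a2, b2 = second
    return (a1 * a2, a2 * b1 + b2)

def prefix_scan(pairs):
    # Inclusive prefix scan over an associative operator (Hillis-Steele
    # style): O(log n) sweeps, each combining elements `shift` apart.
    out = list(pairs)
    n, shift = len(out), 1
    while shift < n:
        out = [out[i] if i < shift else combine(out[i - shift], out[i])
               for i in range(n)]
        shift *= 2
    return out

def parallel_recurrence(a, b, x0):
    # Solve x_t = a_t * x_{t-1} + b_t for every t with one prefix scan:
    # each scanned pair (A_t, B_t) satisfies x_t = A_t * x0 + B_t.
    return [A * x0 + B for A, B in prefix_scan(list(zip(a, b)))]
```

On real hardware each sweep runs as a batched elementwise operation over whole tensors, which is what the CUDA/Triton implementations listed above exploit; production kernels typically also work in log-space or chunk the sequence for numerical stability and memory efficiency.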