test-time-training / ttt-lm-kernels
Inference Speed Benchmark for Learning to (Learn at Test Time): RNNs with Expressive Hidden States
☆51 · Updated 6 months ago
Alternatives and similar repositories for ttt-lm-kernels:
Users interested in ttt-lm-kernels are comparing it to the libraries listed below.
- ☆98 · Updated 10 months ago
- A repository for DenseSSMs ☆87 · Updated 9 months ago
- Official PyTorch Implementation of Gated Delta Networks: Improving Mamba2 with Delta Rule ☆73 · Updated 2 weeks ago
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆48 · Updated last year
- HGRN2: Gated Linear RNNs with State Expansion ☆52 · Updated 4 months ago
- Stick-breaking attention ☆41 · Updated this week
- [EVA ICLR'23; LARA ICML'22] Efficient attention mechanisms via control variates, random features, and importance sampling ☆80 · Updated last year
- [EMNLP 2022] Official implementation of Transnormer from the paper "The Devil in Linear Transformer" ☆58 · Updated last year
- Linear Attention Sequence Parallelism (LASP) ☆68 · Updated 7 months ago
- Implementation of Griffin from the paper: "Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models" ☆51 · Updated 2 months ago
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models ☆28 · Updated 7 months ago
- Here we will test various linear attention designs. ☆58 · Updated 8 months ago
- Official implementation of Phi-Mamba. A MOHAWK-distilled model (Transformers to SSMs: Distilling Quadratic Knowledge to Subquadratic Mode… ☆92 · Updated 4 months ago
- ☆24 · Updated 3 months ago
- [NeurIPS 2023 spotlight] Official implementation of HGRN in our NeurIPS 2023 paper - Hierarchically Gated Recurrent Neural Network for Se… ☆62 · Updated 8 months ago
- Triton implementation of bi-directional (non-causal) linear attention ☆33 · Updated this week
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆96 · Updated 3 months ago
- [ICML 2024 Oral] This project is the official implementation of our Accurate LoRA-Finetuning Quantization of LLMs via Information Retenti… ☆60 · Updated 9 months ago
- [EMNLP 2023 Main] Sparse Low-rank Adaptation of Pre-trained Language Models ☆70 · Updated 10 months ago
- Fast and memory-efficient exact attention ☆52 · Updated last month
- The official implementation of "DAPE: Data-Adaptive Positional Encoding for Length Extrapolation" ☆33 · Updated 3 months ago
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear at… ☆99 · Updated 7 months ago
- Official code for the paper "Attention as a Hypernetwork" ☆23 · Updated 6 months ago
- [ICML'24] The official implementation of "Rethinking Optimization and Architecture for Tiny Language Models" ☆119 · Updated this week
- Some preliminary explorations of Mamba's context scaling. ☆206 · Updated 11 months ago
- Official implementation of "Hydra: Bidirectional State Space Models Through Generalized Matrix Mixers" ☆118 · Updated 5 months ago
- Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM. ☆52 · Updated 3 weeks ago
- A Closer Look into Mixture-of-Experts in Large Language Models ☆41 · Updated 5 months ago
- PyTorch implementation of "From Sparse to Soft Mixtures of Experts" ☆49 · Updated last year
- Code for the paper "Patch-Level Training for Large Language Models" ☆75 · Updated 2 months ago