Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models"
☆251 · Updated Jan 31, 2025
Alternatives and similar repositories for lolcats
Users interested in lolcats are comparing it to the repositories listed below.
- ☆66 · Updated Jul 8, 2025
- Code for the paper https://arxiv.org/pdf/2309.06979.pdf · ☆21 · Updated Jul 29, 2024
- ☆14 · Updated Nov 20, 2022
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models · ☆35 · Updated Jun 12, 2024
- Understand and test language model architectures on synthetic tasks. · ☆257 · Updated Feb 24, 2026
- train with kittens! · ☆63 · Updated Oct 25, 2024
- FlexAttention w/ FlashAttention3 support · ☆27 · Updated Oct 5, 2024
- Long Context Extension and Generalization in LLMs · ☆63 · Updated Sep 21, 2024
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" · ☆248 · Updated Jun 6, 2025
- HGRN2: Gated Linear RNNs with State Expansion · ☆56 · Updated Aug 20, 2024
- Awesome Triton Resources · ☆39 · Updated Apr 27, 2025
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" · ☆110 · Updated Oct 11, 2025
- Layer-condensed KV cache w/ 10× larger batch size, fewer params and less computation. Dramatic speed-up with better task performance… · ☆156 · Updated Apr 7, 2025
- Code for the paper "Cottention: Linear Transformers with Cosine Attention" · ☆20 · Updated Nov 15, 2025
- Training hybrid models for dummies. · ☆29 · Updated Nov 1, 2025
- Some preliminary explorations of Mamba's context scaling. · ☆218 · Updated Feb 8, 2024
- A repository for research on medium-sized language models. · ☆78 · Updated May 23, 2024
- [NeurIPS 2024] Official repository of "The Mamba in the Llama: Distilling and Accelerating Hybrid Models" · ☆238 · Updated Oct 14, 2025
- ☆29 · Updated Jul 9, 2024
- 🔥 A minimal training framework for scaling FLA models · ☆352 · Updated Nov 15, 2025
- ☆20 · Updated May 30, 2024
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… · ☆373 · Updated Dec 12, 2024
- 🚀 Efficient implementations of state-of-the-art linear attention models · ☆4,474 · Updated this week
- Official implementation of "Hydra: Bidirectional State Space Models Through Generalized Matrix Mixers" · ☆170 · Updated Jan 30, 2025
- Tile primitives for speedy kernels · ☆3,202 · Updated Feb 24, 2026
- Code for the paper "Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns" · ☆18 · Updated Mar 15, 2024
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward exp… · ☆226 · Updated Sep 18, 2025
- PyTorch implementation of models from the Zamba2 series. · ☆187 · Updated Jan 23, 2025
- ☆123 · Updated Feb 4, 2026
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads · ☆527 · Updated Feb 10, 2025
- [EMNLP 2023] Official implementation of the ETSC algorithm (Exact Toeplitz-to-SSM Conversion) from the paper "Accelerating Toeplitz…" · ☆14 · Updated Oct 17, 2023
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU Clusters · ☆133 · Updated Dec 3, 2024
- [ICLR 2025] Official PyTorch implementation of "Gated Delta Networks: Improving Mamba2 with Delta Rule" · ☆477 · Updated Feb 17, 2026
- Make Triton easier · ☆50 · Updated Jun 12, 2024
- Benchmark tests supporting the TiledCUDA library. · ☆18 · Updated Nov 19, 2024
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) · ☆163 · Updated Apr 13, 2025
- Efficient LLM Inference over Long Sequences · ☆393 · Updated Jun 25, 2025
- [ICML 2024 NGSM workshop] Associative Recurrent Memory Transformer implementation and scripts for training and evaluation · ☆62 · Updated this week
- Mamba support for TransformerLens · ☆19 · Updated Sep 17, 2024