HazyResearch / lolcats
Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models"
☆251 · Updated Jan 31, 2025
Alternatives and similar repositories for lolcats
Users interested in lolcats are comparing it to the repositories listed below.
- ☆66 · Updated Jul 8, 2025
- Code for the paper: https://arxiv.org/pdf/2309.06979.pdf ☆21 · Updated Jul 29, 2024
- ☆14 · Updated Nov 20, 2022
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models ☆35 · Updated Jun 12, 2024
- Understand and test language model architectures on synthetic tasks. ☆252 · Updated Jan 12, 2026
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆248 · Updated Jun 6, 2025
- train with kittens! ☆63 · Updated Oct 25, 2024
- FlexAttention w/ FlashAttention3 Support ☆27 · Updated Oct 5, 2024
- Layer-Condensed KV cache w/ 10 times larger batch size, fewer params and less computation. Dramatic speed up with better task performance… ☆157 · Updated Apr 7, 2025
- Long Context Extension and Generalization in LLMs ☆62 · Updated Sep 21, 2024
- HGRN2: Gated Linear RNNs with State Expansion ☆56 · Updated Aug 20, 2024
- Awesome Triton Resources ☆39 · Updated Apr 27, 2025
- Code for ICLR 2025 Paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆109 · Updated Oct 11, 2025
- Code for the paper "Cottention: Linear Transformers With Cosine Attention" ☆20 · Updated Nov 15, 2025
- Training hybrid models for dummies. ☆29 · Updated Nov 1, 2025
- Some preliminary explorations of Mamba's context scaling. ☆218 · Updated Feb 8, 2024
- [NeurIPS 2024] Official Repository of The Mamba in the Llama: Distilling and Accelerating Hybrid Models ☆236 · Updated Oct 14, 2025
- A repository for research on medium sized language models. ☆77 · Updated May 23, 2024
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs (a rough sketch of the idea follows this list). Conceptually, spars… ☆371 · Updated Dec 12, 2024
- ☆29 · Updated Jul 9, 2024
- 🔥 A minimal training framework for scaling FLA models ☆344 · Updated Nov 15, 2025
- ☆20 · Updated May 30, 2024
- 🚀 Efficient implementations of state-of-the-art linear attention models ☆4,379 · Updated this week
- Official implementation of "Hydra: Bidirectional State Space Models Through Generalized Matrix Mixers" ☆169 · Updated Jan 30, 2025
- Tile primitives for speedy kernels ☆3,139 · Updated this week
- Code for the paper "Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns" ☆18 · Updated Mar 15, 2024
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward exp… ☆226 · Updated Sep 18, 2025
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆163 · Updated Apr 13, 2025
- PyTorch implementation of models from the Zamba2 series. ☆186 · Updated Jan 23, 2025
- ☆120 · Updated Feb 4, 2026
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆524 · Updated Feb 10, 2025
- [EMNLP 2023] Official implementation of the algorithm ETSC (Exact Toeplitz-to-SSM Conversion) from our EMNLP 2023 paper - Accelerating Toeplitz… ☆14 · Updated Oct 17, 2023
- Efficient LLM Inference over Long Sequences ☆394 · Updated Jun 25, 2025
- [ICLR 2025] Official PyTorch Implementation of Gated Delta Networks: Improving Mamba2 with Delta Rule ☆452 · Updated Sep 15, 2025
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆132 · Updated Dec 3, 2024
- Make triton easier ☆50 · Updated Jun 12, 2024
- Benchmark tests supporting the TiledCUDA library. ☆18 · Updated Nov 19, 2024
- XmodelLM ☆38 · Updated Nov 19, 2024
- [ICML 24 NGSM workshop] Associative Recurrent Memory Transformer implementation and scripts for training and evaluation ☆61 · Updated this week
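
The "memory layers" entry above describes a trainable key-value lookup that adds parameters to a model without adding FLOPs. As a rough, non-authoritative illustration of that idea (this is not code from that repository; the class name, slot count, and top-k are made up for the example), a minimal PyTorch sketch of a top-k key-value memory layer might look like the following. Note that the scoring step here is still dense over all slots; only the value gather is sparse.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleMemoryLayer(nn.Module):
    """Illustrative trainable key-value memory lookup (hypothetical sketch).

    Each token's hidden state is projected to a query, scored against a
    learned key table, and the top-k matching values are mixed by softmax
    weights and added back residually. Parameter count grows with num_slots,
    while only k value vectors are gathered per token. (Scoring here is
    dense over all keys; product-key schemes avoid that, but this keeps the
    sketch short.)
    """

    def __init__(self, d_model: int, num_slots: int = 4096, topk: int = 8):
        super().__init__()
        self.query_proj = nn.Linear(d_model, d_model, bias=False)
        self.keys = nn.Parameter(torch.randn(num_slots, d_model) * 0.02)
        self.values = nn.Parameter(torch.randn(num_slots, d_model) * 0.02)
        self.topk = topk

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        q = self.query_proj(x)                       # (batch, seq, d_model)
        scores = q @ self.keys.t()                   # (batch, seq, num_slots)
        top_scores, top_idx = scores.topk(self.topk, dim=-1)
        weights = F.softmax(top_scores, dim=-1)      # (batch, seq, k)
        picked = self.values[top_idx]                # (batch, seq, k, d_model)
        out = (weights.unsqueeze(-1) * picked).sum(dim=-2)
        return x + out                               # residual connection


if __name__ == "__main__":
    layer = SimpleMemoryLayer(d_model=64)
    h = torch.randn(2, 16, 64)
    print(layer(h).shape)  # torch.Size([2, 16, 64])
```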