Mamba support for TransformerLens
☆19 · Sep 17, 2024 · Updated last year
Alternatives and similar repositories for MambaLens
Users interested in MambaLens are comparing it to the libraries listed below.
- Unofficial implementation of the paper "Exploring the Space of Key-Value-Query Models with Intention" ☆12 · May 24, 2023 · Updated 2 years ago
- Code for the paper "Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns" ☆18 · Mar 15, 2024 · Updated last year
- A 20M RWKV v6 that can solve nonograms ☆14 · Oct 18, 2024 · Updated last year
- A tiny, easily hackable implementation of a feature dashboard. ☆15 · Oct 21, 2025 · Updated 4 months ago
- Benchmark tests supporting the TiledCUDA library. ☆18 · Nov 19, 2024 · Updated last year
- An Attention Superoptimizer ☆22 · Jan 20, 2025 · Updated last year
- ☆16 · Nov 28, 2024 · Updated last year
- [ICML 2023] "Data Efficient Neural Scaling Law via Model Reusing" by Peihao Wang, Rameswar Panda, Zhangyang Wang ☆14 · Jan 4, 2024 · Updated 2 years ago
- Expanding linear RNN state-transition matrix eigenvalues to include negatives improves state-tracking tasks and language modeling without… ☆21 · Mar 15, 2025 · Updated 11 months ago
- Official code for the paper "Attention as a Hypernetwork" ☆51 · Updated this week
- Code for my ICLR 2024 TinyPapers paper "Prune and Tune: Improving Efficient Pruning Techniques for Massive Language Models" ☆16 · May 26, 2023 · Updated 2 years ago
- ☆20 · May 30, 2024 · Updated last year
- ☆17 · Updated this week
- Minimum Description Length probing for neural network representations ☆20 · Jan 28, 2025 · Updated last year
- PyTorch and NNsight implementation of AtP* (Kramár et al., 2024, DeepMind) ☆20 · Jan 19, 2025 · Updated last year
- Einsum with einops-style variable names ☆18 · May 16, 2024 · Updated last year
- ☆66 · Jul 8, 2025 · Updated 7 months ago
- PyCUDA-based PyTorch Extension Made Easy ☆27 · Mar 22, 2024 · Updated last year
- [ICML 24 NGSM workshop] Associative Recurrent Memory Transformer implementation and scripts for training and evaluation ☆61 · Updated this week
- ☆24 · Sep 25, 2024 · Updated last year
- Sparse Autoencoder Training Library ☆55 · May 1, 2025 · Updated 10 months ago
- A Suite for Parallel Inference of Diffusion Transformers (DiTs) on multi-GPU Clusters ☆57 · Jul 23, 2024 · Updated last year
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) ☆24 · Jun 6, 2024 · Updated last year
- ☆30 · Aug 2, 2024 · Updated last year
- A toolkit for scaling law research ⚖ ☆57 · Jan 27, 2025 · Updated last year
- ☆12 · Sep 30, 2018 · Updated 7 years ago
- Checkpointable dataset utilities for foundation model training ☆32 · Jan 29, 2024 · Updated 2 years ago
- ☆50 · Jun 16, 2025 · Updated 8 months ago
- Official repo for "Error-Free Linear Attention is a Free Lunch: Exact Solution from Continuous-Time Dynamics" ☆71 · Jan 13, 2026 · Updated last month
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Apr 17, 2024 · Updated last year
- ☆33 · Oct 4, 2024 · Updated last year
- APPy (Annotated Parallelism for Python) enables users to annotate loops and tensor expressions in Python with compiler directives akin to… ☆30 · Jan 28, 2026 · Updated last month
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in pyTO… ☆58 · Feb 9, 2026 · Updated 3 weeks ago
- ☆35 · Apr 8, 2025 · Updated 10 months ago
- FlexAttention with FlashAttention3 support ☆27 · Oct 5, 2024 · Updated last year
- Official implementation of the paper "You Do Not Fully Utilize Transformer's Representation Capacity" ☆32 · May 28, 2025 · Updated 9 months ago
- DropIT: Dropping Intermediate Tensors for Memory-Efficient DNN Training (ICLR 2023) ☆32 · Apr 8, 2023 · Updated 2 years ago
- Sparse Backpropagation for Mixture-of-Expert Training ☆29 · Jul 2, 2024 · Updated last year
- A library for efficient patching and automatic circuit discovery. ☆90 · Dec 31, 2025 · Updated 2 months ago