Phylliida / MambaLens
Mamba support for TransformerLens
☆16 · Updated 8 months ago
Alternatives and similar repositories for MambaLens
Users interested in MambaLens are comparing it to the libraries listed below.
- ☆52 · Updated last year
- Official repository of paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Updated last year
- ☆61 · Updated last year
- ☆79 · Updated 9 months ago
- ☆14 · Updated last year
- Code and Configs for Asynchronous RLHF: Faster and More Efficient RL for Language Models ☆54 · Updated last month
- ☆31 · Updated last year
- Sparse Autoencoder Training Library ☆52 · Updated last month
- Investigating the generalization behavior of LM probes trained to predict truth labels: (1) from one annotator to another, and (2) from e… ☆26 · Updated last year
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. ☆62 · Updated 4 months ago
- Universal Neurons in GPT2 Language Models ☆29 · Updated last year
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆73 · Updated 7 months ago
- ☆27 · Updated 9 months ago
- The official implementation for Gated Attention for Large Language Models: Non-linearity, Sparsity, and Attention-Sink-Free ☆40 · Updated 3 weeks ago
- Stick-breaking attention ☆56 · Updated 2 months ago
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆75 · Updated 6 months ago
- ☆32 · Updated 4 months ago
- Simple and efficient pytorch-native transformer training and inference (batched) ☆75 · Updated last year
- Official code repo for paper "Great Memory, Shallow Reasoning: Limits of kNN-LMs" ☆23 · Updated last month
- A library for efficient patching and automatic circuit discovery. ☆65 · Updated last month
- Experiments on the impact of depth in transformers and SSMs. ☆30 · Updated 7 months ago
- ☆45 · Updated last year
- Replicating O1 inference-time scaling laws ☆87 · Updated 6 months ago
- ☆19 · Updated 10 months ago
- Official implementation of the transformer (TF) architecture suggested in a paper entitled "Looped Transformers as Programmable Computers… ☆27 · Updated 2 years ago
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆54 · Updated last year
- ☆83 · Updated 9 months ago
- This repo is based on https://github.com/jiaweizzhao/GaLore ☆28 · Updated 8 months ago
- ☆22 · Updated 7 months ago
- [NeurIPS 2023] Sparse Modular Activation for Efficient Sequence Modeling ☆36 · Updated last year