Phylliida / MambaLens
Mamba support for TransformerLens
☆17 · Updated 9 months ago
Alternatives and similar repositories for MambaLens
Users interested in MambaLens are comparing it to the repositories listed below.
- ☆53 · Updated last year
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆74 · Updated 7 months ago
- ☆65 · Updated last year
- Sparse Autoencoder Training Library ☆52 · Updated last month
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆75 · Updated 6 months ago
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Updated last year
- ☆32 · Updated last year
- Universal Neurons in GPT2 Language Models ☆29 · Updated last year
- This repo is based on https://github.com/jiaweizzhao/GaLore ☆28 · Updated 9 months ago
- ☆79 · Updated 10 months ago
- ☆20 · Updated 11 months ago
- ☆32 · Updated 5 months ago
- A library for efficient patching and automatic circuit discovery. ☆67 · Updated 2 months ago
- Stick-breaking attention ☆57 · Updated last week
- Investigating the generalization behavior of LM probes trained to predict truth labels: (1) from one annotator to another, and (2) from e… ☆27 · Updated last year
- ☆48 · Updated last year
- This repository contains the code used for the experiments in the paper "Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity… ☆27 · Updated last year
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆95 · Updated 3 weeks ago
- ☆85 · Updated 10 months ago
- The official implementation for "Gated Attention for Large Language Models: Non-linearity, Sparsity, and Attention-Sink-Free" ☆44 · Updated last month
- Official Code Repository for the paper "Key-value memory in the brain" ☆26 · Updated 4 months ago
- Official code repo for the paper "Great Memory, Shallow Reasoning: Limits of kNN-LMs" ☆23 · Updated last month
- Official implementation of the transformer (TF) architecture suggested in the paper "Looped Transformers as Programmable Computers… ☆27 · Updated 2 years ago
- ☆19 · Updated 3 months ago
- ☆14 · Updated last year
- ☆96 · Updated 9 months ago
- Official implementation of "BERTs are Generative In-Context Learners" ☆28 · Updated 3 months ago
- [NeurIPS 2024 Spotlight] Code and data for the paper "Finding Transformer Circuits with Edge Pruning". ☆56 · Updated 3 months ago
- ☆34 · Updated last year
- The official repository for "SkyLadder: Better and Faster Pretraining via Context Window Scheduling" ☆32 · Updated 3 months ago