tdooms / bilinear-decomposition
Official repo for the paper "Weight-based Decomposition: A Case for Bilinear MLPs"
☆22 · Updated last month
Alternatives and similar repositories for bilinear-decomposition
Users interested in bilinear-decomposition are comparing it to the repositories listed below.
- Implementation of the BatchTopK activation function for training sparse autoencoders (SAEs); a sketch of the idea follows this list ☆47 · Updated 2 months ago
- ☆29 · Updated 7 months ago
- Sparse and discrete interpretability tool for neural networks ☆63 · Updated last year
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆80 · Updated 9 months ago
- ☆20 · Updated last year
- This repository includes code to reproduce the tables in "Loss Landscapes are All You Need: Neural Network Generalization Can Be Explaine… ☆39 · Updated 2 years ago
- Universal Neurons in GPT2 Language Models ☆30 · Updated last year
- Sparse Autoencoder Training Library ☆54 · Updated 4 months ago
- ☆106 · Updated 7 months ago
- ☆58 · Updated 11 months ago
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆53 · Updated last year
- ☆53 · Updated last year
- ☆107 · Updated 2 years ago
- Why Do We Need Weight Decay in Modern Deep Learning? [NeurIPS 2024] ☆67 · Updated last year
- Personal implementation of ASIF by Antonio Norelli ☆25 · Updated last year
- Replicating and dissecting the git-re-basin project in one-click-replication Colabs ☆36 · Updated 3 years ago
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆84 · Updated 10 months ago
- This repository contains the code used for the experiments in the paper "Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity… ☆28 · Updated last year
- ☆33 · Updated 8 months ago
- PyTorch library for Active Fine-Tuning ☆91 · Updated 2 weeks ago
- PyTorch and NNsight implementation of AtP* (Kramár et al., 2024, DeepMind) ☆19 · Updated 8 months ago
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs ☆92 · Updated 10 months ago
- ☆15 · Updated last year
- ☆23 · Updated 7 months ago
- ☆85 · Updated last year
- [ICLR 2025] Monet: Mixture of Monosemantic Experts for Transformers ☆72 · Updated 3 months ago
- Language models scale reliably with over-training and on downstream tasks ☆99 · Updated last year
- ☆45 · Updated 8 months ago
- PaCE: Parsimonious Concept Engineering for Large Language Models (NeurIPS 2024) ☆39 · Updated 10 months ago
- [ICML 2025] Roll the dice & look before you leap: Going beyond the creative limits of next-token prediction ☆70 · Updated 4 months ago
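
The BatchTopK entry at the top of this list names a concrete activation rule for SAE training: rather than keeping the k largest latents in each sample, the top k × batch_size activations are kept across the whole batch, so sparsity is enforced on average rather than per example. Below is a minimal PyTorch sketch of that general idea, not the listed repository's actual implementation; the function name and tensor shapes are illustrative assumptions.

```python
import torch

def batch_topk(acts: torch.Tensor, k: int) -> torch.Tensor:
    """Hypothetical sketch of the BatchTopK rule: keep the
    batch_size * k largest activations across the entire batch
    (instead of k per sample) and zero out the rest.

    acts: (batch, n_latents) encoder activations of an SAE.
    """
    batch_size = acts.shape[0]
    flat = acts.reshape(-1)
    # Indices of the batch_size * k largest values over the whole batch.
    top = torch.topk(flat, k=batch_size * k)
    sparse = torch.zeros_like(flat)
    sparse[top.indices] = flat[top.indices]
    return sparse.reshape(acts.shape)

# Example: 4 samples, 16 latents, an average of 2 active latents per sample.
acts = torch.randn(4, 16)
print(batch_topk(acts, k=2).count_nonzero())  # tensor(8)
```

Note that individual samples may end up with more or fewer than k active latents; only the batch-level total is fixed, which is the point of the batch-wise variant.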