tdooms / bilinear-decomposition
Official repo for the paper "Weight-based Decomposition: A Case for Bilinear MLPs"
☆22 · Updated last month
Alternatives and similar repositories for bilinear-decomposition
Users interested in bilinear-decomposition are comparing it to the repositories listed below:
- Sparse and discrete interpretability tool for neural networks · ☆63 · Updated last year
- Universal Neurons in GPT2 Language Models · ☆30 · Updated last year
- Sparse Autoencoder Training Library · ☆54 · Updated 4 months ago
- ☆28 · Updated 6 months ago
- ☆20 · Updated last year
- Code for reproducing our paper "Not All Language Model Features Are Linear" · ☆77 · Updated 9 months ago
- Personal implementation of ASIF by Antonio Norelli · ☆25 · Updated last year
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" · ☆82 · Updated 10 months ago
- ☆23 · Updated 7 months ago
- ☆106 · Updated 2 years ago
- Implementation of the BatchTopK activation function for training sparse autoencoders (SAEs) · ☆45 · Updated last month
- ☆104 · Updated 6 months ago
- ☆34 · Updated 7 months ago
- ☆40 · Updated 7 months ago
- Official repository of "LiNeS: Post-training Layer Scaling Prevents Forgetting and Enhances Model Merging" · ☆30 · Updated 9 months ago
- ☆85 · Updated last year
- This repository includes code to reproduce the tables in "Loss Landscapes are All You Need: Neural Network Generalization Can Be Explaine…" · ☆38 · Updated 2 years ago
- PyTorch library for Active Fine-Tuning · ☆89 · Updated 6 months ago
- ☆53 · Updated last year
- ☆15 · Updated last year
- [ICLR 2025] Monet: Mixture of Monosemantic Experts for Transformers · ☆70 · Updated 2 months ago
- Language models scale reliably with over-training and on downstream tasks · ☆98 · Updated last year
- Efficient scaling laws and collaborative pretraining · ☆17 · Updated 7 months ago
- ☆27 · Updated 2 years ago
- Why Do We Need Weight Decay in Modern Deep Learning? [NeurIPS 2024] · ☆67 · Updated 11 months ago
- ☆50 · Updated last year
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… · ☆53 · Updated last year
- PyTorch and NNsight implementation of AtP* (Kramar et al. 2024, DeepMind) · ☆19 · Updated 7 months ago
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs · ☆90 · Updated 9 months ago
- WIP · ☆94 · Updated last year