tdooms / bilinear-decomposition
Official repo for the paper "Weight-based Decomposition: A Case for Bilinear MLPs"
☆21 · Updated 6 months ago
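For context, a bilinear MLP in the sense of the paper replaces the usual elementwise nonlinearity with an elementwise product of two linear projections (a gated linear unit with the activation removed), which makes the layer exactly quadratic in its input and amenable to weight-based decomposition. Below is a minimal PyTorch sketch of such a layer; the names (`BilinearMLP`, `w1`, `w2`, `w3`, `d_model`, `d_hidden`) are illustrative placeholders, not this repo's actual API.

```python
import torch
import torch.nn as nn

class BilinearMLP(nn.Module):
    """Illustrative bilinear MLP layer: out = W3((W1 x) * (W2 x)).

    All names here are hypothetical placeholders; see the repo for the
    authors' actual implementation.
    """

    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.w1 = nn.Linear(d_model, d_hidden, bias=False)
        self.w2 = nn.Linear(d_model, d_hidden, bias=False)
        self.w3 = nn.Linear(d_hidden, d_model, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Elementwise product of two linear projections: no activation
        # function, so the layer is a pure bilinear (quadratic) form in x.
        return self.w3(self.w1(x) * self.w2(x))

layer = BilinearMLP(d_model=64, d_hidden=256)
print(layer(torch.randn(2, 64)).shape)  # torch.Size([2, 64])
```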
Alternatives and similar repositories for bilinear-decomposition
Users interested in bilinear-decomposition are comparing it to the repositories listed below.
- ☆25 · Updated 3 months ago
- Sparse Autoencoder Training Library · ☆52 · Updated last month
- ☆19 · Updated 10 months ago
- Replicating and dissecting the git-re-basin project in one-click-replication Colabs · ☆36 · Updated 2 years ago
- Sparse and discrete interpretability tool for neural networks · ☆63 · Updated last year
- ☆33 · Updated 4 months ago
- Attribution-based Parameter Decomposition · ☆23 · Updated last week
- This repository includes code to reproduce the tables in "Loss Landscapes are All You Need: Neural Network Generalization Can Be Explaine… · ☆37 · Updated 2 years ago
- This is the official repository for the "Towards Vision-Language Mechanistic Interpretability: A Causal Tracing Tool for BLIP" paper acce… · ☆22 · Updated last year
- Simple and scalable tools for data-driven pretraining data selection. · ☆24 · Updated 3 months ago
- Universal Neurons in GPT2 Language Models · ☆29 · Updated last year
- ☆23 · Updated 4 months ago
- Latest Weight Averaging (NeurIPS HITY 2022) · ☆30 · Updated last year
- ☆52 · Updated last year
- This repository contains the code used for the experiments in the paper "Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity… · ☆26 · Updated last year
- Implementation of Influence Function approximations for differently sized ML models, using PyTorch · ☆15 · Updated last year
- Efficient scaling laws and collaborative pretraining. · ☆16 · Updated 4 months ago
- Code for reproducing our paper "Not All Language Model Features Are Linear" · ☆75 · Updated 6 months ago
- ☆32 · Updated 4 months ago
- ☆53 · Updated 8 months ago
- ☆36 · Updated 2 years ago
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" · ☆73 · Updated 7 months ago
- ☆46 · Updated 6 months ago
- Personal implementation of ASIF by Antonio Norelli · ☆25 · Updated last year
- ☆14 · Updated last year
- ☆26 · Updated 2 years ago
- Official repository of "LiNeS: Post-training Layer Scaling Prevents Forgetting and Enhances Model Merging" · ☆26 · Updated 7 months ago
- Why Do We Need Weight Decay in Modern Deep Learning? [NeurIPS 2024] · ☆66 · Updated 8 months ago
- PaCE: Parsimonious Concept Engineering for Large Language Models (NeurIPS 2024) · ☆35 · Updated 7 months ago
- Stick-breaking attention · ☆56 · Updated 2 months ago