tdooms / bilinear-decomposition
Official repo for the paper "Weight-based Decomposition: A Case for Bilinear MLPs"
☆20 · Updated 4 months ago
Alternatives and similar repositories for bilinear-decomposition:
Users interested in bilinear-decomposition are comparing it to the repositories listed below.
- ☆18 · Updated 9 months ago
- Personal implementation of ASIF by Antonio Norelli ☆25 · Updated 11 months ago
- ☆34 · Updated last year
- Simple and scalable tools for data-driven pretraining data selection. ☆22 · Updated 2 months ago
- This is the official repository for the "Towards Vision-Language Mechanistic Interpretability: A Causal Tracing Tool for BLIP" paper acce… ☆22 · Updated last year
- This repository includes code to reproduce the tables in "Loss Landscapes are All You Need: Neural Network Generalization Can Be Explaine… ☆36 · Updated 2 years ago
- Universal Neurons in GPT2 Language Models ☆27 · Updated 10 months ago
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆73 · Updated 4 months ago
- ☆31 · Updated 3 months ago
- Sparse Autoencoder Training Library ☆48 · Updated 5 months ago
- ☆19 · Updated last week
- ☆13 · Updated last year
- Sparse and discrete interpretability tool for neural networks ☆62 · Updated last year
- Replicating and dissecting the git-re-basin project in one-click-replication Colabs ☆36 · Updated 2 years ago
- ☆26 · Updated last year
- Efficient Scaling laws and collaborative pretraining. ☆16 · Updated 2 months ago
- ☆52 · Updated 6 months ago
- ☆95 · Updated last year
- This repository contains the code used for the experiments in the paper "Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity… ☆25 · Updated last year
- ☆23 · Updated 2 months ago
- Latest Weight Averaging (NeurIPS HITY 2022) ☆30 · Updated last year
- ☆33 · Updated 7 months ago
- ☆22 · Updated 2 months ago
- ☆51 · Updated 11 months ago
- Official code for the paper: "Metadata Archaeology" ☆19 · Updated last year
- Code Release for "Broken Neural Scaling Laws" (BNSL) paper ☆58 · Updated last year
- Code for the paper "The Journey, Not the Destination: How Data Guides Diffusion Models" ☆22 · Updated last year
- ☆17 · Updated last year
- Official repository of "LiNeS: Post-training Layer Scaling Prevents Forgetting and Enhances Model Merging" ☆25 · Updated 5 months ago
- Understanding how features learned by neural networks evolve throughout training ☆34 · Updated 6 months ago