science-of-finetuning / sparsity-artifacts-crosscoders
Code for the "Overcoming Sparsity Artifacts in Crosscoders to Interpret Chat-Tuning" paper.
☆11 · Updated last month
Alternatives and similar repositories for sparsity-artifacts-crosscoders
Users who are interested in sparsity-artifacts-crosscoders are comparing it to the repositories listed below.
- This repository contains the code used for the experiments in the paper "Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity… ☆28 · Updated last year
- Sparse Autoencoder Training Library ☆54 · Updated 4 months ago
- Code Release for "Broken Neural Scaling Laws" (BNSL) paper ☆59 · Updated last year
- Code repo for the model organisms and convergent directions of EM papers. ☆28 · Updated 2 months ago
- ☆15 · Updated last year
- Code for my NeurIPS 2024 ATTRIB paper titled "Attribution Patching Outperforms Automated Circuit Discovery" ☆41 · Updated last year
- ☆107 · Updated 7 months ago
- Self-Supervised Alignment with Mutual Information ☆21 · Updated last year
- ☆23 · Updated 7 months ago
- ☆84 · Updated last year
- A library for efficient patching and automatic circuit discovery. ☆76 · Updated 2 months ago
- Sparse and discrete interpretability tool for neural networks ☆63 · Updated last year
- ☆42 · Updated 8 months ago
- Codebase for "On Targeted Manipulation and Deception when Optimizing LLMs for User Feedback". This repo implements a generative multi-tur… ☆20 · Updated 9 months ago
- ☆28 · Updated 7 months ago
- ☆36 · Updated 2 years ago
- ☆100 · Updated last year
- Universal Neurons in GPT2 Language Models ☆30 · Updated last year
- Code for reproducing our paper "Low Rank Adapting Models for Sparse Autoencoder Features" ☆14 · Updated 5 months ago
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆78 · Updated 9 months ago
- Official repository for our paper, Transformers Learn Higher-Order Optimization Methods for In-Context Learning: A Study with Linear Mode… ☆18 · Updated 10 months ago
- ☆45 · Updated last year
- Code for the paper "Data Feedback Loops: Model-driven Amplification of Dataset Biases" ☆16 · Updated 3 years ago
- Why Do We Need Weight Decay in Modern Deep Learning? [NeurIPS 2024] ☆67 · Updated 11 months ago
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆83 · Updated 10 months ago
- The official repository for our paper "The Dual Form of Neural Networks Revisited: Connecting Test Time Predictions to Training Patterns … ☆16 · Updated 3 months ago
- Code and Data Repo for the CoNLL Paper -- Future Lens: Anticipating Subsequent Tokens from a Single Hidden State ☆19 · Updated last year
- Evaluate interpretability methods on localizing and disentangling concepts in LLMs. ☆54 · Updated 11 months ago
- Official repository of paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Updated last year
- ☆23 · Updated 9 months ago