mlepori1 / NeuroSurgeon
NeuroSurgeon is a package that enables researchers to uncover and manipulate subnetworks within models in Huggingface Transformers.
☆41 · Updated 2 months ago
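For context, the underlying idea (locating and ablating a subnetwork inside a standard Hugging Face model) can be sketched without NeuroSurgeon's own API. The snippet below is a minimal, hypothetical illustration using only `transformers` and `torch`; the random mask is arbitrary and the code does not reflect NeuroSurgeon's actual interface, which learns masks rather than sampling them.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    baseline = model(**inputs).logits[0, -1]

# Hypothetical "subnetwork" intervention: zero out a random half of one MLP
# projection's weights and compare next-token predictions before and after.
target = model.transformer.h[0].mlp.c_proj.weight
mask = (torch.rand_like(target) > 0.5).float()
with torch.no_grad():
    target.mul_(mask)
    ablated = model(**inputs).logits[0, -1]

print("Top token before masking:", tokenizer.decode([baseline.argmax().item()]))
print("Top token after masking: ", tokenizer.decode([ablated.argmax().item()]))
```

In practice, tools in this space replace the random mask with one learned against a behavioral objective, so that the surviving weights form a candidate circuit for the behavior under study.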
Alternatives and similar repositories for NeuroSurgeon:
Users interested in NeuroSurgeon are comparing it to the repositories listed below.
- A library for efficient patching and automatic circuit discovery. ☆64 · Updated 2 weeks ago
- ☆93 · Updated 3 weeks ago
- ☆92 · Updated 2 months ago
- Evaluate interpretability methods on localizing and disentangling concepts in LLMs. ☆46 · Updated 7 months ago
- This repository contains the code used for the experiments in the paper "Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity… ☆25 · Updated last year
- Answers the question "How to do patching on all available SAEs on GPT-2?"; the official repository of the implementation of the p… ☆11 · Updated 3 months ago
- Code for my NeurIPS 2024 ATTRIB paper titled "Attribution Patching Outperforms Automated Circuit Discovery" ☆31 · Updated 11 months ago
- Sparse probing paper full code. ☆56 · Updated last year
- ☆31 · Updated last week
- ☆66 · Updated 3 years ago
- ☆223 · Updated 7 months ago
- Sparse Autoencoder Training Library ☆49 · Updated last week
- ☆42 · Updated last year
- Materials for EACL 2024 tutorial: Transformer-specific Interpretability ☆50 · Updated last year
- ☆121 · Updated last year
- ☆70 · Updated 2 months ago
- ☆114 · Updated 9 months ago
- ☆62 · Updated last year
- ☆64 · Updated 2 years ago
- Steering vectors for transformer language models in PyTorch / Huggingface ☆99 · Updated 2 months ago
- Data for "Datamodels: Predicting Predictions with Training Data" ☆97 · Updated last year
- How do transformer LMs encode relations? ☆48 · Updated last year
- ☆30 · Updated 10 months ago
- Mechanistic Interpretability for Transformer Models ☆50 · Updated 2 years ago
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity. ☆72 · Updated 2 months ago
- Attribution-based Parameter Decomposition ☆20 · Updated this week
- ☆93 · Updated last year
- The accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories". Mor Geva, Roei Schuster, Jonathan Berant, and Omer Le… ☆91 · Updated 3 years ago
- Steering Llama 2 with Contrastive Activation Addition ☆147 · Updated 11 months ago
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆172 · Updated this week