shauli-ravfogel / adv-kernel-removal
☆12 · Updated 3 years ago
Alternatives and similar repositories for adv-kernel-removal
Users interested in adv-kernel-removal are comparing it to the libraries listed below.
- ☆36 · Updated 3 years ago
- ☆51 · Updated 2 years ago
- PyTorch and NNsight implementation of AtP* (Kramar et al., 2024, DeepMind) · ☆20 · Updated 11 months ago
- ☆17 · Updated 2 years ago
- Is In-Context Learning Sufficient for Instruction Following in LLMs? [ICLR 2025] · ☆32 · Updated 11 months ago
- ☆16 · Updated last year
- PaCE: Parsimonious Concept Engineering for Large Language Models (NeurIPS 2024) · ☆41 · Updated last year
- Code repo for the model organisms and convergent directions of EM papers. · ☆41 · Updated 3 months ago
- ☆44 · Updated 2 years ago
- ☆37 · Updated last year
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods · ☆161 · Updated 6 months ago
- Code to reproduce key results accompanying "SAEs (usually) Transfer Between Base and Chat Models" · ☆13 · Updated last year
- TACL 2025: Investigating Adversarial Trigger Transfer in Large Language Models · ☆19 · Updated 4 months ago
- Official PyTorch implementation for "Meaning Representations from Trajectories in Autoregressive Models" (ICLR 2024) · ☆22 · Updated last year
- Implementation of influence-function approximations for differently sized ML models, using PyTorch · ☆16 · Updated 2 years ago
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" · ☆71 · Updated last year
- Data for "Datamodels: Predicting Predictions with Training Data" · ☆97 · Updated 2 years ago
- Official repository for the "Towards Vision-Language Mechanistic Interpretability: A Causal Tracing Tool for BLIP" paper acce… · ☆24 · Updated last year
- Gemstones: A Model Suite for Multi-Faceted Scaling Laws (NeurIPS 2025) · ☆30 · Updated 3 months ago
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs · ☆94 · Updated last year
- Evaluate interpretability methods on localizing and disentangling concepts in LLMs. · ☆57 · Updated 2 months ago
- Official implementation of "Catastrophic Failure of LLM Unlearning via Quantization" (ICLR 2025) · ☆35 · Updated 10 months ago
- ☆20 · Updated 2 months ago
- Investigating the generalization behavior of LM probes trained to predict truth labels: (1) from one annotator to another, and (2) from e… · ☆28 · Updated last year
- ☆35 · Updated 2 years ago
- Providing the answer to "How to do patching on all available SAEs on GPT-2?". Official repository of the implementation of the p… · ☆12 · Updated 11 months ago
- Package to optimize Adversarial Attacks against (Large) Language Models with Varied Objectives · ☆70 · Updated last year
- Official repository of the paper "On the Exploitability of Instruction Tuning". · ☆67 · Updated last year
- ☆112 · Updated 11 months ago
- Code for reproducing the paper "Not All Language Model Features Are Linear" · ☆83 · Updated last year