MaheepChaudhary / SAE-Ravel
Official implementation of the paper "Evaluating Open-Source Sparse Autoencoders on Disentangling Factual Knowledge in GPT-2 Small". It answers the question: how do you run patching on all available SAEs for GPT-2?
☆12 · Updated 6 months ago
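For orientation, here is a minimal sketch of the kind of SAE activation patching the repository addresses: cache a source prompt's SAE latents at a GPT-2 small residual-stream site, then splice them into a base run. It uses TransformerLens hooks; the `ToySAE` class, the hook site, the prompts, and the patched latent indices are illustrative assumptions, not the paper's actual setup.

```python
# Minimal sketch of activation patching through an SAE on GPT-2 small.
import torch
from transformer_lens import HookedTransformer

torch.set_grad_enabled(False)  # inference only
model = HookedTransformer.from_pretrained("gpt2")  # GPT-2 small

class ToySAE(torch.nn.Module):
    """Hypothetical stand-in for a pretrained SAE (encode -> sparse latents -> decode)."""
    def __init__(self, d_model: int, d_sae: int):
        super().__init__()
        self.enc = torch.nn.Linear(d_model, d_sae)
        self.dec = torch.nn.Linear(d_sae, d_model)

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.enc(x))

    def decode(self, z: torch.Tensor) -> torch.Tensor:
        return self.dec(z)

sae = ToySAE(model.cfg.d_model, 8 * model.cfg.d_model)
hook_name = "blocks.8.hook_resid_pre"  # one site; loop over layers/sites to cover every available SAE

# 1) Cache SAE latents from the source prompt (assumed to tokenize to the
#    same length as the base prompt).
_, source_cache = model.run_with_cache("Paris is the capital of France")
source_z = sae.encode(source_cache[hook_name])

# 2) During the base run, overwrite the chosen latents with the source's, then
#    decode back into the residual stream (reconstruction error is ignored here).
def patch_hook(resid, hook, latent_idx=slice(None)):  # slice(None) = patch all latents
    z = sae.encode(resid)
    z[..., latent_idx] = source_z[..., latent_idx]
    return sae.decode(z)

patched_logits = model.run_with_hooks(
    "Rome is the capital of Italy",
    fwd_hooks=[(hook_name, patch_hook)],
)
```

In practice you would swap `ToySAE` for a pretrained SAE (e.g. one loaded via `sae_lens`) and patch individual latents rather than all of them; see the repository and paper for the actual protocol.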
Alternatives and similar repositories for SAE-Ravel
Users who are interested in SAE-Ravel are comparing it to the libraries listed below.
- Evaluate interpretability methods on localizing and disentangling concepts in LLMs. ☆53 · Updated 10 months ago
- ☆47 · Updated last year
- ☆29 · Updated last year
- A library for efficient patching and automatic circuit discovery. ☆76 · Updated last month
- [ICLR 2025] General-purpose activation steering library ☆94 · Updated 3 weeks ago
- ☆96 · Updated last year
- Augmenting Statistical Models with Natural Language Parameters ☆27 · Updated 11 months ago
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆63 · Updated 9 months ago
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆124 · Updated 2 months ago
- Align your LM to express calibrated verbal statements of confidence in its long-form generations. ☆27 · Updated last year
- ConceptVectors Benchmark and Code for the paper "Intrinsic Evaluation of Unlearning Using Parametric Knowledge Traces" ☆36 · Updated 6 months ago
- Forcing Diffuse Distributions out of Language Models ☆17 · Updated 11 months ago
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity. ☆76 · Updated 5 months ago
- AI Logging for Interpretability and Explainability🔬 ☆125 · Updated last year
- Code for my NeurIPS 2024 ATTRIB paper titled "Attribution Patching Outperforms Automated Circuit Discovery" ☆40 · Updated last year
- ☆53 · Updated 2 years ago
- ☆47 · Updated last month
- [NeurIPS'23] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors ☆79 · Updated 8 months ago
- Sparse probing paper full code. ☆59 · Updated last year
- ☆90 · Updated last year
- Repo accompanying our paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers". ☆78 · Updated last year
- LoFiT: Localized Fine-tuning on LLM Representations ☆40 · Updated 7 months ago
- The accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories". Mor Geva, Roei Schuster, Jonathan Berant, and Omer Levy. ☆94 · Updated 3 years ago
- [NeurIPS 2023 D&B Track] Code and data for the paper "Revisiting Out-of-distribution Robustness in NLP: Benchmarks, Analysis, and LLMs Evaluations" ☆34 · Updated 2 years ago
- Long Is More for Alignment: A Simple but Tough-to-Beat Baseline for Instruction Fine-Tuning [ICML 2024] ☆18 · Updated last year
- ☆63 · Updated 2 years ago
- Exploring the Limitations of Large Language Models on Multi-Hop Queries ☆27 · Updated 5 months ago
- ☆15 · Updated this week
- ☆15 · Updated last year
- Restore safety in fine-tuned language models through task arithmetic ☆28 · Updated last year