efarrell1 / train_sparse_autoencoder
Trains Sparse Autoencoders based on outputs from language models
☆11 · Updated last year
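For context, the core technique behind this repository is training an overcomplete autoencoder on cached language-model activations with a sparsity penalty on the hidden code. A minimal sketch in PyTorch, assuming a ReLU encoder and an L1 penalty; the dimensions, `l1_coeff`, and the random `activations` tensor are illustrative placeholders, not this repository's actual code:

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Overcomplete autoencoder with a non-negative hidden code."""
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, x: torch.Tensor):
        # ReLU keeps hidden activations non-negative; the L1 term below
        # pushes most of them to exactly zero, yielding sparse features.
        z = torch.relu(self.encoder(x))
        return self.decoder(z), z

# Hypothetical shapes: d_model=768 (e.g. GPT-2 residual stream), 8x expansion.
sae = SparseAutoencoder(d_model=768, d_hidden=768 * 8)
opt = torch.optim.Adam(sae.parameters(), lr=1e-4)
l1_coeff = 1e-3  # sparsity strength; a tunable hyperparameter

# Stand-in for a batch of activations cached from a language model.
activations = torch.randn(4096, 768)

# One training step: reconstruction error plus L1 sparsity penalty.
recon, z = sae(activations)
loss = ((recon - activations) ** 2).mean() + l1_coeff * z.abs().sum(dim=-1).mean()
opt.zero_grad()
loss.backward()
opt.step()
```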
Alternatives and similar repositories for train_sparse_autoencoder
Users interested in train_sparse_autoencoder are comparing it to the repositories listed below.
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆141 · Updated 4 months ago
- ☆51 · Updated 2 years ago
- ☆63 · Updated 8 months ago
- [EMNLP 2025 Main] ConceptVectors Benchmark and Code for the paper "Intrinsic Evaluation of Unlearning Using Parametric Knowledge Traces" ☆38 · Updated 3 months ago
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity. ☆84 · Updated 8 months ago
- ☆57 · Updated 2 years ago
- Confidence Regulation Neurons in Language Models (NeurIPS 2024) ☆14 · Updated 9 months ago
- Answers the question "How to do patching on all available SAEs on GPT-2?". Official repository of the implementation of the p… ☆12 · Updated 9 months ago
- LoFiT: Localized Fine-tuning on LLM Representations ☆43 · Updated 10 months ago
- PaCE: Parsimonious Concept Engineering for Large Language Models (NeurIPS 2024) ☆40 · Updated last year
- [NeurIPS'23] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors ☆82 · Updated 10 months ago
- Evaluate interpretability methods on localizing and disentangling concepts in LLMs. ☆56 · Updated 3 weeks ago
- [ICLR 2025] General-purpose activation steering library ☆119 · Updated 2 months ago
- Function Vectors in Large Language Models (ICLR 2024) ☆184 · Updated 7 months ago
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆67 · Updated 11 months ago
- Code repo for the model organisms and convergent directions of EM papers. ☆36 · Updated last month
- Code for my NeurIPS 2024 ATTRIB paper titled "Attribution Patching Outperforms Automated Circuit Discovery" ☆42 · Updated last year
- ☆101 · Updated 2 years ago
- ☆136 · Updated this week
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆62 · Updated last year
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models"