AIRI-Institute / SAE-Reasoning
☆92 · Updated 10 months ago
Alternatives and similar repositories for SAE-Reasoning
Users interested in SAE-Reasoning are comparing it to the libraries listed below.
- Function Vectors in Large Language Models (ICLR 2024) ☆190 · Updated 9 months ago
- Performant framework for training, analyzing, and visualizing Sparse Autoencoders (SAEs) and their frontier variants ☆175 · Updated last week
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆163 · Updated 7 months ago
- FeatureAlignment = Alignment + Mechanistic Interpretability ☆34 · Updated 10 months ago
- ☆53 · Updated 9 months ago
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆238 · Updated this week
- Open source replication of Anthropic's Crosscoders for Model Diffing ☆63 · Updated last year
- Steering Llama 2 with Contrastive Activation Addition ☆207 · Updated last year
- ☆195 · Updated last year
- [ICLR 2025] General-purpose activation steering library ☆138 · Updated 4 months ago
- ☆103 · Updated 2 years ago
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆85 · Updated 10 months ago
- ☆69 · Updated 10 months ago
- Repo accompanying our paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers" ☆80 · Updated last year
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" ☆124 · Updated last year
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆70 · Updated 2 weeks ago
- ☆205 · Updated 3 months ago
- ☆58 · Updated 2 years ago
- ☆99 · Updated last year
- [ACL'25 Oral] What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective ☆75 · Updated 7 months ago
- A library for efficient patching and automatic circuit discovery ☆86 · Updated last month
- Evaluate interpretability methods on localizing and disentangling concepts in LLMs ☆57 · Updated 3 months ago
- [ICLR 2025] Monet: Mixture of Monosemantic Experts for Transformers ☆75 · Updated 7 months ago
- This repository contains the code used for the experiments in the paper "Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity… ☆30 · Updated 3 months ago
- ☆142 · Updated last month
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆83 · Updated last year
- [NeurIPS 2024 Spotlight] Code and data for the paper "Finding Transformer Circuits with Edge Pruning" ☆64 · Updated 5 months ago
- ☆46 · Updated last year
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆127 · Updated last year
- Collection of reverse engineering work on large models ☆36 · Updated last year