sciai-lab / Truth_is_Universal
☆25 · Updated 7 months ago
Alternatives and similar repositories for Truth_is_Universal
Users interested in Truth_is_Universal are comparing it to the repositories listed below.
- ☆51 · Updated last year
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆60 · Updated 7 months ago
- Steering Llama 2 with Contrastive Activation Addition ☆158 · Updated last year
- ☆37 · Updated last month
- ☆44 · Updated 3 months ago
- ☆85 · Updated 10 months ago
- General-purpose activation steering library ☆78 · Updated last month
- ☆101 · Updated 3 weeks ago
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆72 · Updated 3 months ago
- Code for my NeurIPS 2024 ATTRIB paper titled "Attribution Patching Outperforms Automated Circuit Discovery" ☆35 · Updated last year
- Evaluate interpretability methods on localizing and disentangling concepts in LLMs ☆47 · Updated 8 months ago
- A library for efficient patching and automatic circuit discovery ☆67 · Updated 2 months ago
- Conformal Language Modeling ☆30 · Updated last year
- Providing the answer to "How to do patching on all available SAEs on GPT-2?". It is an official repository of the implementation of the p… ☆11 · Updated 5 months ago
- ☆44 · Updated last year
- CEBaB: Estimating the Causal Effects of Real-World Concepts on NLP Model Behavior ☆12 · Updated 2 years ago
- Sparse probing paper full code ☆58 · Updated last year
- A resource repository for representation engineering in large language models ☆126 · Updated 7 months ago
- LoFiT: Localized Fine-tuning on LLM Representations ☆39 · Updated 5 months ago
- Steering vectors for transformer language models in Pytorch / Huggingface ☆108 · Updated 4 months ago
- Code for EMNLP 2024 paper: Neuron-Level Knowledge Attribution in Large Language Models ☆35 · Updated 7 months ago
- LLM experiments done during SERI MATS, focusing on activation steering / interpreting activation spaces ☆94 · Updated last year
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆95 · Updated 3 weeks ago
- Materials for EACL 2024 tutorial: Transformer-specific Interpretability ☆55 · Updated last year
- ☆29 · Updated last year
- ☆95 · Updated 4 months ago
- ☆131 · Updated 7 months ago
- PaCE: Parsimonious Concept Engineering for Large Language Models (NeurIPS 2024) ☆37 · Updated 7 months ago
- ☆95 · Updated last year
- [NeurIPS'23] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors ☆77 · Updated 6 months ago