OATML / semantic-entropy-probes
☆49 · Updated last year
Alternatives and similar repositories for semantic-entropy-probes
Users interested in semantic-entropy-probes are comparing it to the libraries listed below.
- ☆57 · Updated 2 years ago
- ☆103 · Updated last year
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods · ☆158 · Updated 6 months ago
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering · ☆68 · Updated last year
- ☆52 · Updated 8 months ago
- ☆103 · Updated 2 years ago
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity · ☆85 · Updated 9 months ago
- Function Vectors in Large Language Models (ICLR 2024) · ☆188 · Updated 8 months ago
- ☆40 · Updated last year
- Code repo for ICLR 2024 paper "Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs" · ☆137 · Updated last year
- Steering Llama 2 with Contrastive Activation Addition · ☆201 · Updated last year
- ☆181 · Updated last year
- ☆95 · Updated last year
- [NeurIPS 2024] Knowledge Circuits in Pretrained Transformers · ☆160 · Updated last month
- [ICLR 2025] General-purpose activation steering library · ☆130 · Updated 3 months ago
- AI Logging for Interpretability and Explainability🔬 · ☆135 · Updated last year
- [NeurIPS'23] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors · ☆82 · Updated last year
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) · ☆62 · Updated last year
- Official Code Repository for LM-Steer Paper: "Word Embeddings Are Steers for Language Models" (ACL 2024 Outstanding Paper Award) · ☆133 · Updated 5 months ago
- Providing the answer to "How to do patching on all available SAEs on GPT-2?". It is an official repository of the implementation of the p… · ☆12 · Updated 10 months ago
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) · ☆126 · Updated last year
- Repo accompanying our paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers" · ☆80 · Updated last year
- PASTA: Post-hoc Attention Steering for LLMs · ☆132 · Updated last year
- ☆51 · Updated 2 years ago
- ☆29 · Updated last year
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" · ☆123 · Updated last year
- Evaluate interpretability methods on localizing and disentangling concepts in LLMs · ☆57 · Updated last month
- LLM experiments done during SERI MATS, focusing on activation steering / interpreting activation spaces · ☆100 · Updated 2 years ago
- Code for In-context Vectors: Making In Context Learning More Effective and Controllable Through Latent Space Steering · ☆196 · Updated 10 months ago
- ☆65 · Updated 9 months ago