milesaturpin / cot-unfaithfulness
☆48 · Updated last year
Alternatives and similar repositories for cot-unfaithfulness
Users interested in cot-unfaithfulness are comparing it to the repositories listed below.
- Answers "How to do patching on all available SAEs on GPT-2?"; the official repository for the implementation of the p… ☆12 · Updated 8 months ago
- [ICLR 2025] Official Repository for "Tamper-Resistant Safeguards for Open-Weight LLMs" ☆61 · Updated 3 months ago
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity. ☆80 · Updated 6 months ago
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆115 · Updated 7 months ago
- ☆29 · Updated last year
- ☆57 · Updated 2 years ago
- Evaluate interpretability methods on localizing and disentangling concepts in LLMs. ☆54 · Updated 11 months ago
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆134 · Updated 3 months ago
- [ICLR 2025] General-purpose activation steering library ☆107 · Updated 2 weeks ago
- ☆37 · Updated 9 months ago
- Forcing Diffuse Distributions out of Language Models ☆17 · Updated last year
- [EMNLP 2025 Main] ConceptVectors Benchmark and Code for the paper "Intrinsic Evaluation of Unlearning Using Parametric Knowledge Traces" ☆35 · Updated last month
- Align your LM to express calibrated verbal statements of confidence in its long-form generations. ☆27 · Updated last year
- A library for efficient patching and automatic circuit discovery. ☆77 · Updated 2 months ago
- ☆61 · Updated last year
- ☆41 · Updated last year
- Repo accompanying our paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers". ☆80 · Updated last year
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆63 · Updated 10 months ago
- Algebraic value editing in pretrained language models ☆66 · Updated last year
- Long Is More for Alignment: A Simple but Tough-to-Beat Baseline for Instruction Fine-Tuning [ICML 2024] ☆18 · Updated last year
- ☆97 · Updated last year
- LLM experiments done during SERI MATS, focusing on activation steering / interpreting activation spaces ☆97 · Updated 2 years ago
- ☆45 · Updated last year
- Steering Llama 2 with Contrastive Activation Addition (see the sketch after this list) ☆182 · Updated last year
- DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models (ICLR 2024) ☆77 · Updated last year
- AI Logging for Interpretability and Explainability 🔬 ☆128 · Updated last year
- ☆63 · Updated 6 months ago
- The official repository of 'Unnatural Language Are Not Bugs but Features for LLMs' ☆22 · Updated 4 months ago
- Augmenting Statistical Models with Natural Language Parameters ☆27 · Updated last year
- ☆31 · Updated 2 years ago
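
Several of the entries above (the general-purpose activation steering library, the SERI MATS experiments, and the Contrastive Activation Addition repo) share the same core idea: compute a direction from a pair of contrasting prompts and add it to the model's residual stream at inference time. The following is a minimal sketch of that idea, not code from any listed repository; the model (a GPT-2 stand-in rather than Llama 2), layer index, prompt pair, and scale factor are all illustrative assumptions.

```python
# Minimal sketch of contrastive activation addition (CAA)-style steering.
# All concrete values (model, LAYER, prompts, scale) are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; the CAA paper targets Llama 2
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

LAYER = 6  # transformer block whose output we steer (assumption)

def block_output(prompt: str) -> torch.Tensor:
    """Residual-stream activation at the output of block LAYER, last token."""
    ids = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids, output_hidden_states=True)
    # hidden_states[0] is the embedding layer, so block LAYER's output
    # is hidden_states[LAYER + 1].
    return out.hidden_states[LAYER + 1][0, -1]

# Contrastive pair: one prompt exhibiting the target behavior, one not
# (both prompts are invented examples).
steer_vec = block_output("I completely agree with everything you said.") \
          - block_output("I disagree; here is the evidence against that.")

def add_steering(module, inputs, output):
    """Forward hook: add the scaled steering vector to the block's output.
    Fires on every forward pass, so steering applies to all positions."""
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + 4.0 * steer_vec  # scale is a tunable assumption
    return (hidden, *output[1:]) if isinstance(output, tuple) else hidden

handle = model.transformer.h[LAYER].register_forward_hook(add_steering)
try:
    ids = tok("My honest opinion is", return_tensors="pt")
    out = model.generate(**ids, max_new_tokens=30,
                         pad_token_id=tok.eos_token_id)
    print(tok.decode(out[0]))
finally:
    handle.remove()  # always detach the hook when done
```

The hook-based approach keeps the model weights untouched; removing the hook restores the original behavior, which is why these repos typically expose steering as a context manager or toggle rather than a permanent edit.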