felixbinder / introspection_self_prediction
Code for experiments on self-prediction as a way to measure introspection in LLMs
☆16 · Updated last year
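For context, a minimal sketch of the self-prediction setup the description refers to: the model first answers a prompt, then is separately asked to predict a property of its own answer without seeing it. The `query_model` helper and the second-character property are illustrative assumptions, not the repository's actual API.

```python
# Minimal sketch of one self-prediction trial (illustrative; not the repo's API).

def query_model(prompt: str) -> str:
    """Hypothetical single-turn LLM call; replace with any chat-completion client."""
    raise NotImplementedError

def self_prediction_trial(object_prompt: str) -> bool:
    """Return True if the model's self-prediction matches its actual behavior."""
    # Object level: elicit the model's actual response.
    behavior = query_model(object_prompt)

    # Hypothetical level: ask the model about a property of its own (unseen) response.
    meta_prompt = (
        f"Suppose you were asked: {object_prompt!r}\n"
        "Would the second character of your response be a vowel? Answer yes or no."
    )
    prediction = query_model(meta_prompt).strip().lower().startswith("yes")

    # Score: compare the prediction against the ground-truth property of the behavior.
    actual = len(behavior) > 1 and behavior[1].lower() in "aeiou"
    return prediction == actual
```

Accuracy on many such trials, compared against other models predicting the same behavior, is then read as a measure of introspection.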
Alternatives and similar repositories for introspection_self_prediction
Users interested in introspection_self_prediction are comparing it to the repositories listed below:
- Code repo for the model organisms and convergent directions of EM papers. ☆41 · Updated 3 months ago
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆158 · Updated 6 months ago
- Code for reproducing our paper "Low Rank Adapting Models for Sparse Autoencoder Features" ☆17 · Updated 8 months ago
- ☆51 · Updated 2 years ago
- Exploration of automated dataset selection approaches at large scales. ☆51 · Updated 9 months ago
- ☆33 · Updated 10 months ago
- ☆59 · Updated 2 years ago
- [ICLR 2025] Cheating Automatic LLM Benchmarks: Null Models Achieve High Win Rates (Oral) ☆85 · Updated last year
- ☆20 · Updated 6 months ago
- Provides the answer to "How to do patching on all available SAEs on GPT-2?"; the official repository of the implementation of the p… ☆12 · Updated 11 months ago
- Code to enable layer-level steering in LLMs using sparse autoencoders ☆28 · Updated 3 months ago
- This repository contains the code used for the experiments in the paper "Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity… ☆28 · Updated 2 months ago
- Function Vectors in Large Language Models (ICLR 2024) ☆188 · Updated 8 months ago
- ☆30 · Updated 7 months ago
- Test-time training on nearest neighbors for large language models ☆49 · Updated last year
- ☆95 · Updated last year
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆117 · Updated 10 months ago
- NeurIPS'24 - LLM Safety Landscape ☆36 · Updated 2 months ago
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆71 · Updated last year
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆98 · Updated last year
- An official implementation of "Catastrophic Failure of LLM Unlearning via Quantization" (ICLR 2025)