marcus-jw / Targeted-Manipulation-and-Deception-in-LLMs
Codebase for "On Targeted Manipulation and Deception when Optimizing LLMs for User Feedback". This repo implements a generative multi-turn RL environment with support for agent, user, user-feedback, transition, and veto models. It also implements KTO (Kahneman-Tversky Optimization) and expert iteration for training on user preferences.
☆15 · Updated 5 months ago
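As a rough illustration of the environment described above, here is a minimal sketch of a multi-turn feedback rollout. All names (`rollout`, `Trajectory`, the callable agent/user/feedback/veto models) are hypothetical placeholders for illustration, not this repo's actual API.

```python
# A hypothetical sketch of the multi-turn feedback loop described above;
# class and function names are illustrative, not this repo's actual API.
from dataclasses import dataclass, field

@dataclass
class Turn:
    agent_msg: str
    user_msg: str
    reward: float

@dataclass
class Trajectory:
    turns: list[Turn] = field(default_factory=list)
    vetoed: bool = False

def rollout(agent, user, feedback, veto, max_turns=5):
    """Run one conversation: the agent speaks, a simulated user replies,
    a feedback model scores each turn, and a veto model can discard
    trajectories it judges manipulative or deceptive."""
    traj, history = Trajectory(), []
    for _ in range(max_turns):
        agent_msg = agent(history)                          # agent model
        user_msg = user(history + [agent_msg])              # simulated user model
        reward = feedback(history + [agent_msg, user_msg])  # user-feedback model
        traj.turns.append(Turn(agent_msg, user_msg, reward))
        history += [agent_msg, user_msg]
        if veto(history):  # veto model rejects the whole trajectory
            traj.vetoed = True
            break
    return traj
```

In a setup like this, expert iteration would fine-tune on the highest-reward non-vetoed trajectories, while KTO would instead label whole samples as desirable or undesirable rather than relying on pairwise preferences.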
Alternatives and similar repositories for Targeted-Manipulation-and-Deception-in-LLMs
Users interested in Targeted-Manipulation-and-Deception-in-LLMs are comparing it to the libraries listed below.
- A library for efficient patching and automatic circuit discovery. ☆64 · Updated 3 weeks ago
- Official code for our paper "Language Models Learn to Mislead Humans via RLHF" ☆14 · Updated 7 months ago
- ☆82 · Updated 9 months ago
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity. ☆72 · Updated 2 months ago
- Algebraic value editing in pretrained language models ☆65 · Updated last year
- Sparse Autoencoder Training Library ☆49 · Updated last week
- Code for my NeurIPS 2024 ATTRIB paper titled "Attribution Patching Outperforms Automated Circuit Discovery" ☆31 · Updated 11 months ago
- Datasets from the paper "Towards Understanding Sycophancy in Language Models" ☆75 · Updated last year
- ☆40 · Updated last year
- Code for most of the experiments in the paper "Understanding the Effects of RLHF on LLM Generalisation and Diversity" ☆43 · Updated last year
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" ☆104 · Updated last year
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆120 · Updated 8 months ago
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆69 · Updated 10 months ago
- A repo for RLHF training and BoN (best-of-n) sampling over LLMs, with support for reward model ensembles (see the sketch after this list). ☆43 · Updated 3 months ago
- ☆14 · Updated last year
- Official implementation of Rewarded Soups ☆57 · Updated last year
- ☆23 · Updated 7 months ago
- ☆33 · Updated last week
- Directional Preference Alignment ☆57 · Updated 7 months ago
- ☆92 · Updated 3 months ago
- For the OpenMOSS Mechanistic Interpretability Team's Sparse Autoencoder (SAE) research. ☆113 · Updated this week
- Code for the paper "VinePPO: Unlocking RL Potential For LLM Reasoning Through Refined Credit Assignment" ☆155 · Updated 6 months ago
- Align your LM to express calibrated verbal statements of confidence in its long-form generations. ☆24 · Updated 11 months ago
- Personalized Soups: Personalized Large Language Model Alignment via Post-hoc Parameter Merging ☆100 · Updated last year
- ☆58 · Updated 4 months ago
- ☆24 · Updated 2 months ago
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆73 · Updated 5 months ago
- ☆31 · Updated last year
- ☆42 · Updated last year
- Maze datasets for investigating OOD behavior of ML systems ☆45 · Updated 2 weeks ago
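The RLHF/BoN entry above mentions best-of-n sampling with reward model ensembles. Below is a generic sketch of that technique under assumed callable interfaces (`generate`, the reward models, and `best_of_n` are all illustrative names), not that repo's actual code.

```python
# Generic best-of-n (BoN) sampling over a reward-model ensemble;
# `generate` and the reward models are assumed callables, for illustration.
def best_of_n(prompt, generate, reward_models, n=16):
    """Draw n candidate completions and return the one with the highest
    mean score across the reward-model ensemble; averaging is one simple
    way to aggregate ensemble scores."""
    candidates = [generate(prompt) for _ in range(n)]

    def ensemble_score(completion):
        return sum(rm(prompt, completion) for rm in reward_models) / len(reward_models)

    return max(candidates, key=ensemble_score)
```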