marcus-jw / Targeted-Manipulation-and-Deception-in-LLMs
Codebase for "On Targeted Manipulation and Deception when Optimizing LLMs for User Feedback". This repo implements a generative multi-turn RL environment with support for agent, user, user-feedback, transition, and veto models. It also implements KTO and expert iteration for training on user preferences.
☆14 · Updated 4 months ago
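The description above implies a multi-turn loop in which an agent model converses with a simulated user while auxiliary models score and filter trajectories. A minimal, hypothetical sketch of how the four roles could fit together (all names, signatures, and the toy stand-ins below are illustrative assumptions, not this repo's actual API):

```python
# Hypothetical sketch of a multi-turn RL environment with four model roles:
# an agent converses with a simulated user, a feedback model scores the
# trajectory, and a veto model can discard it. Not the repo's real API.

def run_episode(agent, user, feedback, veto, n_turns=3):
    history = []
    for _ in range(n_turns):
        reply = agent(history)                    # agent model generates a turn
        history.append(("agent", reply))
        history.append(("user", user(history)))   # user model responds
    if veto(history):                             # veto model rejects the trajectory
        return history, None
    return history, feedback(history)             # user-feedback model scores it

# Toy stand-ins for the four model roles:
agent = lambda h: "assistant turn %d" % (len(h) // 2 + 1)
user = lambda h: "user turn %d" % (len(h) // 2 + 1)
feedback = lambda h: 1.0   # e.g. probability of a thumbs-up
veto = lambda h: False     # e.g. a harmfulness check that passes here

history, score = run_episode(agent, user, feedback, veto)
print(len(history), score)  # prints: 6 1.0
```

The transition model from the description is omitted here for brevity; in a full environment it would update any hidden user state between turns.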
Alternatives and similar repositories for Targeted-Manipulation-and-Deception-in-LLMs:
Users interested in Targeted-Manipulation-and-Deception-in-LLMs are comparing it to the libraries listed below.
- A library for efficient patching and automatic circuit discovery (☆62, updated 2 months ago)
- Official code for the paper "Language Models Learn to Mislead Humans via RLHF" (☆11, updated 6 months ago)
- Code for the paper "VinePPO: Unlocking RL Potential For LLM Reasoning Through Refined Credit Assignment" (☆150, updated 5 months ago)
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity (☆71, updated last month)
- Learning from preferences is a common paradigm for fine-tuning language models. Yet, many algorithmic design decisions come into play. Ou… (☆28, updated 11 months ago)
- Code for most of the experiments in the paper "Understanding the Effects of RLHF on LLM Generalisation and Diversity" (☆43, updated last year)
- Algebraic value editing in pretrained language models (☆63, updated last year)
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision (☆120, updated 7 months ago)
- Code for the NeurIPS 2024 ATTRIB paper "Attribution Patching Outperforms Automated Circuit Discovery" (☆30, updated 10 months ago)
- Directional Preference Alignment (☆56, updated 6 months ago)
- Dataset Reset Policy Optimization (☆30, updated last year)
- Personalized Soups: Personalized Large Language Model Alignment via Post-hoc Parameter Merging (☆99, updated last year)
- Domain-specific preference (DSP) data and customized RM fine-tuning (☆25, updated last year)
- [ICLR 2025] "Training LMs on Synthetic Edit Sequences Improves Code Synthesis" (Piterbarg, Pinto, Fergus) (☆18, updated 2 months ago)
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" (☆68, updated 9 months ago)
- Code and data repo for the CoNLL paper "Future Lens: Anticipating Subsequent Tokens from a Single Hidden State" (☆18, updated last year)
- Sparse autoencoder training library (☆47, updated 5 months ago)
- Code for reproducing the paper "Not All Language Model Features Are Linear" (☆73, updated 4 months ago)
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" (☆103, updated last year)
- Datasets from the paper "Towards Understanding Sycophancy in Language Models" (☆74, updated last year)
- Official implementation of Rewarded Soups (☆58, updated last year)
- Exploring the Limitations of Large Language Models on Multi-Hop Queries (☆24, updated last month)