marcus-jw / Targeted-Manipulation-and-Deception-in-LLMs
Codebase for "On Targeted Manipulation and Deception when Optimizing LLMs for User Feedback". This repo implements a generative multi-turn RL environment with support for agent, user, user-feedback, transition, and veto models. It also implements KTO and expert iteration for training on user preferences.
☆21 · Updated 10 months ago
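As a rough illustration of the architecture the description implies, the training loop alternates agent and simulated-user turns, scores the finished conversation with a user-feedback model, and lets a veto model discard trajectories before they are used for KTO or expert-iteration updates. This is a minimal sketch, not the repo's actual API; every name below is hypothetical, and the transition model (which would update environment state between turns) is omitted for brevity.

```python
# Minimal sketch of the described setup; all names are hypothetical,
# not the repo's actual API.
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

Message = Tuple[str, str]  # (role, text)

@dataclass
class Trajectory:
    messages: List[Message] = field(default_factory=list)
    reward: float = 0.0   # assigned by the user-feedback model
    vetoed: bool = False  # flagged by the veto model

def rollout(
    agent: Callable[[List[Message]], str],       # policy being trained
    user: Callable[[List[Message]], str],        # simulated user model
    feedback: Callable[[List[Message]], float],  # scores the whole conversation
    veto: Callable[[List[Message]], bool],       # catches manipulative rollouts
    n_turns: int = 3,
) -> Trajectory:
    """Alternate user/agent turns, then score and veto-check the result."""
    traj = Trajectory()
    for _ in range(n_turns):
        traj.messages.append(("user", user(traj.messages)))
        traj.messages.append(("agent", agent(traj.messages)))
    traj.reward = feedback(traj.messages)
    traj.vetoed = veto(traj.messages)
    return traj

def expert_iteration_batch(trajs: List[Trajectory], threshold: float) -> List[Trajectory]:
    """Keep high-reward, non-vetoed rollouts as fine-tuning data.
    (KTO would instead label each rollout as desirable/undesirable.)"""
    return [t for t in trajs if t.reward >= threshold and not t.vetoed]
```

With stub callables (e.g., functions returning fixed strings), `rollout` runs as-is; in the actual codebase each callable would be an LLM call.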
Alternatives and similar repositories for Targeted-Manipulation-and-Deception-in-LLMs
Users interested in Targeted-Manipulation-and-Deception-in-LLMs are comparing it to the libraries listed below.
- Code repo for the model organisms and convergent directions of emergent misalignment (EM) papers. ☆33 · Updated last month
- A library for efficient patching and automatic circuit discovery. ☆78 · Updated 3 months ago
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" ☆117 · Updated last year
- ☆22 · Updated last year
- Code for "Reasoning to Learn from Latent Thoughts" ☆121 · Updated 7 months ago
- A TinyStories LM with SAEs and transcoders ☆13 · Updated 6 months ago
- ☆33 · Updated 9 months ago
- ☆99 · Updated 5 months ago
- Learning from preferences is a common paradigm for fine-tuning language models. Yet, many algorithmic design decisions come into play. Ou… ☆32 · Updated last year
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆83 · Updated 11 months ago
- ☆17 · Updated last year
- ☆127 · Updated last year
- ☆32 · Updated 8 months ago
- ☆23 · Updated last year
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆71 · Updated last year
- Sparse Autoencoder Training Library ☆55 · Updated 5 months ago
- Exploration of automated dataset selection approaches at large scales. ☆48 · Updated 7 months ago
- ☆92 · Updated last year
- Measuring the situational awareness of language models ☆38 · Updated last year
- ☆29 · Updated 3 months ago
- Code for "Echo Chamber: RL Post-training Amplifies Behaviors Learned in Pretraining" ☆23 · Updated 2 weeks ago
- Official Code for our paper: "Language Models Learn to Mislead Humans via RLHF" ☆17 · Updated last year
- Rewarded soups official implementation ☆60 · Updated 2 years ago
- ☆19 · Updated 11 months ago
- Code for most of the experiments in the paper "Understanding the Effects of RLHF on LLM Generalisation and Diversity" ☆47 · Updated last year
- Directional Preference Alignment ☆57 · Updated last year
- Reinforcing General Reasoning without Verifiers ☆91 · Updated 4 months ago
- ☆19 · Updated 4 months ago
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆136 · Updated 4 months ago
- This repository contains the code used for the experiments in the paper "Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity… ☆28 · Updated last year