redwoodresearch / Text-Steganography-Benchmark
Code for "Preventing Language Models From Hiding Their Reasoning", which evaluates defenses against LLM steganography.
☆13 · Updated 9 months ago
Related projects
Alternatives and complementary repositories for Text-Steganography-Benchmark
- This is the official repository for the "Towards Vision-Language Mechanistic Interpretability: A Causal Tracing Tool for BLIP" paper acce… ☆17 · Updated 6 months ago
- Measuring the situational awareness of language models ☆33 · Updated 8 months ago
- ☆49 · Updated 6 months ago
- Package to optimize Adversarial Attacks against (Large) Language Models with Varied Objectives ☆63 · Updated 8 months ago
- Situational Awareness Dataset ☆19 · Updated 2 weeks ago
- ☆28 · Updated last year
- Script for processing OpenAI's PRM800K process supervision dataset into an Alpaca-style instruction-response format ☆25 · Updated last year
- Röttger et al. (2023): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆61 · Updated 10 months ago
- ☆32 · Updated last year
- Flow of Reasoning: Training LLMs for Divergent Problem Solving with Minimal Examples ☆38 · Updated last month
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆61 · Updated 4 months ago
- Adversarial Attacks on GPT-4 via Simple Random Search [Dec 2023] ☆42 · Updated 6 months ago
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" ☆83 · Updated 7 months ago
- Repo for: When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment ☆38 · Updated last year
- PAL: Proxy-Guided Black-Box Attack on Large Language Models ☆46 · Updated 2 months ago
- Replicating O1 inference-time scaling laws ☆48 · Updated last month
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆60 · Updated last month
- ☆44 · Updated last month
- WMDP is an LLM proxy benchmark for hazardous knowledge in bio, cyber, and chemical security. We also release code for RMU, an unlearning m… ☆78 · Updated 6 months ago
- ☆24 · Updated last month
- Weak-to-Strong Jailbreaking on Large Language Models ☆64 · Updated 8 months ago
- [ICLR'24 Spotlight] A language model (LM)-based emulation framework for identifying the risks of LM agents with tool use ☆113 · Updated 7 months ago
- Tree prompting: easy-to-use scikit-learn interface for improved prompting. ☆33 · Updated last year
- A toolkit for describing model features and intervening on those features to steer behavior. ☆69 · Updated this week
- Investigating the generalization behavior of LM probes trained to predict truth labels: (1) from one annotator to another, and (2) from e… ☆25 · Updated 5 months ago
- This repository contains the code and data for the paper "SelfIE: Self-Interpretation of Large Language Model Embeddings" by Haozhe Chen,… ☆37 · Updated 7 months ago
- Codebase for Inference-Time Policy Adapters ☆21 · Updated last year
- This is the official repository for "Safer-Instruct: Aligning Language Models with Automated Preference Data" ☆17 · Updated 8 months ago
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆48 · Updated 7 months ago
- Unofficial Implementation of Chain-of-Thought Reasoning Without Prompting ☆16 · Updated 7 months ago