safety-research / open-source-alignment-faking
Open Source Replication of Anthropic's Alignment Faking Paper
☆44 · Updated 3 months ago
Alternatives and similar repositories for open-source-alignment-faking
Users interested in open-source-alignment-faking are comparing it to the repositories listed below.
- ☆55 · Updated 9 months ago
- ☆134 · Updated 3 months ago
- Open source interpretability artefacts for R1. ☆154 · Updated 2 months ago
- ☆24 · Updated 8 months ago
- Contains random samples referenced in the paper "Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training". ☆109 · Updated last year
- A reading list of relevant papers and projects on foundation model annotation ☆27 · Updated 4 months ago
- ☆171 · Updated 4 months ago
- ☆70 · Updated this week
- Official Repo for InSTA: Towards Internet-Scale Training For Agents ☆50 · Updated last week
- ☆92 · Updated 2 months ago
- Systematic evaluation framework that automatically rates overthinking behavior in large language models. ☆91 · Updated 2 months ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆173 · Updated 4 months ago
- Functional Benchmarks and the Reasoning Gap ☆88 · Updated 9 months ago
- Synthetic data generation and benchmark implementation for "Episodic Memories Generation and Evaluation Benchmark for Large Language Mode…" ☆48 · Updated 3 months ago
- ☆34 · Updated 8 months ago
- Code for "Utility Engineering: Analyzing and Controlling Emergent Value Systems in AIs" ☆52 · Updated 4 months ago
- ☆97 · Updated 2 weeks ago
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆71 · Updated last year
- Red-Teaming Language Models with DSPy ☆202 · Updated 5 months ago
- Investigating the generalization behavior of LM probes trained to predict truth labels: (1) from one annotator to another, and (2) from e… ☆28 · Updated last year
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods