safety-research / open-source-alignment-faking
Open Source Replication of Anthropic's Alignment Faking Paper
☆52 · Updated 9 months ago
Alternatives and similar repositories for open-source-alignment-faking
Users interested in open-source-alignment-faking are comparing it to the libraries listed below.
- ☆80 · Updated 3 months ago
- Open source interpretability artefacts for R1. ☆165 · Updated 8 months ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆190 · Updated 10 months ago
- Leveraging Base Language Models for Few-Shot Synthetic Data Generation ☆40 · Updated 2 months ago
- ☆20 · Updated 6 months ago
- [ACL 2024] Do Large Language Models Latently Perform Multi-Hop Reasoning? ☆85 · Updated 9 months ago
- Systematic evaluation framework that automatically rates overthinking behavior in large language models. ☆94 · Updated 7 months ago
- Synthetic data generation and benchmark implementation for "Episodic Memories Generation and Evaluation Benchmark for Large Language Models" ☆62 · Updated 3 months ago
- Source code for the collaborative reasoner research project at Meta FAIR. ☆111 · Updated 8 months ago
- ☆92 · Updated 3 weeks ago
- UQ: Assessing Language Models on Unsolved Questions ☆29 · Updated 4 months ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆62 · Updated last year
- Official Repo for InSTA: Towards Internet-Scale Training For Agents ☆55 · Updated 5 months ago
- ☆55 · Updated last year
- ☆150 · Updated 4 months ago
- ☆33 · Updated last year
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆174 · Updated 11 months ago
- Official Code Release for "Training a Generally Curious Agent" ☆43 · Updated 7 months ago
- Verifiers for LLM Reinforcement Learning ☆80 · Updated 8 months ago
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆161 · Updated 6 months ago
- ☆124 · Updated 10 months ago
- Official codebase for "Quantile Reward Policy Optimization: Alignment with Pointwise Regression and Exact Partition Functions" (Matrenok …) ☆29 · Updated last month
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆71 · Updated last year
- ☆63 · Updated 6 months ago
- ☆127 · Updated 2 months ago
- Replicating O1 inference-time scaling laws ☆91 · Updated last year
- ☆64 · Updated this week
- ☆136 · Updated 9 months ago
- ☆26 · Updated last year
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs ☆94 · Updated last year