safety-research / open-source-alignment-faking
Open Source Replication of Anthropic's Alignment Faking Paper
☆54 · Updated 9 months ago
Alternatives and similar repositories for open-source-alignment-faking
Users interested in open-source-alignment-faking are comparing it to the repositories listed below.
- Open source interpretability artefacts for R1. ☆169 · Updated 9 months ago
- Systematic evaluation framework that automatically rates overthinking behavior in large language models. ☆96 · Updated 8 months ago
- ☆22 · Updated 7 months ago
- Official repo for Learning to Reason for Long-Form Story Generation ☆74 · Updated 9 months ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆189 · Updated 10 months ago
- [ACL 2024] Do Large Language Models Latently Perform Multi-Hop Reasoning? ☆88 · Updated 10 months ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆61 · Updated last year
- Source code for the collaborative reasoner research project at Meta FAIR. ☆112 · Updated 9 months ago
- The code for the paper "EPO: Entropy-regularized Policy Optimization for LLM Agents Reinforcement Learning" ☆36 · Updated 3 months ago
- ☆133 · Updated 3 months ago
- Official repo for InSTA: Towards Internet-Scale Training For Agents ☆55 · Updated 6 months ago
- ☆216 · Updated last week
- ☆61 · Updated 7 months ago
- Learning to Retrieve by Trying — source code for Grounding by Trying: LLMs with Reinforcement Learning-Enhanced Retrieval ☆51 · Updated last year
- UQ: Assessing Language Models on Unsolved Questions ☆30 · Updated 5 months ago
- Official codebase for "Quantile Reward Policy Optimization: Alignment with Pointwise Regression and Exact Partition Functions" (Matrenok … ☆30 · Updated last month
- ☆21 · Updated 6 months ago
- ☆33 · Updated last year
- ☆91 · Updated last month
- ☆34 · Updated 11 months ago
- ☆34 · Updated last year
- ☆152 · Updated 4 months ago
- ☆29 · Updated 2 months ago
- ☆123 · Updated 11 months ago
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… ☆127 · Updated 3 months ago
- ☆136 · Updated 10 months ago
- Learning to route instances for Human vs. AI Feedback (ACL Main '25) ☆26 · Updated 6 months ago
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs ☆94 · Updated last year
- Functional Benchmarks and the Reasoning Gap ☆89 · Updated last year
- Code repo for the model organisms and convergent directions of EM papers. ☆45 · Updated 4 months ago