☆288 · Jan 12, 2026 · Updated 3 months ago
Alternatives and similar repositories for emergent-misalignment
Users interested in emergent-misalignment are comparing it to the libraries listed below.
- Code repo for the model organisms and convergent directions of EM papers. ☆62 · Sep 22, 2025 · Updated 7 months ago
- A Python SDK for LLM fine-tuning and inference on Runpod infrastructure ☆30 · Apr 23, 2026 · Updated last week
- ☆35 · Feb 20, 2025 · Updated last year
- ☆20 · Nov 15, 2024 · Updated last year
- Investigating the generalization behavior of LM probes trained to predict truth labels: (1) from one annotator to another, and (2) from e… ☆30 · May 23, 2024 · Updated last year
- ☆18 · Apr 7, 2025 · Updated last year
- Improving Steering Vectors by Targeting Sparse Autoencoder Features ☆27 · Nov 20, 2024 · Updated last year
- Improving Alignment and Robustness with Circuit Breakers ☆261 · Sep 24, 2024 · Updated last year
- Persona Vectors: Monitoring and Controlling Character Traits in Language Models ☆409 · Apr 22, 2026 · Updated 2 weeks ago
- Co-Supervised Learning: Improving Weak-to-Strong Generalization with Hierarchical Mixture of Experts ☆16 · Feb 26, 2024 · Updated 2 years ago
- ☆43 · May 9, 2025 · Updated 11 months ago
- Comprehensive Assessment of Trustworthiness in Multimodal Foundation Models ☆30 · Mar 15, 2025 · Updated last year
- ☆213 · Oct 14, 2025 · Updated 6 months ago
- Situational Awareness Dataset ☆50 · Dec 14, 2024 · Updated last year
- Representation Engineering: A Top-Down Approach to AI Transparency ☆989 · Aug 14, 2024 · Updated last year
- Unified access to Large Language Model modules using NNsight ☆108 · Apr 16, 2026 · Updated 2 weeks ago
- Evaluate interpretability methods on localizing and disentangling concepts in LLMs. ☆58 · Oct 30, 2025 · Updated 6 months ago
- ☆28 · Sep 5, 2024 · Updated last year
- Sparse Autoencoder for Mechanistic Interpretability ☆296 · Jul 20, 2024 · Updated last year
- ☆59 · Feb 12, 2024 · Updated 2 years ago
- ☆20 · Apr 10, 2025 · Updated last year
- A benchmark for mechanistic discovery of circuits in Transformers ☆17 · Dec 15, 2024 · Updated last year
- ☆106 · Aug 8, 2024 · Updated last year
- A library for mechanistic anomaly detection ☆22 · Jan 9, 2025 · Updated last year
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity. ☆88 · Mar 7, 2025 · Updated last year
- Code for NeurIPS 2024 paper "Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models" ☆61 · Jan 15, 2025 · Updated last year
- ☆130 · Feb 10, 2026 · Updated 2 months ago
- [ICLR 2025] FLAT: LLM Unlearning via Loss Adjustment with Only Forget Data ☆14 · Feb 26, 2025 · Updated last year
- Tools for optimizing steering vectors in LLMs. ☆21 · Apr 10, 2025 · Updated last year
- Designing a Dashboard for Transparency and Control of Conversational AI, https://arxiv.org/abs/2406.07882 ☆37 · Oct 7, 2025 · Updated 6 months ago
- ☆38 · Apr 30, 2024 · Updated 2 years ago
- ☆412 · Aug 21, 2025 · Updated 8 months ago
- ☆13 · Mar 23, 2025 · Updated last year
- [NeurIPS 2024 Oral] Aligner: Efficient Alignment by Learning to Correct ☆193 · Jan 16, 2025 · Updated last year
- ☆17 · Dec 21, 2023 · Updated 2 years ago
- A gym interface for AI safety gridworlds created in pycolab. ☆18 · May 12, 2022 · Updated 3 years ago
- ☆14 · Apr 8, 2026 · Updated 3 weeks ago
- ☆21 · Mar 18, 2026 · Updated last month
- A library for mechanistic interpretability of GPT-style language models ☆3,383 · Updated this week