redwoodresearch / alignment_faking_public
☆62 · Updated last week
Alternatives and similar repositories for alignment_faking_public
Users interested in alignment_faking_public often compare it to the libraries listed below.
- A library for efficient patching and automatic circuit discovery. ☆65 · Updated last month
- Sparse Autoencoder Training Library ☆50 · Updated last month
- Open source replication of Anthropic's Crosscoders for Model Diffing ☆55 · Updated 7 months ago
- ☆120 · Updated 6 months ago
- Investigating the generalization behavior of LM probes trained to predict truth labels: (1) from one annotator to another, and (2) from e… ☆26 · Updated last year
- Steering vectors for transformer language models in Pytorch / Huggingface ☆101 · Updated 3 months ago
- ☆170 · Updated last month
- Improving Steering Vectors by Targeting Sparse Autoencoder Features ☆20 · Updated 6 months ago
- ☆31 · Updated last year
- Code for my NeurIPS 2024 ATTRIB paper titled "Attribution Patching Outperforms Automated Circuit Discovery" ☆34 · Updated last year
- ☆223 · Updated 8 months ago
- ☆43 · Updated 6 months ago
- ☆28 · Updated last year
- Redwood Research's transformer interpretability tools ☆15 · Updated 3 years ago
- LLM experiments done during SERI MATS - focusing on activation steering / interpreting activation spaces ☆93 · Updated last year
- ☆121 · Updated last year
- Attribution-based Parameter Decomposition ☆23 · Updated this week
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆180 · Updated this week
- Code and Data Repo for the CoNLL Paper -- Future Lens: Anticipating Subsequent Tokens from a Single Hidden State ☆18 · Updated last year
- ☆34 · Updated 2 weeks ago
- Steering Llama 2 with Contrastive Activation Addition ☆154 · Updated last year
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆89 · Updated last week
- ☆83 · Updated 9 months ago
- ControlArena is a suite of realistic settings, mimicking complex deployment environments, for running control evaluations. This is an alp… ☆60 · Updated this week
- Open source interpretability artefacts for R1. ☆140 · Updated last month
- ☆93 · Updated 3 months ago
- ☆10 · Updated 10 months ago
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆200 · Updated 5 months ago
- PyTorch and NNsight implementation of AtP* (Kramar et al 2024, DeepMind) ☆18 · Updated 4 months ago
- ☆44 · Updated last year