goodfire-ai / r1-interpretability
Open source interpretability artefacts for R1.
☆158 · Updated 4 months ago
Alternatives and similar repositories for r1-interpretability
Users interested in r1-interpretability are comparing it to the libraries listed below:
- A Collection of Competitive Text-Based Games for Language Model Evaluation and Reinforcement Learning ☆245 · Updated last week
- ☆139 · Updated last week
- A toolkit for describing model features and intervening on those features to steer behavior. ☆197 · Updated 9 months ago
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆206 · Updated last week
- rl from zero pretrain, can it be done? yes. ☆257 · Updated this week
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆177 · Updated 5 months ago
- ☆118 · Updated 8 months ago
- Code for NeurIPS'24 paper 'Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization' ☆229 · Updated last month
- Open source replication of Anthropic's Crosscoders for Model Diffing ☆58 · Updated 9 months ago
- Public repository for "The Surprising Effectiveness of Test-Time Training for Abstract Reasoning" ☆324 · Updated 9 months ago
- ☆195 · Updated 5 months ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆173 · Updated 7 months ago
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆212 · Updated 8 months ago
- Persona Vectors: Monitoring and Controlling Character Traits in Language Models ☆188 · Updated 3 weeks ago
- Repository for the paper Stream of Search: Learning to Search in Language ☆150 · Updated 6 months ago
- ☆98 · Updated 4 months ago
- ☆117 · Updated 2 weeks ago
- ⚖️ Awesome LLM Judges ⚖️ ☆122 · Updated 3 months ago
- ☆187 · Updated 4 months ago
- Steering vectors for transformer language models in Pytorch / Huggingface ☆122 · Updated 6 months ago
- ☆98 · Updated 2 weeks ago
- ☆130 · Updated 5 months ago
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆191 · Updated last year
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆124 · Updated 2 months ago
- ☆53 · Updated 9 months ago
- open source interpretability platform 🧠 ☆356 · Updated this week
- ☆120 · Updated 6 months ago
- Functional Benchmarks and the Reasoning Gap ☆88 · Updated 10 months ago
- [ACL 2024] Do Large Language Models Latently Perform Multi-Hop Reasoning? ☆73 · Updated 5 months ago
- Replicating O1 inference-time scaling laws ☆89 · Updated 8 months ago