goodfire-ai / r1-interpretability
Open source interpretability artifacts for R1.
☆163 · Updated 6 months ago
Alternatives and similar repositories for r1-interpretability
Users interested in r1-interpretability are comparing it to the libraries listed below.
- ☆143 · Updated 2 months ago
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆222 · Updated last week
- Code for NeurIPS'24 paper 'Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization' ☆233 · Updated 3 months ago
- A toolkit for describing model features and intervening on those features to steer behavior. ☆212 · Updated last year
- A Collection of Competitive Text-Based Games for Language Model Evaluation and Reinforcement Learning ☆304 · Updated 2 weeks ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆189 · Updated 8 months ago
- Open source replication of Anthropic's Crosscoders for Model Diffing ☆59 · Updated last year
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆226 · Updated 10 months ago
- ☆117 · Updated 3 weeks ago
- ☆125 · Updated 10 months ago
- Training-Ready RL Environments + Evals ☆164 · Updated last week
- rl from zero pretrain, can it be done? yes. ☆280 · Updated last month
- Steering vectors for transformer language models in Pytorch / Huggingface ☆128 · Updated 8 months ago
- Public repository for "The Surprising Effectiveness of Test-Time Training for Abstract Reasoning" ☆336 · Updated 11 months ago
- ☆226 · Updated 2 weeks ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆173 · Updated 9 months ago
- ☆56 · Updated 11 months ago
- ☆106 · Updated 3 weeks ago
- Repository for the paper Stream of Search: Learning to Search in Language ☆151 · Updated 9 months ago
- [ACL 2024] Do Large Language Models Latently Perform Multi-Hop Reasoning? ☆82 · Updated 7 months ago
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al (NeurIPS 2024) ☆193 · Updated last year
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆83 · Updated 11 months ago
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆140 · Updated 4 months ago
- Persona Vectors: Monitoring and Controlling Character Traits in Language Models ☆281 · Updated 3 months ago
- ☆135 · Updated 7 months ago
- ☆124 · Updated 8 months ago
- ☆197 · Updated 6 months ago
- ⚓️ Repository for the "Thought Anchors: Which LLM Reasoning Steps Matter?" paper. ☆89 · Updated 2 weeks ago
- ⚖️ Awesome LLM Judges ⚖️ ☆133 · Updated 6 months ago
- Functional Benchmarks and the Reasoning Gap ☆89 · Updated last year