goodfire-ai / r1-interpretability
Open-source interpretability artifacts for R1.
☆163 · Updated 7 months ago
Alternatives and similar repositories for r1-interpretability
Users interested in r1-interpretability are comparing it to the libraries listed below.
- A toolkit for describing model features and intervening on those features to steer behavior. ☆216 · Updated last year
- ☆144 · Updated 2 months ago
- A Collection of Competitive Text-Based Games for Language Model Evaluation and Reinforcement Learning ☆319 · Updated last month
- ☆119 · Updated last month
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆228 · Updated this week
- Open source replication of Anthropic's Crosscoders for Model Diffing ☆62 · Updated last year
- Code for NeurIPS'24 paper 'Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization' ☆234 · Updated 4 months ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆189 · Updated 8 months ago
- Training-Ready RL Environments + Evals ☆182 · Updated this week
- Steering vectors for transformer language models in Pytorch / Huggingface ☆130 · Updated 9 months ago
- Repository for the paper Stream of Search: Learning to Search in Language ☆151 · Updated 10 months ago
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆231 · Updated 11 months ago
- ☆106 · Updated last month
- ☆229 · Updated this week
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆173 · Updated 10 months ago
- ☆199 · Updated 7 months ago
- Public repository for "The Surprising Effectiveness of Test-Time Training for Abstract Reasoning" ☆340 · Updated 3 weeks ago
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆150 · Updated 5 months ago
- [ACL 2024] Do Large Language Models Latently Perform Multi-Hop Reasoning? ☆84 · Updated 8 months ago
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆194 · Updated last year
- ☆124 · Updated 9 months ago
- ⚓️ Repository for the "Thought Anchors: Which LLM Reasoning Steps Matter?" paper. ☆92 · Updated last month
- ☆58 · Updated last year
- Applying SAEs for fine-grained control ☆24 · Updated 11 months ago
- rl from zero pretrain, can it be done? yes. ☆281 · Updated 2 months ago
- ☆189 · Updated last year
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆84 · Updated last year
- ☆136 · Updated 8 months ago
- ☆37 · Updated 9 months ago
- open source interpretability platform 🧠 ☆509 · Updated last week