apartresearch / readingwhatwecan
📚📚📚📚📚📚📚📚📚 Reading everything
☆15 · Updated 3 weeks ago
Alternatives and similar repositories for readingwhatwecan
Users interested in readingwhatwecan are comparing it to the repositories listed below.
- Keeping language models honest by directly eliciting knowledge encoded in their activations. ☆208 · Updated this week
- Mechanistic Interpretability for Transformer Models ☆53 · Updated 3 years ago
- ☆29 · Updated last year
- ☆19 · Updated 2 years ago
- ☆64 · Updated 2 years ago
- ☆104 · Updated 2 weeks ago
- ☆138 · Updated 2 months ago
- ☆276 · Updated last year
- ☆21 · Updated 4 years ago
- Utilities for the HuggingFace transformers library ☆72 · Updated 2 years ago
- ☆83 · Updated last year
- Erasing concepts from neural representations with provable guarantees ☆236 · Updated 8 months ago
- A dataset of alignment research and code to reproduce it ☆77 · Updated 2 years ago
- PyTorch and NNsight implementation of AtP* (Kramár et al., 2024, DeepMind) ☆19 · Updated 8 months ago
- Emergent world representations: Exploring a sequence model trained on a synthetic task ☆191 · Updated 2 years ago
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper ☆129 · Updated 3 years ago
- ☆303 · Updated last year
- 🧠 Starter templates for doing interpretability research ☆74 · Updated 2 years ago
- A library for bridging Python and HTML/Javascript (via Svelte) for creating interactive visualizations ☆199 · Updated 3 years ago
- The Happy Faces Benchmark ☆15 · Updated 2 years ago
- Tools for understanding how transformer predictions are built layer-by-layer ☆530 · Updated 2 months ago
- METR Task Standard ☆162 · Updated 8 months ago
- Datasets from the paper "Towards Understanding Sycophancy in Language Models" ☆93 · Updated last year
- ☆102 · Updated 5 months ago
- See the issue board for the current status of active and prospective projects! ☆65 · Updated 3 years ago
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆221 · Updated 9 months ago
- Tools for studying developmental interpretability in neural networks. ☆105 · Updated 3 months ago
- ☆19 · Updated last year
- Mechanistic Interpretability Visualizations using React ☆291 · Updated 9 months ago
- A collection of different ways to implement accessing and modifying internal model activations for LLMs ☆19 · Updated 11 months ago