yash-srivastava19 / arrakis
Arrakis is a library to conduct, track and visualize mechanistic interpretability experiments.
☆31 · Updated 6 months ago
Alternatives and similar repositories for arrakis
Users interested in arrakis are comparing it to the libraries listed below.
- Attribution-based Parameter Decomposition · ☆31 · Updated 4 months ago
- 🧠 Starter templates for doing interpretability research · ☆74 · Updated 2 years ago
- Engine for collecting, uploading, and downloading model activations · ☆24 · Updated 7 months ago
- ☆142 · Updated 2 months ago
- Open source replication of Anthropic's Crosscoders for Model Diffing · ☆59 · Updated last year
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research) · ☆224 · Updated 10 months ago
- ☆81 · Updated 8 months ago
- Experiments with representation engineering · ☆13 · Updated last year
- Steering vectors for transformer language models in Pytorch / Huggingface · ☆127 · Updated 8 months ago
- A tiny, easily hackable implementation of a feature dashboard · ☆15 · Updated 2 weeks ago
- Open source interpretability artefacts for R1 · ☆163 · Updated 6 months ago
- ☆131 · Updated 2 years ago
- Mechanistic Interpretability Visualizations using React · ☆297 · Updated 10 months ago
- we got you bro · ☆36 · Updated last year
- Sparse Autoencoder Training Library · ☆55 · Updated 6 months ago
- ☆29 · Updated last year
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" · ☆71 · Updated last year
- ☆36 · Updated last year
- CausalGym: Benchmarking causal interpretability methods on linguistic tasks · ☆48 · Updated 11 months ago
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … · ☆221 · Updated last week
- Unified access to Large Language Model modules using NNsight · ☆55 · Updated this week
- Extract full next-token probabilities via language model APIs · ☆247 · Updated last year
- Erasing concepts from neural representations with provable guarantees · ☆239 · Updated 9 months ago
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper · ☆129 · Updated 3 years ago
- ☆111 · Updated 8 months ago
- Code for training & evaluating Contextual Document Embedding models · ☆199 · Updated 5 months ago
- ☆23 · Updated 4 months ago
- An introduction to LLM Sampling · ☆79 · Updated 10 months ago
- Code and data repo for the CoNLL paper "Future Lens: Anticipating Subsequent Tokens from a Single Hidden State" · ☆20 · Updated 2 weeks ago
- ☆29 · Updated last year