yash-srivastava19 / arrakis
Arrakis is a library to conduct, track and visualize mechanistic interpretability experiments.
☆31 · Updated 8 months ago
Alternatives and similar repositories for arrakis
Users interested in arrakis are comparing it to the libraries listed below:
- Mechanistic Interpretability Visualizations using React ☆303 · Updated last year
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆232 · Updated last year
- Open source replication of Anthropic's Crosscoders for Model Diffing ☆63 · Updated last year
- ☆83 · Updated 9 months ago
- Engine for collecting, uploading, and downloading model activations ☆24 · Updated 8 months ago
- Attribution-based Parameter Decomposition ☆33 · Updated 6 months ago
- ☆144 · Updated 3 months ago
- Designed for interp researchers who want to do research on or with interp agents, offering quality-of-life improvements and fix … ☆48 · Updated this week
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆234 · Updated last week
- 🧠 Starter templates for doing interpretability research ☆74 · Updated 2 years ago
- Extract full next-token probabilities via language model APIs ☆248 · Updated last year
- Steering vectors for transformer language models in Pytorch / Huggingface ☆134 · Updated 10 months ago
- Course Materials for Interpretability of Large Language Models (0368.4264) at Tel Aviv University ☆227 · Updated 3 weeks ago
- ☆132 · Updated 2 years ago
- Unified access to Large Language Model modules using NNsight ☆70 · Updated last month
- A tiny, easily hackable implementation of a feature dashboard. ☆15 · Updated 2 months ago
- Experiments with representation engineering ☆13 · Updated last year
- Utilities for the HuggingFace transformers library ☆72 · Updated 2 years ago
- Sparse Autoencoder Training Library ☆56 · Updated 7 months ago
- A toolkit that provides a range of model diffing techniques, including a UI to visualize them interactively. ☆43 · Updated this week
- ☆29 · Updated last year
- Open source interpretability artefacts for R1. ☆165 · Updated 8 months ago
- we got you bro ☆36 · Updated last year
- Applying SAEs for fine-grained control ☆25 · Updated last year
- ☆112 · Updated 10 months ago
- ☆36 · Updated last year
- An introduction to LLM Sampling ☆79 · Updated last year
- ☆58 · Updated last year
- ☆29 · Updated last year
- A collection of different ways to implement accessing and modifying internal model activations for LLMs ☆19 · Updated last year