yash-srivastava19 / arrakis
Arrakis is a library to conduct, track and visualize mechanistic interpretability experiments.
☆31 · Updated 3 months ago
Alternatives and similar repositories for arrakis
Users interested in arrakis are comparing it to the libraries listed below.
- Attribution-based Parameter Decomposition ☆28 · Updated last month
- Open source replication of Anthropic's Crosscoders for Model Diffing ☆57 · Updated 9 months ago
- ☆136 · Updated 4 months ago
- Engine for collecting, uploading, and downloading model activations ☆20 · Updated 4 months ago
- Sparse Autoencoder Training Library ☆54 · Updated 3 months ago
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆207 · Updated 7 months ago
- ☆81 · Updated 5 months ago
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆202 · Updated this week
- ☆32 · Updated last year
- Open source interpretability artefacts for R1. ☆157 · Updated 3 months ago
- code for training & evaluating Contextual Document Embedding models ☆196 · Updated 2 months ago
- we got you bro ☆36 · Updated last year
- ☆124 · Updated last year
- ☆64 · Updated this week
- Mechanistic Interpretability Visualizations using React ☆272 · Updated 7 months ago
- Extract full next-token probabilities via language model APIs ☆247 · Updated last year
- Steering vectors for transformer language models in Pytorch / Huggingface ☆120 · Updated 5 months ago
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆112 · Updated last month
- 🧠 Starter templates for doing interpretability research ☆73 · Updated 2 years ago
- Erasing concepts from neural representations with provable guarantees ☆232 · Updated 6 months ago
- ☆51 · Updated 8 months ago
- Investigating the generalization behavior of LM probes trained to predict truth labels: (1) from one annotator to another, and (2) from e… ☆28 · Updated last year
- An introduction to LLM Sampling ☆79 · Updated 7 months ago
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆72 · Updated last year
- Experiments with representation engineering ☆12 · Updated last year
- PyTorch library for Active Fine-Tuning ☆87 · Updated 5 months ago
- ☆104 · Updated 5 months ago
- A toolkit for describing model features and intervening on those features to steer behavior. ☆195 · Updated 8 months ago
- A library for efficient patching and automatic circuit discovery. ☆73 · Updated 2 weeks ago
- ☆28 · Updated 10 months ago