stanfordnlp / pyvene
Stanford NLP Python library for understanding and improving PyTorch models via interventions
☆793 · Updated 2 weeks ago
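As a rough illustration of the kind of activation intervention pyvene is built around, here is a minimal sketch using plain PyTorch forward hooks rather than pyvene's own API; the model ("gpt2"), the layer index, and the random steering vector are assumptions chosen only for the example.

```python
# Generic activation-intervention sketch with a PyTorch forward hook.
# This is NOT pyvene's API; it only illustrates the underlying idea of
# editing an internal activation during a forward pass.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")
tok = AutoTokenizer.from_pretrained("gpt2")
model.eval()

# Hypothetical steering vector; in practice this would come from analysis.
steer = torch.randn(model.config.n_embd) * 0.1

def add_steering(module, inputs, output):
    # Shift the MLP output of one transformer block by a fixed vector.
    return output + steer

handle = model.transformer.h[6].mlp.register_forward_hook(add_steering)
try:
    ids = tok("The capital of France is", return_tensors="pt")
    with torch.no_grad():
        out = model.generate(**ids, max_new_tokens=5)
    print(tok.decode(out[0], skip_special_tokens=True))
finally:
    handle.remove()  # detach the hook to restore the unmodified model
```

Libraries like pyvene wrap this pattern in a declarative configuration so interventions can be defined, composed, and trained without hand-written hooks.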
Alternatives and similar repositories for pyvene
Users interested in pyvene are comparing it to the libraries listed below:
- Tools for understanding how transformer predictions are built layer-by-layer ☆514 · Updated 2 weeks ago
- Training Sparse Autoencoders on Language Models ☆919 · Updated this week
- Sparsify transformers with SAEs and transcoders ☆608 · Updated this week
- The nnsight package enables interpreting and manipulating the internals of deep learned models. ☆630 · Updated this week
- This repository collects all relevant resources about interpretability in LLMs ☆368 · Updated 9 months ago
- Mechanistic Interpretability Visualizations using React ☆280 · Updated 8 months ago
- ☆508 · Updated last year
- Using sparse coding to find distributed representations used by neural networks. ☆262 · Updated last year
- Representation Engineering: A Top-Down Approach to AI Transparency ☆865 · Updated last year
- Sparse Autoencoder for Mechanistic Interpretability ☆258 · Updated last year
- ☆327 · Updated this week
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆211 · Updated 8 months ago
- ☆237 · Updated 10 months ago
- ☆162 · Updated 9 months ago
- Locating and editing factual associations in GPT (NeurIPS 2022) ☆657 · Updated last year
- Interpretability for sequence generation models 🐛 🔍 ☆433 · Updated 3 months ago
- Mass-editing thousands of facts into a transformer memory (ICLR 2023) ☆510 · Updated last year
- ☆223 · Updated last year
- ☆276 · Updated last year
- ☆185 · Updated 9 months ago
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆206 · Updated this week
- ☆111 · Updated last month
- Inference-Time Intervention: Eliciting Truthful Answers from a Language Model ☆542 · Updated 6 months ago
- Resources for skilling up in AI alignment research engineering. Covers basics of deep learning, mechanistic interpretability, and RL. ☆221 · Updated last week
- Steering Llama 2 with Contrastive Activation Addition ☆174 · Updated last year
- Code and results accompanying the paper "Refusal in Language Models Is Mediated by a Single Direction". ☆257 · Updated 2 months ago
- A library for mechanistic interpretability of GPT-style language models ☆2,479 · Updated last week
- ☆331 · Updated this week
- ☆184 · Updated last month
- ☆122 · Updated last year