stanfordnlp/pyvene
Stanford NLP Python library for understanding and improving PyTorch models via interventions
☆739 · Updated this week
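The intervention idea behind pyvene (reading and editing a model's internal activations during a forward pass) can be sketched with plain PyTorch forward hooks. This is a generic illustration only, not pyvene's actual API; the toy model, hook function, and the choice to zero out one hidden unit are all assumptions made for the example.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy model standing in for a real network (assumption for illustration).
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

def zero_first_unit(module, inputs, output):
    # Intervention: replace the layer's output with an edited copy
    # in which hidden unit 0 is zeroed out.
    patched = output.clone()
    patched[:, 0] = 0.0
    return patched

# Attach the intervention to the first layer, run, then detach.
handle = model[0].register_forward_hook(zero_first_unit)
x = torch.randn(3, 4)
intervened = model(x)
handle.remove()

# Without the hook, the same input follows the unmodified computation.
baseline = model(x)
```

Comparing `intervened` against `baseline` shows how a targeted edit to one internal representation propagates to the model's output; libraries like pyvene and nnsight wrap this pattern in higher-level, configurable interfaces.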
Alternatives and similar repositories for pyvene, as compared by users interested in the library:
- Training Sparse Autoencoders on Language Models ☆745 · Updated this week
- Sparsify transformers with SAEs and transcoders ☆524 · Updated this week
- Tools for understanding how transformer predictions are built layer-by-layer ☆489 · Updated 11 months ago
- This repository collects all relevant resources about interpretability in LLMs ☆343 · Updated 6 months ago
- Sparse Autoencoder for Mechanistic Interpretability ☆243 · Updated 9 months ago
- Using sparse coding to find distributed representations used by neural networks ☆240 · Updated last year
- Mechanistic Interpretability Visualizations using React ☆242 · Updated 4 months ago
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research) ☆199 · Updated 4 months ago
- Representation Engineering: A Top-Down Approach to AI Transparency ☆827 · Updated 8 months ago
- The nnsight package enables interpreting and manipulating the internals of deep learning models ☆555 · Updated this week
- Interpretability for sequence generation models 🐛 🔍 ☆413 · Updated last week
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs) ☆837 · Updated this week
- Stanford NLP Python library for Representation Finetuning (ReFT) ☆1,464 · Updated 2 months ago
- A library for mechanistic interpretability of GPT-style language models ☆2,116 · Updated this week
- Inference-Time Intervention: Eliciting Truthful Answers from a Language Model ☆521 · Updated 3 months ago
- Locating and editing factual associations in GPT (NeurIPS 2022) ☆629 · Updated last year
- A curated list of LLM interpretability material: tutorials, libraries, surveys, papers, blogs, etc. ☆233 · Updated last month
- RewardBench: the first evaluation tool for reward models ☆562 · Updated 2 months ago
- Mass-editing thousands of facts into a transformer memory (ICLR 2023) ☆485 · Updated last year
- A curated list of Large Language Model (LLM) Interpretability resources ☆1,311 · Updated 4 months ago
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆1,482 · Updated this week