rmovva / HypotheSAEs
Hypothesizing interpretable relationships in text datasets using sparse autoencoders.
☆39 · Updated this week
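To make the one-line description above concrete: a sparse autoencoder learns an overcomplete set of features over text embeddings, with a sparsity penalty so that only a few features activate per input. The sketch below is purely illustrative and assumes nothing about the HypotheSAEs API; all names, dimensions, and the L1 penalty weight are assumptions.

```python
import numpy as np

# Minimal sparse-autoencoder sketch (illustrative only; not the
# HypotheSAEs implementation). Given embeddings X of shape (n, d),
# an overcomplete dictionary of m > d features is learned with an
# L1 penalty that pushes most activations to zero.

rng = np.random.default_rng(0)
d, m = 16, 64                        # embedding dim, number of SAE features

W_enc = rng.normal(0, 0.1, (d, m))   # encoder weights
b_enc = np.zeros(m)
W_dec = rng.normal(0, 0.1, (m, d))   # decoder weights
b_dec = np.zeros(d)

def encode(x):
    """ReLU activations: most entries are zero, giving a sparse code."""
    return np.maximum(0.0, x @ W_enc + b_enc)

def decode(z):
    """Reconstruct the embedding from the sparse feature code."""
    return z @ W_dec + b_dec

def loss(x, l1=1e-3):
    """Reconstruction error plus an L1 term encouraging sparsity."""
    z = encode(x)
    recon = decode(z)
    return np.mean((x - recon) ** 2) + l1 * np.mean(np.abs(z))

X = rng.normal(0, 1, (8, d))         # stand-in for text embeddings
print(loss(X))
```

In a library like this, the learned features (columns of `W_dec` here) would then be inspected or labeled to surface interpretable relationships in the dataset; training the weights (e.g. by gradient descent on `loss`) is omitted for brevity.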
Alternatives and similar repositories for HypotheSAEs
Users interested in HypotheSAEs are comparing it to the libraries listed below.
- ☆104 · Updated 6 months ago
- Discovering Data-driven Hypotheses in the Wild ☆104 · Updated 2 months ago
- Official implementation of "BERTs are Generative In-Context Learners" ☆32 · Updated 4 months ago
- Understanding how features learned by neural networks evolve throughout training ☆36 · Updated 9 months ago
- A lightweight library for Bayesian analysis of LLM evals (ICML 2025 Spotlight Position Paper) ☆19 · Updated 2 months ago
- PAIR.withgoogle.com and friends' work on interpretability methods ☆195 · Updated 3 weeks ago
- Dataset and evaluation suite enabling LLM instruction-following for scientific literature understanding. ☆40 · Updated 4 months ago
- Code for "Counterfactual Token Generation in Large Language Models", arXiv 2024. ☆28 · Updated 9 months ago
- ☆35 · Updated 2 years ago
- ☆245 · Updated 4 months ago
- Evaluate uncertainty, calibration, accuracy, and fairness of LLMs on real-world survey data! ☆24 · Updated 4 months ago
- CausalGym: Benchmarking causal interpretability methods on linguistic tasks ☆46 · Updated 8 months ago
- A collection of various LLM sampling methods implemented in pure PyTorch ☆23 · Updated 8 months ago
- ☆22 · Updated last month
- The Foundation Model Transparency Index ☆82 · Updated last year
- Official implementation of the ACL 2024 paper: Scientific Inspiration Machines Optimized for Novelty ☆84 · Updated last year
- State-of-the-art paired encoder and decoder models (17M–1B params) ☆38 · Updated last week
- Sparse Autoencoder Training Library ☆54 · Updated 3 months ago
- An attribution library for LLMs ☆42 · Updated 10 months ago
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆202 · Updated last week
- ☆64 · Updated last week
- ☆50 · Updated 2 months ago
- ☆28 · Updated 5 months ago
- CiteME is a benchmark designed to test the abilities of language models in finding papers that are cited in scientific texts. ☆48 · Updated 9 months ago
- ☆26 · Updated 2 years ago
- Evaluate interpretability methods on localizing and disentangling concepts in LLMs. ☆52 · Updated 10 months ago
- Documenting large text datasets 🖼️ 📚 ☆12 · Updated 7 months ago
- Sparse and discrete interpretability tool for neural networks ☆63 · Updated last year
- PyTorch library for Active Fine-Tuning ☆88 · Updated 5 months ago
- SDLG is an efficient method for accurately estimating aleatoric semantic uncertainty in LLMs ☆26 · Updated last year