PaulPauls / llama3_interpretability_sae
A complete end-to-end pipeline for LLM interpretability with sparse autoencoders (SAEs) using Llama 3.2, written in pure PyTorch and fully reproducible.
☆625 · Updated 6 months ago
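The repository description above centers on sparse autoencoders (SAEs) trained in PyTorch on LLM activations. As a rough illustration of that technique (not code from this repository; the layer sizes and L1 coefficient are assumptions), a minimal SAE with a reconstruction-plus-sparsity loss might look like this:

```python
# Minimal sketch of a sparse autoencoder (SAE) for LLM interpretability.
# Illustrative only: layer sizes and the L1 sparsity coefficient are assumptions,
# not values taken from llama3_interpretability_sae.
import torch
import torch.nn as nn


class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int = 2048, d_hidden: int = 16384):
        super().__init__()
        # Encoder maps residual-stream activations into an overcomplete,
        # sparsely activated feature space; the decoder reconstructs them.
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, x: torch.Tensor):
        features = torch.relu(self.encoder(x))  # sparse feature activations
        reconstruction = self.decoder(features)
        return reconstruction, features


def sae_loss(x, reconstruction, features, l1_coeff: float = 1e-3):
    # Reconstruction error plus an L1 penalty that encourages sparse features.
    mse = torch.mean((reconstruction - x) ** 2)
    sparsity = l1_coeff * features.abs().mean()
    return mse + sparsity


if __name__ == "__main__":
    # Stand-in for activations captured from a Llama 3.2 layer.
    sae = SparseAutoencoder()
    acts = torch.randn(8, 2048)
    recon, feats = sae(acts)
    loss = sae_loss(acts, recon, feats)
    loss.backward()
```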
Alternatives and similar repositories for llama3_interpretability_sae
Users interested in llama3_interpretability_sae are comparing it to the libraries listed below.
- LLM Analytics ☆685 · Updated 11 months ago
- Bayesian Optimization as a Coverage Tool for Evaluating LLMs. Accurate evaluation (benchmarking) that's 10 times faster with just a few l… ☆285 · Updated 2 weeks ago
- Visualize the intermediate output of Mistral 7B ☆371 · Updated 8 months ago
- A library for making RepE control vectors ☆643 · Updated last week
- Felafax is building AI infra for non-NVIDIA GPUs ☆567 · Updated 8 months ago
- Official codebase for the paper "Beyond A*: Better Planning with Transformers via Search Dynamics Bootstrapping".