PaulPauls / llama3_interpretability_sae

A complete end-to-end pipeline for LLM interpretability with sparse autoencoders (SAEs) using Llama 3.2, written in pure PyTorch and fully reproducible.
605 stars · Updated last month
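The repository's exact implementation isn't shown here, but the core technique it names — a sparse autoencoder (SAE) trained on LLM activations — is standard: an overcomplete ReLU encoder plus a linear decoder, trained with reconstruction loss and an L1 sparsity penalty. A minimal PyTorch sketch (all dimensions and the `l1_coeff` value are illustrative assumptions, not the repo's settings):

```python
import torch
import torch.nn as nn


class SparseAutoencoder(nn.Module):
    """Minimal SAE: overcomplete ReLU encoder, linear decoder."""

    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, x):
        f = torch.relu(self.encoder(x))  # sparse feature activations
        x_hat = self.decoder(f)          # reconstruction of the input
        return x_hat, f


def sae_loss(x, x_hat, f, l1_coeff=1e-3):
    """Reconstruction MSE plus L1 penalty encouraging sparse features."""
    recon = ((x - x_hat) ** 2).mean()
    sparsity = f.abs().sum(dim=-1).mean()
    return recon + l1_coeff * sparsity


# Toy usage: random tensors stand in for captured LLM activations.
sae = SparseAutoencoder(d_model=16, d_hidden=64)
x = torch.randn(8, 16)
x_hat, f = sae(x)
loss = sae_loss(x, x_hat, f)
```

In a real pipeline the inputs would be residual-stream or MLP activations captured from Llama 3.2 during forward passes, with `d_hidden` several times larger than `d_model` so features can specialize.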

Alternatives and similar repositories for llama3_interpretability_sae:

Users interested in llama3_interpretability_sae are comparing it to the libraries listed below.