PaulPauls / llama3_interpretability_sae

A complete end-to-end pipeline for LLM interpretability with sparse autoencoders (SAEs) using Llama 3.2, written in pure PyTorch and fully reproducible.
628 stars · Mar 23, 2025 · Updated 10 months ago
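For orientation, the core object such a pipeline trains is a sparse autoencoder over model activations: an overcomplete ReLU encoder plus a linear decoder, optimized for reconstruction with an L1 sparsity penalty. The sketch below is illustrative only; the layer sizes, the `l1_coeff` value, and the helper names are assumptions, not the repository's actual implementation.

```python
import torch
import torch.nn as nn


class SparseAutoencoder(nn.Module):
    """Minimal SAE: overcomplete ReLU encoder, linear decoder."""

    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, x: torch.Tensor):
        f = torch.relu(self.encoder(x))  # sparse feature activations
        x_hat = self.decoder(f)          # reconstruction of the input
        return x_hat, f


def sae_loss(x, x_hat, f, l1_coeff: float = 1e-3):
    # Reconstruction MSE plus an L1 penalty that encourages sparse features.
    recon = ((x - x_hat) ** 2).mean()
    sparsity = f.abs().sum(dim=-1).mean()
    return recon + l1_coeff * sparsity


sae = SparseAutoencoder(d_model=64, d_hidden=256)  # hypothetical sizes
x = torch.randn(8, 64)                             # batch of activations
x_hat, f = sae(x)
loss = sae_loss(x, x_hat, f)
```

In practice the encoder would be trained on residual-stream activations captured from the language model, with the learned features then inspected for interpretable directions.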
