PaulPauls / llama3_interpretability_sae
A complete end-to-end pipeline for LLM interpretability with sparse autoencoders (SAEs) using Llama 3.2, written in pure PyTorch and fully reproducible.
☆622 · Updated 5 months ago
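For context, SAE-based interpretability pipelines like this one typically capture residual-stream activations from the LLM and train an overcomplete autoencoder with a sparsity penalty on them, so that individual hidden units align with interpretable features. The sketch below illustrates that general idea in plain PyTorch; the class name, dimensions, and loss coefficient are illustrative assumptions, not the repository's actual code.

```python
# Minimal sparse autoencoder sketch (illustrative assumptions only,
# not the repository's implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseAutoencoder(nn.Module):
    """Overcomplete autoencoder with an L1 sparsity penalty on the hidden code."""
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, x: torch.Tensor):
        # Encode captured activations into a non-negative, (hopefully) sparse code.
        z = F.relu(self.encoder(x))
        # Reconstruct the original activations from that code.
        x_hat = self.decoder(z)
        return x_hat, z

def sae_loss(x, x_hat, z, l1_coeff: float = 1e-3):
    # Reconstruction error plus an L1 term that pushes the code toward sparsity.
    return F.mse_loss(x_hat, x) + l1_coeff * z.abs().mean()

if __name__ == "__main__":
    sae = SparseAutoencoder(d_model=2048, d_hidden=8192)
    opt = torch.optim.Adam(sae.parameters(), lr=1e-4)
    for _ in range(10):
        # Random tensors stand in for residual-stream activations captured from the model.
        acts = torch.randn(64, 2048)
        x_hat, z = sae(acts)
        loss = sae_loss(acts, x_hat, z)
        opt.zero_grad()
        loss.backward()
        opt.step()
```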
Alternatives and similar repositories for llama3_interpretability_sae
Users interested in llama3_interpretability_sae are comparing it to the libraries listed below.
- LLM Analytics ☆677 · Updated 10 months ago
- Visualize the intermediate output of Mistral 7B ☆368 · Updated 7 months ago
- A library for making RepE control vectors ☆634 · Updated 8 months ago
- Bayesian Optimization as a Coverage Tool for Evaluating LLMs. Accurate evaluation (benchmarking) that's 10 times faster with just a few l… ☆285 · Updated last month
- Felafax is building AI infra for non-NVIDIA GPUs ☆566 · Updated 7 months ago
- Official codebase for the paper "Beyond A*: Better Planning with Transformers via Search Dynamics Bootstrapping".