A complete end-to-end pipeline for LLM interpretability with sparse autoencoders (SAEs) using Llama 3.2, written in pure PyTorch and fully reproducible.
☆630 · Updated Mar 23, 2025
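To make the description above concrete, here is a minimal sketch of the core component such a pipeline trains: a sparse autoencoder that reconstructs a model's residual-stream activations through an overcomplete ReLU bottleneck with an L1 sparsity penalty. This is an illustrative sketch, not the repository's actual implementation; the class name, dimensions, and sparsity coefficient are assumptions.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Hypothetical minimal SAE: encode activations into an overcomplete
    sparse feature space, then linearly decode back to the original space."""

    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, x: torch.Tensor):
        f = torch.relu(self.encoder(x))   # sparse feature activations
        x_hat = self.decoder(f)           # reconstruction of the input
        return x_hat, f

# Toy usage: 8 activation vectors of width 16, 4x overcomplete dictionary
sae = SparseAutoencoder(d_model=16, d_hidden=64)
x = torch.randn(8, 16)
x_hat, f = sae(x)

# Training objective: reconstruction MSE plus L1 penalty on features
# (the 1e-3 coefficient is an arbitrary placeholder, not a tuned value)
loss = (x_hat - x).pow(2).mean() + 1e-3 * f.abs().sum(dim=-1).mean()
```

In practice such pipelines collect activations from a frozen LLM (here, Llama 3.2), train the SAE on them, and then inspect the learned features for interpretable directions.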
Alternatives and similar repositories for llama3_interpretability_sae
Users interested in llama3_interpretability_sae are comparing it to the repositories listed below.
- Minimal LLM inference in Rust — ☆1,032 · Updated Oct 24, 2024
- Felafax is building AI infra for non-NVIDIA GPUs — ☆569 · Updated Jan 24, 2025
- OpenCV + YOLO + LLaVA powered video surveillance system — ☆787 · Updated Oct 21, 2025
- pingcap/autoflow is a Graph RAG-based conversational knowledge base tool built with TiDB Serverless Vector Storage. Demo: https://tid… — ☆2,742 · Updated Jan 9, 2026
- Things you can do with the token embeddings of an LLM — ☆1,453 · Updated Dec 1, 2025