A complete end-to-end pipeline for LLM interpretability with sparse autoencoders (SAEs) using Llama 3.2, written in pure PyTorch and fully reproducible.
☆634 · Updated Mar 23, 2025
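The repository above trains sparse autoencoders on Llama 3.2 activations in pure PyTorch. As a rough orientation for what such a pipeline centers on, here is a minimal SAE sketch: an overcomplete ReLU dictionary trained with reconstruction loss plus an L1 sparsity penalty. The module name, dimensions, and `l1_coeff` value are illustrative assumptions, not the repository's actual API.

```python
import torch
import torch.nn as nn


class SparseAutoencoder(nn.Module):
    """Minimal SAE sketch: overcomplete dictionary with an L1 sparsity penalty.
    Hypothetical example; not the API of llama3_interpretability_sae."""

    def __init__(self, d_model: int, d_hidden: int, l1_coeff: float = 1e-3):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)  # d_hidden > d_model (overcomplete)
        self.decoder = nn.Linear(d_hidden, d_model)
        self.l1_coeff = l1_coeff

    def forward(self, x: torch.Tensor):
        # ReLU keeps feature activations non-negative and encourages sparsity.
        f = torch.relu(self.encoder(x))
        x_hat = self.decoder(f)
        return x_hat, f

    def loss(self, x: torch.Tensor) -> torch.Tensor:
        x_hat, f = self(x)
        recon = (x_hat - x).pow(2).mean()                      # reconstruction error
        sparsity = self.l1_coeff * f.abs().sum(dim=-1).mean()  # L1 penalty on features
        return recon + sparsity


sae = SparseAutoencoder(d_model=16, d_hidden=64)
x = torch.randn(8, 16)   # stand-in for residual-stream activations from the LLM
sae.loss(x).backward()   # gradients flow; plug into any optimizer
```

In a full pipeline, `x` would be activations captured from a hooked transformer layer, and the learned decoder columns would be interpreted as candidate features.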
Alternatives and similar repositories for llama3_interpretability_sae
Users who are interested in llama3_interpretability_sae are comparing it to the libraries listed below.
- Minimal LLM inference in Rust — ☆1,035 · Updated Oct 24, 2024
- Sparsify transformers with SAEs and transcoders — ☆714 · Updated Apr 27, 2026
- pingcap/autoflow: a Graph RAG-based conversational knowledge base tool built with TiDB Serverless Vector Storage. Demo: https://tid… — ☆2,776 · Updated Apr 27, 2026
- Training Sparse Autoencoders on Language Models — ☆1,361 · Updated May 1, 2026
- OpenCV + YOLO + LLaVA powered video surveillance system — ☆792 · Updated Oct 21, 2025
- Proof of Thought: LLM-based reasoning using Z3 theorem proving with multiple backend support (SMT2 and JSON DSL) — ☆371 · Updated Apr 2, 2026
- Felafax is building AI infra for non-NVIDIA GPUs