PaulPauls / llama3_interpretability_sae
A complete end-to-end pipeline for LLM interpretability with sparse autoencoders (SAEs) using Llama 3.2, written in pure PyTorch and fully reproducible.
☆628 · Updated 9 months ago
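For orientation, here is a minimal sketch of the kind of sparse autoencoder such a pipeline trains on captured Llama 3.2 activations. The class, function, and parameter names (`SparseAutoencoder`, `sae_loss`, `d_model`, `d_hidden`, `l1_coeff`) are illustrative assumptions, not the repository's actual API:

```python
# Hypothetical SAE sketch: reconstruct LLM activations through an overcomplete,
# L1-penalized feature layer. Not the repository's actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, x: torch.Tensor):
        # Encode activations into a non-negative, overcomplete feature basis.
        features = F.relu(self.encoder(x))
        # Reconstruct the original activations from the sparse features.
        recon = self.decoder(features)
        return recon, features

def sae_loss(recon, features, x, l1_coeff: float = 1e-3):
    # Reconstruction error plus an L1 penalty that encourages feature sparsity.
    mse = F.mse_loss(recon, x)
    sparsity = features.abs().sum(dim=-1).mean()
    return mse + l1_coeff * sparsity

# Usage on a batch of captured hidden states (random stand-in data here).
sae = SparseAutoencoder(d_model=2048, d_hidden=16384)
opt = torch.optim.Adam(sae.parameters(), lr=1e-4)
acts = torch.randn(64, 2048)              # placeholder for real activations
recon, feats = sae(acts)
loss = sae_loss(recon, feats, acts)
loss.backward()
opt.step()
```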
Alternatives and similar repositories for llama3_interpretability_sae
Users interested in llama3_interpretability_sae are comparing it to the libraries listed below.
- LLM Analytics ☆698 · Updated last year
- Visualize the intermediate output of Mistral 7B ☆381 · Updated 11 months ago
- A library for making RepE control vectors ☆672 · Updated 3 months ago
- Bayesian Optimization as a Coverage Tool for Evaluating LLMs. Accurate evaluation (benchmarking) that's 10 times faster with just a few l… ☆287 · Updated 3 months ago
- Felafax is building AI infra for non-NVIDIA GPUs ☆568 · Updated 11 months ago
- Open weights language model from Google DeepMind, based on Griffin. ☆656 · Updated 6 months ago
- Code behind Arxiv Papers ☆539 · Updated last year
- Official codebase for the paper "Beyond A*: Better Planning with Transformers via Search Dynamics Bootstrapping". ☆375 · Updated last year
- Visualizing the internal board state of a GPT trained on chess PGN strings, and performing interventions on its internal board state and … ☆219 · Updated last year
- A multi-player tournament benchmark that tests LLMs in social reasoning, strategy, and deception. Players engage in public and private co… ☆295 · Updated 4 months ago
- An interactive HTML pretty-printer for machine learning research in IPython notebooks. ☆457 · Updated 4 months ago
- Absolute minimalistic implementation of a GPT-like transformer using only numpy (<650 lines). ☆254 · Updated 2 years ago
- ☆747 · Updated last year
- A scientific instrument for investigating latent spaces ☆745 · Updated last month
- A pure NumPy implementation of Mamba. ☆222 · Updated last year
- Easily train AlphaZero-like agents on any environment you want! ☆433 · Updated last year
- An implementation of bucketMul LLM inference ☆222 · Updated last year
- Implement llama3 inference step by step: grasp the core concepts, master the process derivation, and write the code. ☆613 · Updated 10 months ago
- Dead Simple LLM Abliteration ☆244 · Updated 10 months ago
- ☆248 · Updated 9 months ago
- A curated list of data for reasoning AI ☆140 · Updated last year
- ☆249 · Updated last year
- Stop messing around with finicky sampling parameters and just use DRµGS! ☆359 · Updated last year
- ☆461 · Updated last month
- See Through Your Models ☆401 · Updated 5 months ago
- Diffusion on syntax trees for program synthesis ☆478 · Updated last year
- A BERT that you can train on a (gaming) laptop. ☆210 · Updated 2 years ago
- Finetune llama2-70b and codellama on MacBook Air without quantization ☆450 · Updated last year
- Things you can do with the token embeddings of an LLM ☆1,450 · Updated 3 weeks ago
- Code to train and evaluate Neural Attention Memory Models to obtain universally-applicable memory systems for transformers. ☆343 · Updated last year