adamkarvonen / chess_llm_interpretability
Visualizing the internal board state of a GPT trained on chess PGN strings, and performing interventions on its internal board state and representation of player Elo.
☆213 · Updated 10 months ago
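The repository's description points at the usual probing recipe from Othello-GPT-style interpretability work: train a linear probe on the model's residual-stream activations to read the board state back out. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch, not the repository's actual code; the hidden size, names, and training loop are all assumptions.

```python
# Illustrative sketch only (not this repository's API): a linear probe that
# decodes per-square piece identity from a chess GPT's residual-stream
# activations. All dimensions and names below are hypothetical placeholders.
import torch
import torch.nn as nn

d_model = 512          # hidden size of the chess GPT (assumed)
n_squares = 64         # one classification head per board square
n_piece_classes = 13   # 6 white pieces, 6 black pieces, empty

# The probe is a single linear map from a residual-stream vector
# to logits over piece classes for every square.
probe = nn.Linear(d_model, n_squares * n_piece_classes)
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def probe_step(activations: torch.Tensor, board_labels: torch.Tensor) -> float:
    """One probe training step.

    activations:  (batch, d_model) residual-stream vectors at some layer,
                  taken at move-delimiting positions of the PGN string.
    board_labels: (batch, n_squares) integer piece class per square,
                  obtained by replaying the PGN up to that position.
    """
    logits = probe(activations).view(-1, n_squares, n_piece_classes)
    loss = loss_fn(logits.permute(0, 2, 1), board_labels)  # (N, C, squares)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy tensors just to show the shapes; real data comes from the model.
acts = torch.randn(8, d_model)
labels = torch.randint(0, n_piece_classes, (8, n_squares))
print(probe_step(acts, labels))
```

The same probe direction can then be used for interventions: adding or subtracting it from the residual stream and observing how the model's move predictions change.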
Alternatives and similar repositories for chess_llm_interpretability
Users interested in chess_llm_interpretability are comparing it to the repositories listed below.
- Visualize the intermediate output of Mistral 7B ☆371 · Updated 8 months ago
- Teaching transformers to play chess ☆139 · Updated 8 months ago
- A repo to evaluate various LLMs' chess-playing abilities. ☆83 · Updated last year
- A small code base for training large models ☆310 · Updated 5 months ago
- ☆166 · Updated 2 months ago
- Mistral 7B playing DOOM ☆137 · Updated last year
- A complete end-to-end pipeline for LLM interpretability with sparse autoencoders (SAEs) using Llama 3.2, written in pure PyTorch and full… ☆625 · Updated 6 months ago
- Grandmaster-Level Chess Without Search ☆589 · Updated 8 months ago
- A repository for training nanogpt-based chess-playing language models. ☆25 · Updated last year
- A pure NumPy implementation of Mamba. ☆223 · Updated last year
- MiniHF is an inference, human preference data collection, and fine-tuning tool for local language models. It is intended to help the user… ☆181 · Updated 2 months ago
- Simple Transformer in Jax ☆139 · Updated last year
- Minimalistic, extremely fast, and hackable researcher's toolbench for GPT models in 307 lines of code. Reaches <3.8 validation loss on wi… ☆349 · Updated last year
- Alice in Wonderland code base for experiments and raw experiment data ☆131 · Updated 2 weeks ago
- Open weights language model from Google DeepMind, based on Griffin. ☆652 · Updated 3 months ago
- ☆254 · Updated 2 years ago
- Bayesian Optimization as a Coverage Tool for Evaluating LLMs. Accurate evaluation (benchmarking) that's 10 times faster with just a few l… ☆285 · Updated 2 weeks ago
- Stop messing around with finicky sampling parameters and just use DRµGS! ☆358 · Updated last year
- Official codebase for the paper "Beyond A*: Better Planning with Transformers via Search Dynamics Bootstrapping". ☆373 · Updated last year
- History files recorded from human interactions while solving ARC tasks ☆116 · Updated this week
- ☆144 · Updated 2 years ago
- ☆248 · Updated last year
- PyTorch script hot swap: change code without unloading your LLM from VRAM ☆125 · Updated 5 months ago
- Autograd to GPT-2 completely from scratch ☆125 · Updated last month
- LLM verified with Monte Carlo Tree Search ☆281 · Updated 6 months ago
- A curated list of data for reasoning AI ☆137 · Updated last year
- Full finetuning of large language models without large memory requirements ☆94 · Updated last week
- Reasoning Computers. Lambda Calculus, Fully Differentiable. Also Neural Stacks, Queues, Arrays, Lists, Trees, and Latches. ☆274 · Updated 11 months ago
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆232 · Updated 11 months ago
- Benchmark LLM reasoning capability by solving chess puzzles. ☆87 · Updated 5 months ago