adamkarvonen / chess_llm_interpretability
Visualizing the internal board state of a GPT trained on chess PGN strings, and performing interventions on its internal board state and representation of player Elo.
☆202 · Updated 4 months ago
Alternatives and similar repositories for chess_llm_interpretability:
Users interested in chess_llm_interpretability are comparing it to the repositories listed below.
- A repo to evaluate various LLMs' chess-playing abilities. ☆79 · Updated 11 months ago
- Mistral7B playing DOOM ☆130 · Updated 8 months ago
- ☆124 · Updated last week
- Grandmaster-Level Chess Without Search ☆560 · Updated 2 months ago
- LLM verified with Monte Carlo Tree Search ☆270 · Updated last month
- An implementation of bucketMul LLM inference ☆215 · Updated 8 months ago
- A repository for training nanogpt-based chess-playing language models. ☆23 · Updated 10 months ago
- run paligemma in real time ☆131 · Updated 10 months ago
- ☆143 · Updated last year
- A curated list of data for reasoning AI ☆132 · Updated 7 months ago
- Teaching transformers to play chess ☆119 · Updated last month
- a small code base for training large models ☆288 · Updated 3 months ago
- A complete end-to-end pipeline for LLM interpretability with sparse autoencoders (SAEs) using Llama 3.2, written in pure PyTorch and full… ☆601 · Updated 3 months ago
- The history files from recording human interaction while solving ARC tasks ☆97 · Updated this week
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆137 · Updated last month
- Cost-aware hyperparameter tuning algorithm ☆148 · Updated 8 months ago
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al (NeurIPS 2024) ☆186 · Updated 9 months ago
- Comprehensive analysis of the difference in performance of QLoRA, LoRA, and full finetunes. ☆82 · Updated last year
- MiniHF is an inference, human preference data collection, and fine-tuning tool for local language models. It is intended to help the user… ☆168 · Updated this week
- Simple Transformer in Jax ☆136 · Updated 9 months ago
- Full finetuning of large language models without large memory requirements ☆93 · Updated last year
- Long context evaluation for large language models ☆202 · Updated 2 weeks ago
- Visualize the intermediate output of Mistral 7B ☆344 · Updated 2 months ago
- Our solution for the ARC challenge 2024 ☆110 · Updated 3 weeks ago
- Fast bare-bones BPE for modern tokenizer training ☆149 · Updated 5 months ago
- Minimal (400 LOC) implementation of maximum (multi-node, FSDP) GPT training ☆122 · Updated 11 months ago
- Autograd to GPT-2, completely from scratch ☆112 · Updated 2 weeks ago
- Draw more samples ☆186 · Updated 9 months ago
- Implement recursion using English as the programming language and an LLM as the runtime. ☆137 · Updated last year
- ☆124 · Updated this week