drubinstein / pokemonred_puffer
☆174 Updated 4 months ago
Alternatives and similar repositories for pokemonred_puffer
Users interested in pokemonred_puffer are comparing it to the libraries listed below.
- Visualizing the internal board state of a GPT trained on chess PGN strings, and performing interventions on its internal board state and … ☆217 Updated last year
- Grandmaster-Level Chess Without Search ☆593 Updated 10 months ago
- Official codebase for the paper "Beyond A*: Better Planning with Transformers via Search Dynamics Bootstrapping". ☆375 Updated last year
- ☆237 Updated 8 months ago
- A multi-player tournament benchmark that tests LLMs in social reasoning, strategy, and deception. Players engage in public and private co… ☆293 Updated 3 months ago
- A character-level language diffusion model trained on Tiny Shakespeare ☆330 Updated this week
- PyTorch script hot swap: Change code without unloading your LLM from VRAM ☆124 Updated 6 months ago
- Code for the Fractured Entangled Representation Hypothesis position paper! ☆204 Updated last week
- History files recorded from human interaction while solving ARC tasks ☆118 Updated 2 weeks ago
- Diffusion on syntax trees for program synthesis ☆475 Updated last year
- A tiny autograd engine with a Jax-like API ☆74 Updated 4 months ago
- A complete end-to-end pipeline for LLM interpretability with sparse autoencoders (SAEs) using Llama 3.2, written in pure PyTorch and full… ☆624 Updated 7 months ago
- This repository contains a simple llama3 implementation in pure JAX. ☆70 Updated 9 months ago
- An interactive HTML pretty-printer for machine learning research in IPython notebooks. ☆451 Updated 3 months ago
- ☆200 Updated 3 months ago
- A repo to evaluate various LLMs' chess-playing abilities. ☆83 Updated last year
- Cost-aware hyperparameter tuning algorithm ☆173 Updated last year
- Mistral7B playing DOOM ☆138 Updated last year
- ☆54 Updated 4 months ago
- ☆166 Updated 7 months ago
- Teaching transformers to play chess ☆141 Updated 9 months ago
- ☆162 Updated 7 months ago
- Code to train and evaluate Neural Attention Memory Models to obtain universally-applicable memory systems for transformers. ☆327 Updated last year
- Easily train AlphaZero-like agents on any environment you want! ☆431 Updated last year
- LLM verified with Monte Carlo Tree Search ☆282 Updated 7 months ago
- ☆248 Updated last year
- Simple Transformer in Jax ☆139 Updated last year
- LeanRL is a fork of CleanRL, where selected PyTorch scripts are optimized for performance using compile and cudagraphs. ☆648 Updated 2 months ago
- Bayesian Optimization as a Coverage Tool for Evaluating LLMs. Accurate evaluation (benchmarking) that's 10 times faster with just a few l… ☆286 Updated 2 months ago
- ☆41 Updated this week