revalo / tree-diffusion
Diffusion on syntax trees for program synthesis
☆478 · Updated last year
Alternatives and similar repositories for tree-diffusion
Users interested in tree-diffusion are comparing it to the libraries listed below.
- LLM verified with Monte Carlo Tree Search ☆284 · Updated 8 months ago
- Reasoning Computers. Lambda Calculus, Fully Differentiable. Also Neural Stacks, Queues, Arrays, Lists, Trees, and Latches. ☆284 · Updated last year
- Code for the Fractured Entangled Representation Hypothesis position paper! ☆217 · Updated last month
- Official codebase for the paper "Beyond A*: Better Planning with Transformers via Search Dynamics Bootstrapping". ☆375 · Updated last year
- Visualizing the internal board state of a GPT trained on chess PGN strings, and performing interventions on its internal board state and … ☆218 · Updated last year
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆198 · Updated last year
- Visualize the intermediate output of Mistral 7B ☆381 · Updated 11 months ago
- ☆211 · Updated 4 months ago
- A character-level language diffusion model trained on Tiny Shakespeare ☆615 · Updated last month
- ☆177 · Updated 3 weeks ago
- Domain Specific Language for the Abstraction and Reasoning Corpus ☆315 · Updated last year
- A compositional diagramming and animation library as an eDSL in Python ☆217 · Updated last year
- ☆80 · Updated 6 months ago
- Reverse Engineering the Abstraction and Reasoning Corpus ☆327 · Updated 10 months ago
- ☆249 · Updated last year
- An interactive exploration of Transformer programming. ☆270 · Updated 2 years ago
- History files recorded from human interactions while solving ARC tasks ☆118 · Updated this week
- PyTorch script hot swap: Change code without unloading your LLM from VRAM ☆125 · Updated 8 months ago
- Minimalistic, extremely fast, and hackable researcher's toolbench for GPT models in 307 lines of code. Reaches <3.8 validation loss on wi… ☆355 · Updated last year
- Code to train and evaluate Neural Attention Memory Models to obtain universally-applicable memory systems for transformers. ☆343 · Updated last year
- Gradient descent is cool and all, but what if we could delete it? ☆104 · Updated 4 months ago
- Losslessly encode text natively with arithmetic coding and HuggingFace Transformers ☆76 · Updated last month
- The boundary of neural network trainability is fractal ☆221 · Updated last year
- ☆163 · Updated 3 weeks ago
- Our solution for the arc challenge 2024 ☆186 · Updated 6 months ago
- Stop messing around with finicky sampling parameters and just use DRµGS! ☆359 · Updated last year
- ☆550 · Updated last year
- An interactive HTML pretty-printer for machine learning research in IPython notebooks. ☆457 · Updated 4 months ago
- A complete end-to-end pipeline for LLM interpretability with sparse autoencoders (SAEs) using Llama 3.2, written in pure PyTorch and full… ☆628 · Updated 9 months ago
- Draw more samples ☆198 · Updated last year