ricsonc / transformers-play-chess
A writeup of some experiments with a sequence model for chess games
☆31 · Updated 4 years ago
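As a rough illustration of the framing in that writeup (treating a chess game as a plain token sequence and predicting the next move), here is a minimal sketch in PyTorch. It is not the author's code: the SAN move vocabulary, toy games, and model size are all illustrative assumptions.

```python
# Minimal sketch (not the original repo's code): a chess game as a token
# sequence, with a tiny causal Transformer predicting the next move.
import torch
import torch.nn as nn

games = [
    "e4 e5 Nf3 Nc6 Bb5 a6".split(),   # toy data; a real run would use many PGN games
    "d4 d5 c4 e6 Nc3 Nf6".split(),
]
vocab = {m: i + 1 for i, m in enumerate(sorted({m for g in games for m in g}))}
vocab["<pad>"] = 0

def encode(game):
    return torch.tensor([vocab[m] for m in game])

class MoveModel(nn.Module):
    def __init__(self, n_tokens, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Embedding(n_tokens, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_tokens)

    def forward(self, x):
        # causal mask: each position attends only to earlier moves
        mask = nn.Transformer.generate_square_subsequent_mask(x.size(1))
        return self.head(self.encoder(self.embed(x), mask=mask))

model = MoveModel(len(vocab))
batch = torch.stack([encode(g) for g in games])   # (batch, seq)
logits = model(batch[:, :-1])                     # predict move t+1 from moves <= t
loss = nn.functional.cross_entropy(
    logits.reshape(-1, len(vocab)), batch[:, 1:].reshape(-1)
)
loss.backward()
print(float(loss))
```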
Alternatives and similar repositories for transformers-play-chess
Users interested in transformers-play-chess are comparing it to the libraries listed below.
- A dataset of alignment research and code to reproduce it ☆77 · Updated 2 years ago
- ☆40 · Updated 2 years ago
- A formalisation of Cartesian Frames, a perspective on embedded agency, in the HOL theorem prover. ☆19 · Updated 3 years ago
- One stop shop for all things carp ☆59 · Updated 3 years ago
- Language-annotated Abstraction and Reasoning Corpus ☆93 · Updated 2 years ago
- ☆71 · Updated last year
- An interactive exploration of Transformer programming. ☆269 · Updated last year
- An environment for learning formal mathematical reasoning from scratch ☆72 · Updated last year
- Grounding LLM mathematical reasoning with proof assistants. ☆64 · Updated 2 years ago
- An interpreter for RASP as described in the ICML 2021 paper "Thinking Like Transformers" ☆320 · Updated last year
- PyTorch implementation of OpenAI's Procgen PPO baseline, built from scratch. ☆14 · Updated last year
- Test prompts for GPT-J-6B and the resulting AI-generated texts ☆53 · Updated 4 years ago
- ☆18 · Updated last year
- A programming language for formal/informal computation. ☆41 · Updated last month
- [ICML 2023] "Outline, Then Details: Syntactically Guided Coarse-To-Fine Code Generation", Wenqing Zheng, S P Sharan, Ajay Kumar Jaiswal, … ☆41 · Updated last year
- The Abstraction and Reasoning Corpus made into a web game ☆90 · Updated last year
- Swarm training framework using Haiku + JAX + Ray for layer-parallel transformer language models on unreliable, heterogeneous nodes ☆241 · Updated 2 years ago
- ☆43 · Updated 2 years ago
- Inference code for LLaMA models in JAX ☆120 · Updated last year
- ☆61 · Updated 3 years ago
- An implementation of the RASP transformer programming language, https://arxiv.org/pdf/2106.06981.pdf. ☆58 · Updated 4 years ago
- ☆64 · Updated 2 years ago
- Drive a browser with Cohere ☆72 · Updated 2 years ago
- Probabilistic LLM evaluations. [CogSci2023; ACL2023] ☆73 · Updated last year
- Large-scale 4D-parallel pre-training for 🤗 transformers with Mixture of Experts *(still a work in progress)* ☆87 · Updated last year
- 🦠 AD in less than 20 lines ☆54 · Updated 4 years ago
- RWKV-v2-RNN trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details. ☆67 · Updated 3 years ago
- URL downloader supporting checkpointing and continuous checksumming. ☆19 · Updated last year
- Python Research Framework ☆106 · Updated 2 years ago
- ☆131 · Updated 3 years ago