gkamradt / SnakeBench
☆88 · Updated last month
Alternatives and similar repositories for SnakeBench
Users that are interested in SnakeBench are comparing it to the libraries listed below
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆173 · Updated 6 months ago
- Training an LLM to use a calculator with multi-turn reinforcement learning, achieving a **62% absolute increase in evaluation accuracy**. ☆45 · Updated 3 months ago
- Repository for the paper Stream of Search: Learning to Search in Language ☆149 · Updated 6 months ago
- A Qwen 0.5B reasoning model trained on OpenR1-Math-220k ☆14 · Updated 5 months ago
- The official implementation for the paper "Agentic-R1: Distilled Dual-Strategy Reasoning" ☆84 · Updated 2 weeks ago
- Train your own SOTA deductive reasoning model ☆103 · Updated 4 months ago
- Lego for GRPO ☆28 · Updated 2 months ago
- ☆164 · Updated this week
- Implementation of Mind Evolution, Evolving Deeper LLM Thinking, from DeepMind ☆56 · Updated 2 months ago
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆103 · Updated 4 months ago
- A Collection of Competitive Text-Based Games for Language Model Evaluation and Reinforcement Learning ☆225 · Updated this week
- Code for ExploreTom ☆84 · Updated last month
- Accompanying material for the sleep-time compute paper ☆99 · Updated 3 months ago
- ☆118 · Updated 5 months ago
- ☆144 · Updated 8 months ago
- [ACL 2025] Agentic Reward Modeling: Integrating Human Preferences with Verifiable Correctness Signals for Reliable Reward Systems ☆99 · Updated last month
- Official repo for Learning to Reason for Long-Form Story Generation ☆68 · Updated 3 months ago
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆55 · Updated 6 months ago
- ☆143 · Updated last year
- EvaByte: Efficient Byte-level Language Models at Scale ☆103 · Updated 3 months ago
- ☆54 · Updated last month
- ☆212 · Updated 5 months ago
- ☆81 · Updated last month
- Lightweight toolkit package to train and fine-tune 1.58bit Language models ☆82 · Updated 2 months ago
- ☆130 · Updated 4 months ago
- ☆41 · Updated last year
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… ☆344 · Updated 7 months ago
- ☆11 · Updated last year
- RL from zero pretrain, can it be done? We'll see. ☆66 · Updated 2 weeks ago
- Simple repository for training small reasoning models ☆32 · Updated 5 months ago