axolotl-ai-cloud / grpo_code
A fast, local, and secure approach for training LLMs for coding tasks using GRPO with WebAssembly and interpreter feedback.
☆38 · Updated 4 months ago
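The core idea described above is to reward model-generated code by actually running it. As a rough illustration only (not grpo_code's actual implementation, which sandboxes execution in WebAssembly for safety), a minimal execution-based reward function might look like the following; the function name `execution_reward` and the test-case format are hypothetical:

```python
# Illustrative sketch of an execution-based reward for GRPO-style training.
# NOTE: grpo_code sandboxes execution in WebAssembly; this simplified
# sketch uses plain exec()/eval() and is NOT safe for untrusted code.

def execution_reward(code: str, test_cases: list[tuple[str, object]]) -> float:
    """Run model-generated code and score it by the fraction of
    (expression, expected_value) test cases it satisfies."""
    namespace: dict = {}
    try:
        exec(code, namespace)  # define the candidate function(s)
    except Exception:
        return 0.0  # code that fails to even run earns no reward
    passed = 0
    for expr, expected in test_cases:
        try:
            if eval(expr, namespace) == expected:
                passed += 1
        except Exception:
            pass  # a failing or crashing test contributes nothing
    return passed / len(test_cases)

# Example: score a candidate implementation of `add`
candidate = "def add(a, b):\n    return a + b"
print(execution_reward(candidate, [("add(1, 2)", 3), ("add(-1, 1)", 0)]))  # → 1.0
```

A scalar reward of this shape (fraction of tests passed) is the kind of signal a GRPO trainer can consume directly when ranking sampled completions within a group.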
Alternatives and similar repositories for grpo_code
Users interested in grpo_code are comparing it to the repositories listed below.
- Train your own SOTA deductive reasoning model ☆104 · Updated 5 months ago
- Simple GRPO scripts and configurations. ☆59 · Updated 6 months ago
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆55 · Updated 6 months ago
- Verifiers for LLM Reinforcement Learning ☆71 · Updated 4 months ago
- entropix-style sampling + GUI ☆27 · Updated 10 months ago
- ☆54 · Updated 9 months ago
- Simple examples using Argilla tools to build AI ☆54 · Updated 9 months ago
- ☆66 · Updated 3 months ago
- Entropy-Based Sampling and Parallel CoT Decoding ☆17 · Updated 10 months ago
- ☆133 · Updated 5 months ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆173 · Updated 7 months ago
- Official homepage for "Self-Harmonized Chain of Thought" (NAACL 2025) ☆92 · Updated 7 months ago
- Training an LLM to use a calculator with multi-turn reinforcement learning, achieving a **62% absolute increase in evaluation accuracy**. ☆46 · Updated 3 months ago
- ☆56 · Updated 2 months ago
- QAlign is a new test-time alignment approach that improves language model performance by using Markov chain Monte Carlo methods. ☆23 · Updated last week
- Simple repository for training small reasoning models ☆37 · Updated 6 months ago
- [ACL 2024] Do Large Language Models Latently Perform Multi-Hop Reasoning? ☆74 · Updated 5 months ago
- Source code for the collaborative reasoner research project at Meta FAIR. ☆103 · Updated 4 months ago
- Official repo for Learning to Reason for Long-Form Story Generation ☆68 · Updated 4 months ago
- II-Thought-RL is our initial attempt at developing a large-scale, multi-domain Reinforcement Learning (RL) dataset ☆27 · Updated 4 months ago
- ☆40 · Updated 8 months ago
- ☆88 · Updated last year
- A Qwen 0.5B reasoning model trained on OpenR1-Math-220k ☆14 · Updated 6 months ago
- Accompanying material for the sleep-time compute paper ☆107 · Updated 4 months ago
- Large Language Model (LLM) powered evaluator for Retrieval Augmented Generation (RAG) pipelines. ☆30 · Updated last year
- An LLM reads a paper and produces a working prototype ☆57 · Updated 4 months ago
- Just a bunch of benchmark logs for different LLMs ☆120 · Updated last year
- The official implementation for the paper "Agentic-R1: Distilled Dual-Strategy Reasoning" ☆94 · Updated this week
- Score LLM pretraining data with classifiers ☆55 · Updated last year
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models. ☆96 · Updated last month