stanford-cs336 / assignment5-alignment
☆71 · Updated 3 months ago
Alternatives and similar repositories for assignment5-alignment
Users interested in assignment5-alignment are comparing it to the repositories listed below.
- An extension of the nanoGPT repository for training small MoE models. ☆210 · Updated 8 months ago
- Student version of Assignment 2 for Stanford CS336 - Language Modeling From Scratch ☆111 · Updated 3 months ago
- nanoGRPO is a lightweight implementation of Group Relative Policy Optimization (GRPO). ☆125 · Updated 6 months ago
- Minimal GRPO implementation from scratch ☆99 · Updated 8 months ago
- ☆94 · Updated 5 months ago
- RL from zero pretrain: can it be done? Yes. ☆280 · Updated last month
- Physics of Language Models, Part 4 ☆255 · Updated 3 months ago
- A simplified implementation for experimenting with RLVR on GSM8K. This repository provides a starting point for exploring reasoning. ☆144 · Updated 9 months ago
- A scalable asynchronous reinforcement learning implementation with in-flight weight updates. ☆301 · Updated this week
- Tina: Tiny Reasoning Models via LoRA ☆304 · Updated last month
- Open-source interpretability artefacts for R1. ☆163 · Updated 6 months ago
- Code for the NeurIPS'24 paper "Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization" ☆233 · Updated 3 months ago
- [Preprint] RLVE: Scaling Up Reinforcement Learning for Language Models with Adaptive Verifiable Environments ☆88 · Updated this week
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆189 · Updated 8 months ago
- SPIRAL: Self-Play on Zero-Sum Games Incentivizes Reasoning via Multi-Agent Multi-Turn Reinforcement Learning ☆161 · Updated 2 months ago
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… ☆111 · Updated last month
- AIRA-dojo: a framework for developing and evaluating AI research agents ☆110 · Updated last month
- Notes and commented code for RLHF (PPO) ☆114 · Updated last year
- ☆451 · Updated 2 months ago
- A brief and partial summary of RLHF algorithms. ☆136 · Updated 8 months ago
- [NeurIPS 2025] Reinforcement Learning for Reasoning in Large Language Models with One Training Example ☆375 · Updated last month
- Evaluation of LLMs on the latest math competitions ☆178 · Updated 3 weeks ago
- Code for the paper "Learning to Reason without External Rewards" ☆373 · Updated 4 months ago
- Compiling useful links, papers, benchmarks, ideas, etc. ☆45 · Updated 8 months ago
- ☆106 · Updated 3 weeks ago
- ☆225 · Updated 3 weeks ago
- Meta Agents Research Environments is a comprehensive platform designed to evaluate AI agents in dynamic, realistic scenarios. Unlike stat… ☆348 · Updated last week
- Minimal hackable GRPO implementation ☆300 · Updated 9 months ago
- [COLM 2025] Code for paper "Learning Adaptive Parallel Reasoning with Language Models" ☆132 · Updated 3 months ago
- ☆108 · Updated last year
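Several entries above (nanoGRPO and the minimal/hackable GRPO implementations) center on Group Relative Policy Optimization. The core idea GRPO shares across these repos is replacing a learned value baseline with a group-relative advantage: sample several completions per prompt, score each with a reward (e.g. a verifier), and normalize each reward against the group's mean and standard deviation. A minimal sketch of that advantage computation, with function and variable names that are illustrative rather than taken from any listed repository:

```python
from statistics import mean, pstdev

def group_relative_advantages(rewards, eps=1e-6):
    """GRPO-style advantages: normalize each completion's reward
    by the mean and std of its sampling group."""
    mu = mean(rewards)
    sigma = pstdev(rewards)  # population std over the group
    return [(r - mu) / (sigma + eps) for r in rewards]

# One prompt, four sampled completions scored by a 0/1 verifier:
advs = group_relative_advantages([1.0, 0.0, 0.0, 1.0])
```

Correct completions get positive advantages and incorrect ones negative, with the group itself serving as the baseline, which is why these implementations need no separate critic network.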