stanford-cs336 / assignment5-alignment
☆105 · Updated 6 months ago
Alternatives and similar repositories for assignment5-alignment
Users interested in assignment5-alignment are comparing it to the repositories listed below.
- Ideas for projects related to Tinker · ☆164 · Updated 3 months ago
- Student version of Assignment 2 for Stanford CS336 - Language Modeling From Scratch · ☆164 · Updated 6 months ago
- Physics of Language Models: Part 4.2, Canon Layers at Scale where Synthetic Pretraining Resonates in Reality · ☆317 · Updated last month
- ☆394 · Updated last week
- [Preprint] RLVE: Scaling Up Reinforcement Learning for Language Models with Adaptive Verifiable Environments · ☆177 · Updated 3 weeks ago
- ☆113 · Updated 7 months ago
- An extension of the nanoGPT repository for training small MoE models · ☆236 · Updated 11 months ago
- Minimal GRPO implementation from scratch · ☆102 · Updated 10 months ago
- ☆466 · Updated 5 months ago
- Minimal hackable GRPO implementation · ☆323 · Updated last year
- [NeurIPS 2025] Reinforcement Learning for Reasoning in Large Language Models with One Training Example · ☆405 · Updated 2 months ago
- [ICLR 2026] Learning to Reason without External Rewards · ☆389 · Updated 2 weeks ago
- A scalable asynchronous reinforcement learning implementation with in-flight weight updates · ☆361 · Updated this week
- Research code for the preprint "Optimizing Test-Time Compute via Meta Reinforcement Finetuning" · ☆116 · Updated 6 months ago
- ☆388 · Updated 3 months ago
- ☆413 · Updated last year
- RL from zero pretrain, can it be done? Yes. · ☆286 · Updated 4 months ago
- FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference · ☆334 · Updated 3 months ago
- nanoGRPO is a lightweight implementation of Group Relative Policy Optimization (GRPO) · ☆143 · Updated 9 months ago
- [ICLR 2026] Tina: Tiny Reasoning Models via LoRA · ☆319 · Updated 4 months ago
- A Gym for Agentic LLMs · ☆444 · Updated 3 weeks ago
- Official repo for the paper "Reinforcement Learning for Reasoning in Small LLMs: What Works and What Doesn't" · ☆273 · Updated 3 months ago
- SPIRAL: Self-Play on Zero-Sum Games Incentivizes Reasoning via Multi-Agent Multi-Turn Reinforcement Learning · ☆175 · Updated 4 months ago
- Open-source interpretability artefacts for R1 · ☆170 · Updated 9 months ago
- Notes and commented code for RLHF (PPO) · ☆124 · Updated last year
- A brief and partial summary of RLHF algorithms · ☆144 · Updated 11 months ago
- A Collection of Competitive Text-Based Games for Language Model Evaluation and Reinforcement Learning · ☆350 · Updated last week
- ☆135 · Updated 2 weeks ago
- ☆232 · Updated 2 months ago
- A Framework for LLM-based Multi-Agent Reinforced Training and Inference · ☆418 · Updated 2 months ago