PrimeIntellect-ai / verifiers
Our library for RL environments + evals
☆3,791 · Updated this week
Alternatives and similar repositories for verifiers
Users interested in verifiers are comparing it to the libraries listed below.
- Post-training with Tinker ☆2,770 · Updated this week
- [NeurIPS 2025 Spotlight] Reasoning Environments for Reinforcement Learning with Verifiable Rewards ☆1,326 · Updated 2 weeks ago
- Textbook on reinforcement learning from human feedback ☆1,478 · Updated this week
- Async RL Training at Scale ☆1,034 · Updated this week
- Training Large Language Model to Reason in a Continuous Latent Space ☆1,491 · Updated 5 months ago
- Synthetic data curation for post-training and structured data extraction ☆1,618 · Updated last week
- AllenAI's post-training codebase ☆3,551 · Updated this week
- SkyRL: A Modular Full-stack RL Library for LLMs ☆1,518 · Updated this week
- NanoGPT (124M) in 2 minutes ☆4,515 · Updated this week
- Recipes to scale inference-time compute of open models ☆1,124 · Updated 8 months ago
- Implementing DeepSeek R1's GRPO algorithm from scratch ☆1,759 · Updated 9 months ago
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆2,279 · Updated last week
- Atropos is a Language Model Reinforcement Learning Environments framework for collecting and evaluating LLM trajectories through diverse … ☆843 · Updated last week
- RAGEN leverages reinforcement learning to train LLM reasoning agents in interactive, stochastic environments. ☆2,503 · Updated last week
- Minimalistic 4D-parallelism distributed training framework for education purposes ☆2,033 · Updated 5 months ago
- Democratizing Reinforcement Learning for LLMs ☆5,060 · Updated this week
- A bibliography and survey of the papers surrounding o1 ☆1,214 · Updated last year
- Renderer for the harmony response format to be used with gpt-oss ☆4,159 · Updated last month
- MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering ☆1,295 · Updated 2 weeks ago
- OpenAI Frontier Evals ☆990 · Updated last month
- Agent Reinforcement Trainer: train multi-step agents for real-world tasks using GRPO. Give your agents on-the-job training. Reinforcement… ☆8,335 · Updated last week
- Sky-T1: Train your own O1 preview model within $450 ☆3,370 · Updated 6 months ago
- Scalable RL solution for advanced reasoning of language models ☆1,803 · Updated 10 months ago
- Minimalistic large language model 3D-parallelism training ☆2,529 · Updated last month
- Search-R1: An Efficient, Scalable RL Training Framework for Reasoning & Search Engine Calling interleaved LLM based on veRL ☆3,889 · Updated 2 months ago
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆3,074 · Updated last week
- Sharing both practical insights and theoretical knowledge about LLM evaluation that we gathered while managing the Open LLM Leaderboard a… ☆2,044 · Updated last month
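
Several of the entries above center on reinforcement learning with verifiable rewards, where a programmatic check scores each completion instead of a learned reward model. The sketch below is a generic illustration of that idea only; it is not the verifiers API, and every name in it is hypothetical.

```python
import re


def extract_boxed_answer(completion: str) -> str | None:
    """Grab the last \\boxed{...} value from a model completion, if present."""
    matches = re.findall(r"\\boxed\{([^}]*)\}", completion)
    return matches[-1].strip() if matches else None


def verifiable_reward(completion: str, reference: str) -> float:
    """Binary reward: 1.0 when the extracted answer matches the reference exactly."""
    answer = extract_boxed_answer(completion)
    return 1.0 if answer is not None and answer == reference.strip() else 0.0


if __name__ == "__main__":
    # Hypothetical rollout from a policy model being trained with GRPO-style RL.
    rollout = "Adding the roots gives 5, so the answer is \\boxed{5}."
    print(verifiable_reward(rollout, "5"))  # 1.0
```

A reward of this shape is what GRPO-style trainers in several of the repositories above optimize against; real environments typically add answer normalization, format checks, and multi-turn or tool-use interaction on top of it.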