aw31 / openai-imo-2025-proofs
☆476 · Updated 2 months ago
Alternatives and similar repositories for openai-imo-2025-proofs
Users interested in openai-imo-2025-proofs are comparing it to the libraries listed below.
- Testing baseline LLM performance across various models ☆310 · Updated last month
- Async RL Training at Scale ☆650 · Updated this week
- ☆226 · Updated 3 months ago
- Open source interpretability artefacts for R1. ☆159 · Updated 5 months ago
- ☆471 · Updated 4 months ago
- ☆295 · Updated last week
- Evaluation of LLMs on the latest math competitions ☆165 · Updated last week
- Technical report of Kimina-Prover Preview. ☆327 · Updated 2 months ago
- Open-source framework for the research and development of foundation models. ☆452 · Updated this week
- ☆466 · Updated last year
- Public repository for "The Surprising Effectiveness of Test-Time Training for Abstract Reasoning" ☆328 · Updated 10 months ago
- ☆187 · Updated last month
- Atropos is a Language Model Reinforcement Learning Environments framework for collecting and evaluating LLM trajectories through diverse … ☆697 · Updated this week
- A Collection of Competitive Text-Based Games for Language Model Evaluation and Reinforcement Learning ☆280 · Updated this week
- Our solution for the ARC challenge 2024 ☆178 · Updated 3 months ago
- Training-Ready RL Environments + Evals ☆111 · Updated this week
- A toolkit for describing model features and intervening on those features to steer behavior. ☆202 · Updated 10 months ago
- Code for the paper "Training Software Engineering Agents and Verifiers with SWE-Gym" [ICML 2025] ☆544 · Updated last month
- MLGym: A New Framework and Benchmark for Advancing AI Research Agents ☆556 · Updated last month
- Open-source interpretability platform 🧠 ☆425 · Updated this week
- Long-context evaluation for large language models ☆222 · Updated 6 months ago
- ☆101 · Updated last week
- ☆430 · Updated 3 months ago
- [NeurIPS 2025 Spotlight] Reasoning Environments for Reinforcement Learning with Verifiable Rewards ☆1,140 · Updated last week
- Single File, Single GPU, From Scratch, Efficient, Full Parameter Tuning library for "RL for LLMs" ☆534 · Updated 2 months ago
- ☆224 · Updated 3 months ago
- ☆207 · Updated 5 months ago
- RL from zero pretrain: can it be done? Yes. ☆269 · Updated this week
- Releases from OpenAI Preparedness ☆860 · Updated 3 weeks ago
- ☆725 · Updated 2 weeks ago