aw31 / openai-imo-2025-proofs
☆470 · Updated 3 weeks ago
Alternatives and similar repositories for openai-imo-2025-proofs
Users interested in openai-imo-2025-proofs are comparing it to the repositories listed below.
- Testing baseline LLM performance across various models ☆297 · Updated last week
- ☆415 · Updated 2 months ago
- ☆215 · Updated last month
- Decentralized RL Training at Scale ☆416 · Updated this week
- Procedural reasoning datasets ☆1,030 · Updated last week
- Open-source interpretability artefacts for R1 ☆157 · Updated 3 months ago
- ☆259 · Updated 2 weeks ago
- ☆363 · Updated this week
- ☆174 · Updated 4 months ago
- Technical report of Kimina-Prover Preview ☆322 · Updated last month
- RL from zero pretrain — can it be done? Yes. ☆193 · Updated this week
- Public repository for "The Surprising Effectiveness of Test-Time Training for Abstract Reasoning" ☆324 · Updated 8 months ago
- ☆462 · Updated last year
- Code for the paper "Training Software Engineering Agents and Verifiers with SWE-Gym" [ICML 2025] ☆518 · Updated 2 weeks ago
- A collection of competitive text-based games for language model evaluation and reinforcement learning ☆238 · Updated last week
- Pretraining and inference code for a large-scale depth-recurrent language model ☆810 · Updated 3 weeks ago
- Our solution for the ARC challenge 2024 ☆168 · Updated last month
- Evaluation of LLMs on the latest math competitions ☆155 · Updated last week
- Single-file, single-GPU, from-scratch, efficient, full-parameter tuning library for "RL for LLMs" ☆515 · Updated last month
- MLGym: A New Framework and Benchmark for Advancing AI Research Agents ☆541 · Updated 3 weeks ago
- Atropos is a Language Model Reinforcement Learning Environments framework for collecting and evaluating LLM trajectories through diverse … ☆582 · Updated this week
- ☆98 · Updated last week
- ☆118 · Updated 7 months ago
- ☆399 · Updated last month
- Code to train and evaluate Neural Attention Memory Models to obtain universally applicable memory systems for transformers ☆318 · Updated 9 months ago
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… ☆343 · Updated 8 months ago
- Simple & scalable pretraining for neural architecture research ☆286 · Updated last week
- Releases from OpenAI Preparedness ☆837 · Updated this week
- Dion optimizer algorithm ☆259 · Updated last week
- Official PyTorch implementation for "Hogwild! Inference: Parallel LLM Generation with a Concurrent Attention Cache" ☆117 · Updated last month