openai / preparedness
Releases from OpenAI Preparedness
☆793 · Updated this week
Alternatives and similar repositories for preparedness
Users interested in preparedness are comparing it to the libraries listed below.
- Official codebase for "SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution" ☆566 · Updated 4 months ago
- MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering ☆800 · Updated 3 weeks ago
- MLGym: A New Framework and Benchmark for Advancing AI Research Agents ☆529 · Updated 3 weeks ago
- This repo contains the dataset and code for the paper "SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance Software E… ☆1,435 · Updated 2 months ago
- Training Large Language Models to Reason in a Continuous Latent Space ☆1,185 · Updated 5 months ago
- Code for the paper "Training Software Engineering Agents and Verifiers with SWE-Gym" [ICML 2025] ☆498 · Updated 2 months ago
- Pretraining code for a large-scale depth-recurrent language model ☆801 · Updated this week
- A self-adaptation framework 🐙 that adapts LLMs to unseen tasks in real time! ☆1,123 · Updated 5 months ago
- Procedural reasoning datasets ☆960 · Updated last week
- Dream 7B, a large diffusion language model ☆839 · Updated 3 weeks ago
- ☆585 · Updated 3 months ago
- ☆682 · Updated last month
- Self-Adapting Language Models ☆697 · Updated 3 weeks ago
- Verifiers for LLM Reinforcement Learning ☆1,495 · Updated this week
- An agent benchmark with tasks in a simulated software company ☆488 · Updated last week
- SkyRL: A Modular Full-stack RL Library for LLMs ☆603 · Updated this week
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… ☆341 · Updated 7 months ago
- ☆206 · Updated 3 weeks ago
- Atom of Thoughts for Markov LLM Test-Time Scaling ☆579 · Updated last month
- Scaling Data for SWE-agents ☆293 · Updated this week
- Repository for Zochi's Research ☆238 · Updated last week
- TTRL: Test-Time Reinforcement Learning ☆704 · Updated 3 weeks ago
- CodeScientist: An automated scientific discovery system for code-based experiments ☆275 · Updated 3 weeks ago
- AgentLab: An open-source framework for developing, testing, and benchmarking web agents on diverse tasks, designed for scalability and re… ☆361 · Updated this week
- Automatic evals for LLMs ☆467 · Updated 2 weeks ago
- ☆2,157 · Updated last week
- [COLM 2025] LIMO: Less is More for Reasoning ☆980 · Updated last week
- Training teachers with reinforcement learning to make LLMs learn how to reason for test-time scaling ☆295 · Updated 3 weeks ago
- Seed-Coder is a family of lightweight open-source code LLMs comprising base, instruct, and reasoning models, developed by ByteDance Seed ☆527 · Updated last month
- A MemAgent framework that can be extrapolated to 3.5M, along with a training framework for RL training of any agent workflow ☆385 · Updated last week