facebookresearch / swe-rl
Official codebase for "SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution"
☆591 · Updated 5 months ago
Alternatives and similar repositories for swe-rl
Users interested in swe-rl are comparing it to the repositories listed below.
- Code for Paper: Training Software Engineering Agents and Verifiers with SWE-Gym [ICML 2025] ☆526 · Updated 3 weeks ago
- [ICML 2025 Oral] CodeI/O: Condensing Reasoning Patterns via Code Input-Output Prediction ☆544 · Updated 3 months ago
- Scaling Data for SWE-agents ☆378 · Updated this week
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" ☆633 · Updated last month
- ☆621 · Updated last month
- SkyRL: A Modular Full-stack RL Library for LLMs ☆738 · Updated last week
- Automatic evals for LLMs ☆519 · Updated 2 months ago
- Building Open LLM Web Agents with Self-Evolving Online Curriculum RL ☆441 · Updated 2 months ago
- AWM: Agent Workflow Memory ☆303 · Updated 6 months ago
- A MemAgent framework that can be extrapolated to a 3.5M-token context, along with a training framework for RL training of any agent workflow. ☆605 · Updated 3 weeks ago
- MLGym: A New Framework and Benchmark for Advancing AI Research Agents ☆546 · Updated 2 weeks ago
- Benchmark and research code for the paper "SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks" ☆234 · Updated 3 months ago
- A benchmark for LLMs on complicated tasks in the terminal ☆569 · Updated this week
- ☆259 · Updated last month
- Official repo for the paper "Reinforcement Learning for Reasoning in Small LLMs: What Works and What Doesn't" ☆254 · Updated 3 months ago
- Code for the paper: "Learning to Reason without External Rewards" ☆347 · Updated last month
- Tina: Tiny Reasoning Models via LoRA ☆275 · Updated last week
- An agent benchmark with tasks in a simulated software company. ☆534 · Updated this week
- 🌾 OAT: A research-friendly framework for LLM online alignment, including reinforcement learning, preference learning, etc. ☆438 · Updated this week
- MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering ☆872 · Updated last week
- A project to improve the skills of large language models ☆529 · Updated this week
- ☆955 · Updated 7 months ago
- Parallel Scaling Law for Language Models — Beyond Parameter and Inference Time Scaling ☆432 · Updated 3 months ago
- 🌍 Repository for "AppWorld: A Controllable World of Apps and People for Benchmarking Interactive Coding Agents", ACL'24 Best Resource Paper ☆238 · Updated 2 weeks ago
- ☆297 · Updated this week
- Scaling RL on advanced reasoning models ☆566 · Updated 2 weeks ago
- [ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI ☆416 · Updated 4 months ago
- AgentLab: An open-source framework for developing, testing, and benchmarking web agents on diverse tasks, designed for scalability and reproducibility ☆386 · Updated last week
- Chain-of-Agents: End-to-End Agent Foundation Models via Multi-Agent Distillation and Agentic RL. ☆305 · Updated this week
- ☆784 · Updated 2 months ago