codezakh / DataEnvGym
A testbed for agents and environments that can automatically improve models through data generation.
☆27 · Updated 9 months ago
Alternatives and similar repositories for DataEnvGym
Users interested in DataEnvGym are comparing it to the repositories listed below.
- Official Code Repository for EnvGen: Generating and Adapting Environments via LLMs for Training Embodied Agents (COLM 2024) ☆39 · Updated last year
- ☆33 · Updated 11 months ago
- The official implementation of Self-Exploring Language Models (SELM) ☆63 · Updated last year
- Code for Paper: Autonomous Evaluation and Refinement of Digital Agents [COLM 2024] ☆147 · Updated last year
- Official Repo for InSTA: Towards Internet-Scale Training For Agents ☆55 · Updated 5 months ago
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ☆112 · Updated 4 months ago
- Q-Probe: A Lightweight Approach to Reward Maximization for Language Models ☆41 · Updated last year
- ☆51 · Updated 10 months ago
- Implementation of Dualformer ☆24 · Updated 9 months ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆48 · Updated last year
- ☆42 · Updated 5 months ago
- Natural Language Reinforcement Learning ☆100 · Updated 4 months ago
- Reinforcing General Reasoning without Verifiers ☆92 · Updated 5 months ago
- CodeUltraFeedback: aligning large language models to coding preferences (TOSEM 2025) ☆72 · Updated last year
- ☆28 · Updated 9 months ago
- Official implementation of Regularized Policy Gradient (RPG) (https://arxiv.org/abs/2505.17508) ☆54 · Updated last month
- ☆65 · Updated 9 months ago
- Multi-Agent Verification: Scaling Test-Time Compute with Multiple Verifiers ☆23 · Updated 9 months ago
- Code for "Reasoning to Learn from Latent Thoughts" ☆123 · Updated 8 months ago
- Defeating the Training-Inference Mismatch via FP16 ☆161 · Updated 3 weeks ago
- The code implementation of MAGDi: Structured Distillation of Multi-Agent Interaction Graphs Improves Reasoning in Smaller Language Models… ☆38 · Updated last year
- Learning from preferences is a common paradigm for fine-tuning language models. Yet, many algorithmic design decisions come into play. Ou… ☆32 · Updated last year
- Learning to Retrieve by Trying - Source code for Grounding by Trying: LLMs with Reinforcement Learning-Enhanced Retrieval ☆51 · Updated last year
- [NeurIPS 2025 Spotlight] Co-Evolving LLM Coder and Unit Tester via Reinforcement Learning ☆139 · Updated 2 months ago
- The repository contains code for Adaptive Data Optimization ☆29 · Updated last year
- B-STAR: Monitoring and Balancing Exploration and Exploitation in Self-Taught Reasoners ☆86 · Updated 6 months ago
- ☆52 · Updated 7 months ago
- Official PyTorch implementation and models for paper "Diffusion Beats Autoregressive in Data-Constrained Settings". We find diffusion mod… ☆113 · Updated last month
- ☆84 · Updated last month
- [NeurIPS'24 LanGame workshop] On The Planning Abilities of OpenAI's o1 Models: Feasibility, Optimality, and Generalizability ☆41 · Updated 5 months ago