zjunlp / WorldMind
Aligning Agentic World Models via Knowledgeable Experience Learning
☆23 · Updated last week
Alternatives and similar repositories for WorldMind
Users interested in WorldMind are comparing it to the repositories listed below.
- RAG-RewardBench: Benchmarking Reward Models in Retrieval Augmented Generation for Preference Alignment ☆16 · Updated last year
- ☆21 · Updated 8 months ago
- Source code for our paper: "ARIA: Training Language Agents with Intention-Driven Reward Aggregation". ☆25 · Updated 5 months ago
- From Accuracy to Robustness: A Study of Rule- and Model-based Verifiers in Mathematical Reasoning. ☆24 · Updated 3 months ago
- ☆59 · Updated 2 weeks ago
- The official code repository for the paper "Mirage or Method? How Model–Task Alignment Induces Divergent RL Conclusions". ☆15 · Updated 4 months ago
- [NeurIPS 2025] Think or Not? Selective Reasoning via Reinforcement Learning for Vision-Language Models ☆53 · Updated 4 months ago
- Official Repo for SvS: A Self-play with Variational Problem Synthesis strategy for RLVR training ☆54 · Updated last month
- ☆51 · Updated last year
- ☆43 · Updated last month
- ☆36 · Updated 3 months ago
- [ICML 2025] M-STAR (Multimodal Self-Evolving TrAining for Reasoning) Project. Diving into Self-Evolving Training for Multimodal Reasoning ☆70 · Updated 6 months ago
- ☆14 · Updated last year
- Resources and paper list for 'Scaling Environments for Agents'. This repository accompanies our survey on how environments contribute to … ☆57 · Updated this week
- Instruction-following benchmark for large reasoning models ☆44 · Updated 5 months ago
- ☆23 · Updated last year
- ☆72 · Updated 7 months ago
- [EMNLP 2025] LightThinker: Thinking Step-by-Step Compression ☆131 · Updated 9 months ago
- ☆43 · Updated 5 months ago
- ☆47 · Updated 3 months ago
- ☆19 · Updated 10 months ago
- ☆31 · Updated 4 months ago
- Code for "Language Models Can Learn from Verbal Feedback Without Scalar Rewards" ☆56 · Updated 3 weeks ago
- Official code for paper "SPA-RL: Reinforcing LLM Agent via Stepwise Progress Attribution" ☆62 · Updated 4 months ago
- A comprehensive collection on learning from rewards in the post-training and test-time scaling of LLMs, with a focus on both reward model… ☆60 · Updated 7 months ago
- Code for Evolving Language Models without Labels: Majority Drives Selection, Novelty Promotes Variation (EVOL-RL). ☆42 · Updated 3 months ago
- R1-Searcher++: Incentivizing the Dynamic Knowledge Acquisition of LLMs via Reinforcement Learning ☆71 · Updated 8 months ago
- [NeurIPS'25 Spotlight] ARM: Adaptive Reasoning Model ☆64 · Updated 3 months ago
- Optimizing Anytime Reasoning via Budget Relative Policy Optimization ☆51 · Updated 6 months ago
- Code for "CREAM: Consistency Regularized Self-Rewarding Language Models", ICLR 2025. ☆28 · Updated 11 months ago