MiroMindAI / MiroTrain
MiroTrain is an efficient and algorithm-first system for post-training large agentic models.
☆30 · Updated this week
Alternatives and similar repositories for MiroTrain
Users interested in MiroTrain are comparing it to the libraries listed below.
- The official code repository for the FullFront benchmark ☆18 · Updated 2 months ago
- ☆51 · Updated 2 months ago
- Optimizing Anytime Reasoning via Budget Relative Policy Optimization ☆43 · Updated 3 weeks ago
- G1: Bootstrapping Perception and Reasoning Abilities of Vision-Language Model via Reinforcement Learning ☆77 · Updated 2 months ago
- [Preprint] On the Generalization of SFT: A Reinforcement Learning Perspective with Reward Rectification ☆59 · Updated this week
- Official Repository of LatentSeek ☆56 · Updated 2 months ago
- Think or Not? Selective Reasoning via Reinforcement Learning for Vision-Language Models ☆40 · Updated 3 weeks ago
- ☆54 · Updated 2 months ago
- A repo for open research on building large reasoning models ☆87 · Updated this week
- ☆96 · Updated 3 months ago
- [arXiv 2505] Think Silently, Think Fast: Dynamic Latent Compression of LLM Reasoning Chains ☆38 · Updated last week
- [ICML 2025] M-STAR (Multimodal Self-Evolving TrAining for Reasoning) Project. Diving into Self-Evolving Training for Multimodal Reasoning ☆64 · Updated 3 weeks ago
- [ACL 2025] VisuoThink: Empowering LVLM Reasoning with Mul… ☆27 · Updated 2 weeks ago
- An instruction-following benchmark for large reasoning models ☆36 · Updated 2 months ago
- Official Implementation of ARPO: End-to-End Policy Optimization for GUI Agents with Experience Replay ☆104 · Updated 2 months ago
- This repo contains the code for "MEGA-Bench: Scaling Multimodal Evaluation to over 500 Real-World Tasks" [ICLR 2025] ☆73 · Updated last month
- RM-R1: Unleashing the Reasoning Potential of Reward Models ☆120 · Updated last month
- End-to-End Reinforcement Learning for Multi-Turn Tool-Integrated Reasoning ☆162 · Updated last week
- A comprehensive collection of work on learning from rewards in the post-training and test-time scaling of LLMs, with a focus on both reward model… ☆52 · Updated last month
- Code for "Reasoning to Learn from Latent Thoughts"☆114Updated 4 months ago
- [NeurIPS-2024] 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies https://arxiv.org/abs/2407.13623☆86Updated 10 months ago
- Official repository for "RLVR-World: Training World Models with Reinforcement Learning", https://arxiv.org/abs/2505.13934☆74Updated 2 months ago
- ☆46Updated 4 months ago
- [ArXiv] V2PE: Improving Multimodal Long-Context Capability of Vision-Language Models with Variable Visual Position Encoding☆55Updated 7 months ago
- Code accompanying the paper "Noise Contrastive Alignment of Language Models with Explicit Rewards" (NeurIPS 2024)☆55Updated 9 months ago
- ☆38Updated 3 weeks ago
- Code for paper "Patch-Level Training for Large Language Models"☆86Updated 8 months ago
- ☆83Updated 2 weeks ago
- [ACL 2025] SoftCoT: Soft Chain-of-Thought for Efficient Reasoning with LLMs; preprint: SoftCoT++: Test-Time Scaling with Soft Chain-of… ☆38 · Updated 2 months ago
- ☆39 · Updated 3 months ago