ASTRAL-Group / AlphaOne
AlphaOne: Reasoning Models Thinking Slow and Fast at Test Time
☆63 · Updated 2 weeks ago
Alternatives and similar repositories for AlphaOne
Users interested in AlphaOne are comparing it to the repositories listed below.
- Official implementation of the paper "ROCKET-1: Mastering Open-World Interaction with Visual-Temporal Context Prompting" (CVPR 2025) ☆41 · Updated 2 months ago
- ☆106 · Updated 2 months ago
- ☆38 · Updated this week
- ☆49 · Updated 2 months ago
- Official implementation of ARPO: End-to-End Policy Optimization for GUI Agents with Experience Replay ☆78 · Updated 3 weeks ago
- ☆78 · Updated 5 months ago
- OpenVLThinker: An Early Exploration to Vision-Language Reasoning via Iterative Self-Improvement ☆91 · Updated last month
- ☆42 · Updated last month
- [NeurIPS 2024] A task generation and model evaluation system for multimodal language models. ☆71 · Updated 6 months ago
- ☆44 · Updated 5 months ago
- ☆112 · Updated this week
- [ICLR 2025] Official implementation and benchmark evaluation repository of <PhysBench: Benchmarking and Enhancing Vision-Language Models … ☆64 · Updated 3 weeks ago
- ☆35 · Updated 2 weeks ago
- G1: Bootstrapping Perception and Reasoning Abilities of Vision-Language Model via Reinforcement Learning ☆64 · Updated last month
- Official repository for "RLVR-World: Training World Models with Reinforcement Learning", https://arxiv.org/abs/2505.13934 ☆45 · Updated 2 weeks ago
- This repo contains the code for "MEGA-Bench: Scaling Multimodal Evaluation to over 500 Real-World Tasks" [ICLR 2025] ☆68 · Updated 2 months ago
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension. ☆68 · Updated last year
- Official implementation of the paper "MMInA: Benchmarking Multihop Multimodal Internet Agents" ☆44 · Updated 3 months ago
- Evaluate Multimodal LLMs as Embodied Agents ☆52 · Updated 4 months ago
- The official code of "VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning" ☆118 · Updated 3 weeks ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆104 · Updated 3 weeks ago
- Think or Not? Selective Reasoning via Reinforcement Learning for Vision-Language Models ☆36 · Updated last week
- Scaffold Prompting to promote LMMs ☆43 · Updated 6 months ago
- Official code for the paper: WALL-E: World Alignment by NeuroSymbolic Learning improves World Model-based LLM Agents ☆38 · Updated last month
- [ICLR 2025] Video-STaR: Self-Training Enables Video Instruction Tuning with Any Supervision ☆64 · Updated 11 months ago
- Multimodal RewardBench ☆41 · Updated 4 months ago
- [CVPR 2024] This is the official implementation of MP5 ☆102 · Updated 11 months ago
- Pixel-Level Reasoning Model trained with RL ☆140 · Updated last week
- A Self-Training Framework for Vision-Language Reasoning ☆80 · Updated 5 months ago
- [ACL 2024] PCA-Bench: Evaluating Multimodal Large Language Models in Perception-Cognition-Action Chain ☆105 · Updated last year