para-lost / AutoPresent
Code for the paper "AutoPresent: Designing Structured Visuals From Scratch" (CVPR 2025)
☆134 · Updated 5 months ago
Alternatives and similar repositories for AutoPresent
Users interested in AutoPresent are comparing it to the repositories listed below.
- [NeurIPS 2024] A task generation and model evaluation system for multimodal language models ☆73 · Updated 11 months ago
- Code for "Visual Sketchpad: Sketching as a Visual Chain of Thought for Multimodal Language Models" ☆263 · Updated 3 months ago
- Multimodal RewardBench ☆54 · Updated 8 months ago
- Data and code for the CVPR 2025 paper "MMVU: Measuring Expert-Level Multi-Discipline Video Understanding" ☆75 · Updated 8 months ago
- Code for "MEGA-Bench: Scaling Multimodal Evaluation to over 500 Real-World Tasks" [ICLR 2025] ☆77 · Updated 4 months ago
- Official code for "VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning" [NeurIPS 2025] ☆164 · Updated 5 months ago
- Evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… ☆147 · Updated last month
- [ICCV 2025 Highlight] The official repository for "2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining" ☆174 · Updated 7 months ago
- Pixel-Level Reasoning Model trained with RL [NeurIPS 2025] ☆244 · Updated last month
- Official code for the paper "Mantis: Multi-Image Instruction Tuning" [TMLR 2024] ☆231 · Updated 7 months ago
- Official code for "Mini-o3: Scaling Up Reasoning Patterns and Interaction Turns for Visual Search" ☆357 · Updated last month
- Official implementation of "PyVision: Agentic Vision with Dynamic Tooling" ☆131 · Updated 3 months ago
- G1: Bootstrapping Perception and Reasoning Abilities of Vision-Language Model via Reinforcement Learning ☆87 · Updated 5 months ago
- Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs ☆92 · Updated last year
- [ACL 2025] MAmmoTH-VL: Eliciting Multimodal Reasoning with Instruction Tuning at Scale ☆48 · Updated 5 months ago
- SophiaVL-R1: Reinforcing MLLMs Reasoning with Thinking Reward ☆85 · Updated 3 months ago
- [ICLR 2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models ☆88 · Updated last year
- [CVPR 2024] Prompt Highlighter: Interactive Control for Multi-Modal LLMs ☆155 · Updated last year
- MLLM-Tool: A Multimodal Large Language Model for Tool Agent Learning ☆134 · Updated 3 weeks ago
- [NeurIPS 2024] Evaluation code for the paper "Are We on the Right Way for Evaluating Large Vision-Language Models?" ☆196 · Updated last year
- Official implementation of ARPO: End-to-End Policy Optimization for GUI Agents with Experience Replay ☆133 · Updated 5 months ago
- [ACL 2025 Oral & Award] Evaluate Image/Video Generation like Humans: Fast, Explainable, Flexible ☆107 · Updated 2 months ago
- Uni-CoT: Towards Unified Chain-of-Thought Reasoning Across Text and Vision ☆166 · Updated last month
- [CVPR 2025 Highlight] Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large Language Models ☆229 · Updated 4 months ago
- MMR1: Enhancing Multimodal Reasoning with Variance-Aware Sampling and Open Resources ☆208 · Updated last month
- A large-scale dataset for training and evaluating models on dense text image generation ☆81 · Updated last month
- VideoHallucer: the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs) ☆38 · Updated 2 weeks ago
- MMSearch-R1 is an end-to-end RL framework that enables LMMs to perform on-demand, multi-turn search with real-world multimodal search too… ☆345 · Updated 2 months ago