yejun688 / CVPR2025_oral_paper_list
A curated list of CVPR 2025 Oral papers. Total: 96
☆59 · Updated 2 weeks ago
Alternatives and similar repositories for CVPR2025_oral_paper_list
Users interested in CVPR2025_oral_paper_list are comparing it to the repositories listed below
- A paper list for spatial reasoning ☆521 · Updated last week
- ☆67 · Updated 8 months ago
- SpaceR: The first MLLM empowered by SG-RLVR for video spatial reasoning ☆99 · Updated 5 months ago
- [NeurIPS'24] This repository is the implementation of "SpatialRGPT: Grounded Spatial Reasoning in Vision Language Models" ☆296 · Updated last year
- Official implementation of Spatial-MLLM: Boosting MLLM Capabilities in Visual-based Spatial Intelligence ☆408 · Updated 2 weeks ago
- [CVPR 2025] The code for the paper "Video-3D LLM: Learning Position-Aware Video Representation for 3D Scene Understanding". ☆187 · Updated 6 months ago
- The official repo for "SpatialBot: Precise Spatial Understanding with Vision Language Models." ☆325 · Updated 3 months ago
- Awesome Spatial Intelligence (Personal Use) ☆44 · Updated 3 weeks ago
- Vision Manus: Your versatile Visual AI assistant ☆303 · Updated 2 months ago
- [NeurIPS 2025] Official implementation of "RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics" ☆209 · Updated this week
- [NeurIPS 2025] 3DRS: MLLMs Need 3D-Aware Representation Supervision for Scene Understanding ☆134 · Updated last week
- ☆118 · Updated last year
- An up-to-date & curated list of awesome 3D Visual Grounding papers, methods & resources. ☆250 · Updated 2 weeks ago
- ☆30 · Updated last month
- [CVPR'25] SeeGround: See and Ground for Zero-Shot Open-Vocabulary 3D Visual Grounding ☆192 · Updated 7 months ago
- A repository for organizing papers, code, and other resources related to Visual Reinforcement Learning. ☆365 · Updated 3 weeks ago
- STI-Bench: Are MLLMs Ready for Precise Spatial-Temporal World Understanding? ☆33 · Updated 5 months ago
- [NeurIPS 2025] Reason-RFT: Reinforcement Fine-Tuning for Visual Reasoning. ☆244 · Updated 2 months ago
- [NeurIPS 2025] DreamVLA: A Vision-Language-Action Model Dreamed with Comprehensive World Knowledge ☆251 · Updated 3 months ago
- MetaSpatial leverages reinforcement learning to enhance 3D spatial reasoning in vision-language models (VLMs), enabling more structured, … ☆196 · Updated 7 months ago
- The code for the paper "Learning from Videos for 3D World: Enhancing MLLMs with 3D Vision Geometry Priors" ☆183 · Updated 3 weeks ago
- Latest Advances on Embodied Multimodal LLMs (or Vision-Language-Action Models). ☆122 · Updated last year
- [ICCV 2025] Official repository of the paper "Talking to DINO: Bridging Self-Supervised Vision Backbones with Language for Open-Vocabular… ☆155 · Updated last month
- [CVPR 2025] DeCLIP: Decoupled Learning for Open-Vocabulary Dense Perception ☆146 · Updated 6 months ago
- A Vue-based project page template for academic papers. (in development) https://junyaohu.github.io/academic-project-page-template-vue ☆304 · Updated 5 months ago
- Official code for the CVPR 2025 paper "Navigation World Models". ☆475 · Updated 3 weeks ago
- [ECCV 2024] Any2Point: Empowering Any-modality Large Models for Efficient 3D Understanding ☆125 · Updated last year
- [CVPR 2025] Official PyTorch Implementation of GLUS: Global-Local Reasoning Unified into A Single Large Language Model for Video Segmenta… ☆64 · Updated 5 months ago
- Survey: https://arxiv.org/pdf/2507.20198 ☆243 · Updated last month
- [ECCV 2024] ShapeLLM: Universal 3D Object Understanding for Embodied Interaction ☆216 · Updated last year