yejun688 / CVPR2025_oral_paper_list
A curated list of CVPR 2025 Oral papers. Total: 96
⭐51 · Updated 2 months ago
Alternatives and similar repositories for CVPR2025_oral_paper_list
Users interested in CVPR2025_oral_paper_list are comparing it to the repositories listed below.
- ⭐59 · Updated 6 months ago
- SpaceR: The first MLLM empowered by SG-RLVR for video spatial reasoning · ⭐83 · Updated 3 months ago
- A paper list for spatial reasoning · ⭐143 · Updated 4 months ago
- Vision Manus: Your versatile Visual AI assistant · ⭐282 · Updated this week
- Awesome Spatial Intelligence (Personal Use) · ⭐27 · Updated 3 months ago
- [NeurIPS'24] This repository is the implementation of "SpatialRGPT: Grounded Spatial Reasoning in Vision Language Models" · ⭐266 · Updated 10 months ago
- [CVPR 2025] The code for the paper "Video-3D LLM: Learning Position-Aware Video Representation for 3D Scene Understanding" · ⭐167 · Updated 4 months ago
- [NeurIPS 2025] Official implementation of "RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics" · ⭐173 · Updated 2 weeks ago
- The official repo for "SpatialBot: Precise Spatial Understanding with Vision Language Models." · ⭐309 · Updated last month
- Official implementation of Spatial-MLLM: Boosting MLLM Capabilities in Visual-based Spatial Intelligence · ⭐363 · Updated 3 months ago
- A repository for organizing papers, code, and other resources related to Visual Reinforcement Learning. · ⭐299 · Updated last week
- [ICCV2025] Official repository of the paper "Talking to DINO: Bridging Self-Supervised Vision Backbones with Language for Open-Vocabulary…" · ⭐102 · Updated last week
- An up-to-date & curated list of awesome 3D Visual Grounding papers, methods & resources. · ⭐225 · Updated 3 weeks ago
- A curated list of large VLM-based VLA models for robotic manipulation. · ⭐203 · Updated 2 weeks ago
- [CVPR'25] SeeGround: See and Ground for Zero-Shot Open-Vocabulary 3D Visual Grounding · ⭐178 · Updated 5 months ago
- [NeurIPS 2025] Reason-RFT: Reinforcement Fine-Tuning for Visual Reasoning · ⭐218 · Updated 2 weeks ago
- MetaSpatial leverages reinforcement learning to enhance 3D spatial reasoning in vision-language models (VLMs), enabling more structured, … · ⭐190 · Updated 5 months ago
- InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy · ⭐129 · Updated this week
- [CVPR 2025] DeCLIP: Decoupled Learning for Open-Vocabulary Dense Perception · ⭐129 · Updated 4 months ago
- Official repo and evaluation implementation of VSI-Bench · ⭐603 · Updated 2 months ago
- ⭐16 · Updated 4 months ago
- [NeurIPS 2025] DreamVLA: A Vision-Language-Action Model Dreamed with Comprehensive World Knowledge · ⭐190 · Updated last month
- [NeurIPS 2025] MLLMs Need 3D-Aware Representation Supervision for Scene Understanding · ⭐108 · Updated 3 weeks ago
- [CVPR2024] GSVA: Generalized Segmentation via Multimodal Large Language Models · ⭐149 · Updated last year
- [CVPR 2022, TPAMI 2024] LAVT: Language-Aware Vision Transformer for Referring Segmentation · ⭐22 · Updated 8 months ago
- Embodied Question Answering (EQA) benchmark and method (ICCV 2025) · ⭐38 · Updated 2 months ago
- [CVPR2025] FlashSloth: Lightning Multimodal Large Language Models via Embedded Visual Compression · ⭐50 · Updated last week
- Official code for the CVPR 2025 paper "Navigation World Models" · ⭐411 · Updated 2 months ago
- STI-Bench: Are MLLMs Ready for Precise Spatial-Temporal World Understanding? · ⭐28 · Updated 3 months ago
- [ICCV25 Oral] Token Activation Map to Visually Explain Multimodal LLMs · ⭐84 · Updated 2 months ago