hq-King / Affordance-R1
Code for Affordance-R1
☆51 · Updated last month
Alternatives and similar repositories for Affordance-R1
Users interested in Affordance-R1 are comparing it to the repositories listed below.
- ☆54 · Updated last year
- OpenScan: A Benchmark for Generalized Open-Vocabulary 3D Scene Understanding ☆19 · Updated last month
- 3DAffordSplat: Efficient Affordance Reasoning with 3D Gaussians (ACM MM 25) ☆67 · Updated 6 months ago
- Official implementation of the paper "Unifying 3D Vision-Language Understanding via Promptable Queries" ☆84 · Updated last year
- [ICLR 2025] SPA: 3D Spatial-Awareness Enables Effective Embodied Representation ☆172 · Updated 7 months ago
- Code implementation of the paper "FIction: 4D Future Interaction Prediction from Video" ☆17 · Updated 10 months ago
- [ICCV 2025] RAGNet: Large-scale Reasoning-based Affordance Segmentation Benchmark towards General Grasping ☆33 · Updated 2 months ago
- [ECCV 2024] Empowering 3D Visual Grounding with Reasoning Capabilities ☆80 · Updated last year
- [NeurIPS 2024] Official code repository for the MSR3D paper ☆69 · Updated last month
- Unifying 2D and 3D Vision-Language Understanding ☆119 · Updated 6 months ago
- Code & data for Grounded 3D-LLM with Referent Tokens ☆131 · Updated last year
- [CVPR 2024] Visual Programming for Zero-shot Open-Vocabulary 3D Visual Grounding ☆62 · Updated last year
- ☆44 · Updated last year
- [CVPR 2025] GREAT: Geometry-Intention Collaborative Inference for Open-Vocabulary 3D Object Affordance Grounding ☆34 · Updated 5 months ago
- [CVPR 2025] GEAL: Generalizable 3D Affordance Learning with Cross-Modal Consistency ☆41 · Updated 2 months ago
- [CVPR 2024] Situational Awareness Matters in 3D Vision Language Reasoning ☆43 · Updated last year
- Code of 3DMIT: 3D Multi-Modal Instruction Tuning for Scene Understanding ☆31 · Updated last year
- [CoRL 2024] VLM-Grounder: A VLM Agent for Zero-Shot 3D Visual Grounding ☆128 · Updated 8 months ago
- Official code for the NeurIPS 2023 paper "3D-Aware Visual Question Answering about Parts, Poses and Occlusions" ☆19 · Updated last year
- VLA-RFT: Vision-Language-Action Models with Reinforcement Fine-Tuning ☆119 · Updated 3 months ago
- Official implementation of Spatial-Forcing: Implicit Spatial Representation Alignment for Vision-Language-Action Models ☆162 · Updated 2 weeks ago
- [CVPR 2025] Lift3D Foundation Policy: Lifting 2D Large-Scale Pretrained Models for Robust 3D Robotic Manipulation ☆173 · Updated 7 months ago
- OmniSpatial: Towards a Comprehensive Spatial Reasoning Benchmark for Vision-Language Models ☆78 · Updated last week
- EgoDex: Learning Dexterous Manipulation from Large-Scale Egocentric Video ☆123 · Updated 5 months ago
- [NeurIPS 2024] Lexicon3D: Probing Visual Foundation Models for Complex 3D Scene Understanding ☆99 · Updated 11 months ago
- Official PyTorch implementation of the ICML 2025 paper UP-VLA ☆54 · Updated last week
- [CVPR 2025] Tra-MoE: Learning Trajectory Prediction Model from Multiple Domains for Adaptive Policy Conditioning ☆54 · Updated 9 months ago
- ☆47 · Updated 6 months ago
- [NeurIPS 2025] 3DRS: MLLMs Need 3D-Aware Representation Supervision for Scene Understanding ☆140 · Updated last month
- [NeurIPS 2025] EgoVid-5M: A Large-Scale Video-Action Dataset for Egocentric Video Generation ☆126 · Updated 5 months ago