hq-King / Affordance-R1
Code for Affordance-R1
☆47 · Updated 3 weeks ago
Alternatives and similar repositories for Affordance-R1
Users interested in Affordance-R1 are comparing it to the repositories listed below.
- [ICCV 2025] RAGNet: Large-scale Reasoning-based Affordance Segmentation Benchmark towards General Grasping ☆30 · Updated 2 weeks ago
- [CVPR 2025] GREAT: Geometry-Intention Collaborative Inference for Open-Vocabulary 3D Object Affordance Grounding ☆30 · Updated 3 months ago
- 3DAffordSplat: Efficient Affordance Reasoning with 3D Gaussians (ACM MM 25) ☆62 · Updated 4 months ago
- Official implementation of the paper "Unifying 3D Vision-Language Understanding via Promptable Queries" ☆82 · Updated last year
- [ECCV 2024] Empowering 3D Visual Grounding with Reasoning Capabilities ☆80 · Updated last year
- Official PyTorch implementation for the ICML 2025 paper UP-VLA ☆51 · Updated 5 months ago
- Code implementation of the paper "FIction: 4D Future Interaction Prediction from Video" ☆17 · Updated 8 months ago
- OpenScan: A Benchmark for Generalized Open-Vocabulary 3D Scene Understanding ☆19 · Updated last month
- [ICLR 2025] SPA: 3D Spatial-Awareness Enables Effective Embodied Representation ☆169 · Updated 5 months ago
- [NeurIPS 2024] Official code repository for the MSR3D paper ☆68 · Updated last week
- [CVPR 2025] Tra-MoE: Learning Trajectory Prediction Model from Multiple Domains for Adaptive Policy Conditioning ☆51 · Updated 8 months ago
- [CVPR 2024] Visual Programming for Zero-shot Open-Vocabulary 3D Visual Grounding ☆59 · Updated last year
- LOCATE: Localize and Transfer Object Parts for Weakly Supervised Affordance Grounding (CVPR 2023) ☆45 · Updated 2 years ago
- Omni6D: Large-Vocabulary 3D Object Dataset for Category-Level 6D Object Pose Estimation ☆47 · Updated 11 months ago
- [CVPR 2025] Lift3D Foundation Policy: Lifting 2D Large-Scale Pretrained Models for Robust 3D Robotic Manipulation ☆170 · Updated 5 months ago
- Unifying 2D and 3D Vision-Language Understanding ☆116 · Updated 4 months ago
- Official code for the NeurIPS'23 paper "3D-Aware Visual Question Answering about Parts, Poses and Occlusions" ☆19 · Updated last year
- Code and data for Grounded 3D-LLM with Referent Tokens ☆130 · Updated 11 months ago
- OmniSpatial: Towards Comprehensive Spatial Reasoning Benchmark for Vision Language Models ☆76 · Updated 2 months ago
- VLA-RFT: Vision-Language-Action Models with Reinforcement Fine-Tuning ☆96 · Updated 2 months ago
- [NeurIPS 2024 D&B] Point Cloud Matters: Rethinking the Impact of Different Observation Spaces on Robot Learning ☆89 · Updated last year
- [NeurIPS 2025] EgoVid-5M: A Large-Scale Video-Action Dataset for Egocentric Video Generation ☆122 · Updated 4 months ago
- [RAL 2024] OpenObj: Open-Vocabulary Object-Level Neural Radiance Fields with Fine-Grained Understanding ☆31 · Updated 9 months ago
- [ICCV 2023] Understanding 3D Object Interaction from a Single Image ☆47 · Updated last year
- EgoDex: Learning Dexterous Manipulation from Large-Scale Egocentric Video ☆82 · Updated 3 months ago
- Official implementation of Spatial-Forcing: Implicit Spatial Representation Alignment for Vision-Language-Action Models ☆139 · Updated 3 weeks ago
- HandsOnVLM: Vision-Language Models for Hand-Object Interaction Prediction ☆42 · Updated 2 months ago