hq-King / Affordance-R1
Code for Affordance-R1
☆54 · Updated last month
Alternatives and similar repositories for Affordance-R1
Users interested in Affordance-R1 are comparing it to the repositories listed below.
- [ICCV 2025] RAGNet: Large-scale Reasoning-based Affordance Segmentation Benchmark towards General Grasping ☆33 · Updated 2 months ago
- [CVPR 2025] GREAT: Geometry-Intention Collaborative Inference for Open-Vocabulary 3D Object Affordance Grounding ☆35 · Updated 5 months ago
- [ICLR 2025] SPA: 3D Spatial-Awareness Enables Effective Embodied Representation ☆172 · Updated 7 months ago
- [NeurIPS 2024] MSR3D: Advanced Situated Reasoning in 3D Scenes ☆70 · Updated 2 months ago
- [ACM MM 2025] 3DAffordSplat: Efficient Affordance Reasoning with 3D Gaussians ☆71 · Updated 6 months ago
- ☆56 · Updated last year
- Code implementation of the paper "FIction: 4D Future Interaction Prediction from Video" ☆17 · Updated 10 months ago
- [ECCV 2024] Empowering 3D Visual Grounding with Reasoning Capabilities ☆80 · Updated last year
- Official implementation of the paper "Unifying 3D Vision-Language Understanding via Promptable Queries" ☆84 · Updated last year
- [NeurIPS 2023] Official code for the paper "3D-Aware Visual Question Answering about Parts, Poses and Occlusions" ☆19 · Updated last year
- ☆47 · Updated 7 months ago
- [CVPR 2023] LOCATE: Localize and Transfer Object Parts for Weakly Supervised Affordance Grounding ☆46 · Updated 2 years ago
- [CVPR 2024] Situational Awareness Matters in 3D Vision Language Reasoning ☆43 · Updated last year
- Unifying 2D and 3D Vision-Language Understanding ☆121 · Updated 6 months ago
- Code and data for Grounded 3D-LLM with Referent Tokens ☆132 · Updated last year
- [ICML 2025] Official PyTorch implementation of UP-VLA ☆55 · Updated 3 weeks ago
- VLA-RFT: Vision-Language-Action Models with Reinforcement Fine-Tuning ☆124 · Updated 4 months ago
- ☆44 · Updated last year
- EgoDex: Learning Dexterous Manipulation from Large-Scale Egocentric Video ☆132 · Updated 5 months ago
- Omni6D: Large-Vocabulary 3D Object Dataset for Category-Level 6D Object Pose Estimation ☆48 · Updated last year
- HandsOnVLM: Vision-Language Models for Hand-Object Interaction Prediction ☆41 · Updated 4 months ago
- [ICLR 2026] OmniSpatial: Towards Comprehensive Spatial Reasoning Benchmark for Vision Language Models ☆79 · Updated 3 weeks ago
- [CVPR 2024] One-Shot Open Affordance Learning with Foundation Models ☆46 · Updated last year
- OpenScan: A Benchmark for Generalized Open-Vocabulary 3D Scene Understanding ☆19 · Updated 2 months ago
- Official code for the paper "N3D-VLM: Native 3D Grounding Enables Accurate Spatial Reasoning in Vision-Language Models" ☆85 · Updated 3 weeks ago
- [CVPR 2025] Tra-MoE: Learning Trajectory Prediction Model from Multiple Domains for Adaptive Policy Conditioning ☆55 · Updated 10 months ago
- IKEA Manuals at Work: 4D Grounding of Assembly Instructions on Internet Videos ☆56 · Updated 10 months ago
- [NeurIPS 2025] EgoVid-5M: A Large-Scale Video-Action Dataset for Egocentric Video Generation ☆127 · Updated 6 months ago
- [CVPR 2025] Lift3D Foundation Policy: Lifting 2D Large-Scale Pretrained Models for Robust 3D Robotic Manipulation ☆176 · Updated 7 months ago
- [CVPR 2025] TASTE-Rob: Advancing Video Generation of Task-Oriented Hand-Object Interaction for Generalizable Robotic Manipulation ☆33 · Updated last week