goxq / MIFAG-code
Code for the paper "Learning 2D Invariant Affordance Knowledge for 3D Affordance Grounding"
☆18 · Updated 11 months ago
Alternatives and similar repositories for MIFAG-code
Users interested in MIFAG-code are comparing it to the repositories listed below.
- SoFar: Language-Grounded Orientation Bridges Spatial Reasoning and Object Manipulation ☆184 · Updated last month
- [ICLR 2025] SPA: 3D Spatial-Awareness Enables Effective Embodied Representation ☆163 · Updated last month
- [arXiv 2025] MMSI-Bench: A Benchmark for Multi-Image Spatial Intelligence ☆47 · Updated this week
- DreamVLA: A Vision-Language-Action Model Dreamed with Comprehensive World Knowledge ☆143 · Updated last week
- [ECCV 2024] ManiGaussian: Dynamic Gaussian Splatting for Multi-task Robotic Manipulation ☆233 · Updated 4 months ago
- [CVPR 2025] Lift3D Foundation Policy: Lifting 2D Large-Scale Pretrained Models for Robust 3D Robotic Manipulation ☆153 · Updated last month
- ☆35 · Updated last year
- [NeurIPS 2024] Official code repository for the MSR3D paper ☆60 · Updated 2 weeks ago
- List of papers on video-centric robot learning ☆21 · Updated 8 months ago
- ☆50 · Updated 10 months ago
- [CVPR 2025] Code for the paper "Video-3D LLM: Learning Position-Aware Video Representation for 3D Scene Understanding" ☆143 · Updated 2 months ago
- Official implementation of "RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics"☆123Updated 2 weeks ago
- 😎 up-to-date & curated list of awesome 3D Visual Grounding papers, methods & resources.☆200Updated 3 weeks ago
- ☆65Updated last week
- Code&Data for Grounded 3D-LLM with Referent Tokens☆126Updated 7 months ago
- Official implementation of the paper "InSpire: Vision-Language-Action Models with Intrinsic Spatial Reasoning" ☆37 · Updated last month
- Official implementation of the paper "StreamVLN: Streaming Vision-and-Language Navigation via SlowFast Context Modeling" ☆176 · Updated last week
- ☆70 · Updated this week
- LLaVA-VLA: A Simple Yet Powerful Vision-Language-Action Model [Actively Maintained🔥] ☆122 · Updated last week
- [NeurIPS 2024 D&B] Point Cloud Matters: Rethinking the Impact of Different Observation Spaces on Robot Learning ☆83 · Updated 9 months ago
- Code for "Chat-Scene: Bridging 3D Scene and Large Language Models with Object Identifiers" (NeurIPS 2024) ☆181 · Updated 4 months ago
- Unified Vision-Language-Action Model ☆170 · Updated 3 weeks ago
- Official code for the CVPR 2025 paper "Navigation World Models" ☆338 · Updated 2 weeks ago
- ICCV 2025 ☆112 · Updated this week
- One-Shot Open Affordance Learning with Foundation Models (CVPR 2024) ☆42 · Updated last year
- [ECCV 2024] Empowering 3D Visual Grounding with Reasoning Capabilities ☆78 · Updated 10 months ago
- ☆11 · Updated 3 months ago
- ☆123 · Updated last week
- [RSS 2025] Novel Demonstration Generation with Gaussian Splatting Enables Robust One-Shot Manipulation ☆132 · Updated 2 months ago
- Official implementation of the ECCV 2024 paper "SceneVerse: Scaling 3D Vision-Language Learning for Grounded Scene Understanding" ☆252 · Updated 4 months ago