goxq / MIFAG-code
Code for the paper "Learning 2D Invariant Affordance Knowledge for 3D Affordance Grounding"
☆18 · Updated last year
Alternatives and similar repositories for MIFAG-code
Users interested in MIFAG-code are comparing it to the libraries listed below
- SoFar: Language-Grounded Orientation Bridges Spatial Reasoning and Object Manipulation ☆188 · Updated 2 months ago
- [arXiv 2025] MMSI-Bench: A Benchmark for Multi-Image Spatial Intelligence ☆47 · Updated 3 weeks ago
- [CVPR 2025] Lift3D Foundation Policy: Lifting 2D Large-Scale Pretrained Models for Robust 3D Robotic Manipulation ☆156 · Updated 2 months ago
- The official repo for the paper "VQ-VLA: Improving Vision-Language-Action Models via Scaling Vector-Quantized Action Tokenizers" (ICCV 2025) ☆77 · Updated 3 weeks ago
- Official implementation of the paper "InSpire: Vision-Language-Action Models with Intrinsic Spatial Reasoning" ☆43 · Updated last week
- [NeurIPS 2024] Official code repository for the MSR3D paper ☆62 · Updated last month
- [ICLR 2025] SPA: 3D Spatial-Awareness Enables Effective Embodied Representation ☆164 · Updated 2 months ago
- ☆72 · Updated 3 weeks ago
- ☆35 · Updated last year
- DreamVLA: A Vision-Language-Action Model Dreamed with Comprehensive World Knowledge ☆167 · Updated last week
- [CVPR 2025] Code for the paper "Video-3D LLM: Learning Position-Aware Video Representation for 3D Scene Understanding" ☆154 · Updated 3 months ago
- [NeurIPS 2024 D&B] Point Cloud Matters: Rethinking the Impact of Different Observation Spaces on Robot Learning ☆86 · Updated 10 months ago
- List of papers on video-centric robot learning ☆21 · Updated 9 months ago
- [ECCV 2024] ManiGaussian: Dynamic Gaussian Splatting for Multi-task Robotic Manipulation ☆233 · Updated 5 months ago
- A curated list of large VLM-based VLA models for robotic manipulation ☆102 · Updated this week
- Code for "Chat-Scene: Bridging 3D Scene and Large Language Models with Object Identifiers" (NeurIPS 2024) ☆193 · Updated 5 months ago
- Official implementation of the paper "StreamVLN: Streaming Vision-and-Language Navigation via SlowFast Context Modeling" ☆207 · Updated this week
- 😎 An up-to-date, curated list of awesome 3D Visual Grounding papers, methods & resources ☆210 · Updated this week
- OST-Bench: Evaluating the Capabilities of MLLMs in Online Spatio-temporal Scene Understanding ☆59 · Updated last month
- Official implementation of "RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics" ☆138 · Updated last month
- ICCV 2025 ☆114 · Updated last week
- [CVPR'25] SeeGround: See and Ground for Zero-Shot Open-Vocabulary 3D Visual Grounding ☆169 · Updated 4 months ago
- ☆49 · Updated 11 months ago
- Code & data for Grounded 3D-LLM with Referent Tokens ☆125 · Updated 7 months ago
- Unified Vision-Language-Action Model ☆185 · Updated last month
- One-Shot Open Affordance Learning with Foundation Models (CVPR 2024) ☆42 · Updated last year
- [ECCV 2024] Empowering 3D Visual Grounding with Reasoning Capabilities ☆79 · Updated 10 months ago
- ☆72 · Updated 2 weeks ago
- [ICCV 2025] AnyBimanual: Transferring Unimanual Policy for General Bimanual Manipulation ☆87 · Updated 2 months ago
- ☆134 · Updated 2 years ago