goxq / MIFAG-code
Code for the paper "Learning 2D Invariant Affordance Knowledge for 3D Affordance Grounding"
☆18 · Updated last year
Alternatives and similar repositories for MIFAG-code
Users interested in MIFAG-code are comparing it to the repositories listed below.
- [ICLR 2025] SPA: 3D Spatial-Awareness Enables Effective Embodied Representation · ☆168 · Updated 4 months ago
- [NeurIPS 2025 Spotlight] SoFar: Language-Grounded Orientation Bridges Spatial Reasoning and Object Manipulation · ☆200 · Updated 4 months ago
- [CVPR'25] SeeGround: See and Ground for Zero-Shot Open-Vocabulary 3D Visual Grounding · ☆182 · Updated 6 months ago
- [ECCV 2024] ManiGaussian: Dynamic Gaussian Splatting for Multi-task Robotic Manipulation · ☆244 · Updated 7 months ago
- ☆91 · Updated last month
- Official implementation of the paper "InSpire: Vision-Language-Action Models with Intrinsic Spatial Reasoning" · ☆43 · Updated last month
- [NeurIPS 2024] Official code repository for the MSR3D paper · ☆68 · Updated 3 months ago
- [CVPR 2025] Lift3D Foundation Policy: Lifting 2D Large-Scale Pretrained Models for Robust 3D Robotic Manipulation · ☆166 · Updated 4 months ago
- The official repo for the paper "VQ-VLA: Improving Vision-Language-Action Models via Scaling Vector-Quantized Action Tokenizers" (ICCV 2025) · ☆89 · Updated 2 weeks ago
- CronusVLA: Towards Efficient and Robust Manipulation via Multi-Frame Vision-Language-Action Modeling · ☆47 · Updated this week
- [arXiv 2025] MMSI-Bench: A Benchmark for Multi-Image Spatial Intelligence · ☆55 · Updated 2 weeks ago
- [CVPR 2025] The code for the paper "Video-3D LLM: Learning Position-Aware Video Representation for 3D Scene Understanding" · ☆173 · Updated 5 months ago
- [NeurIPS 2025] DreamVLA: A Vision-Language-Action Model Dreamed with Comprehensive World Knowledge · ☆213 · Updated last month
- The code for the paper "Learning from Videos for 3D World: Enhancing MLLMs with 3D Vision Geometry Priors" · ☆151 · Updated last month
- InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy · ☆230 · Updated last week
- [NeurIPS 2024 D&B] Point Cloud Matters: Rethinking the Impact of Different Observation Spaces on Robot Learning · ☆87 · Updated last year
- 😎 up-to-date & curated list of awesome 3D Visual Grounding papers, methods & resources · ☆234 · Updated last week
- [NeurIPS 2025] OST-Bench: Evaluating the Capabilities of MLLMs in Online Spatio-temporal Scene Understanding · ☆66 · Updated last month
- ☆39 · Updated last year
- ☆52 · Updated last year
- InstructVLA: Vision-Language-Action Instruction Tuning from Understanding to Manipulation · ☆63 · Updated last month
- ☆77 · Updated 2 months ago
- Official implementation of ECCV24 paper "SceneVerse: Scaling 3D Vision-Language Learning for Grounded Scene Understanding" · ☆268 · Updated 7 months ago
- List of papers on video-centric robot learning · ☆22 · Updated 11 months ago
- Official code for "Embodied-R1: Reinforced Embodied Reasoning for General Robotic Manipulation" · ☆98 · Updated 2 months ago
- ☆200 · Updated 3 months ago
- ICCV 2025 · ☆140 · Updated 2 months ago
- Code & Data for Grounded 3D-LLM with Referent Tokens · ☆129 · Updated 10 months ago
- ☆13 · Updated 6 months ago
- ☆60 · Updated 7 months ago