goxq / MIFAG-code
Code for the paper "Learning 2D Invariant Affordance Knowledge for 3D Affordance Grounding"
☆18 · Updated last year
Alternatives and similar repositories for MIFAG-code
Users interested in MIFAG-code are comparing it to the repositories listed below.
- [NeurIPS 2025 Spotlight] SoFar: Language-Grounded Orientation Bridges Spatial Reasoning and Object Manipulation ☆192 · Updated 2 months ago
- ☆37 · Updated last year
- [arXiv 2025] MMSI-Bench: A Benchmark for Multi-Image Spatial Intelligence ☆52 · Updated last month
- [ECCV 2024] ManiGaussian: Dynamic Gaussian Splatting for Multi-task Robotic Manipulation ☆239 · Updated 6 months ago
- [ICLR 2025] SPA: 3D Spatial-Awareness Enables Effective Embodied Representation ☆165 · Updated 3 months ago
- Official implementation of the paper "InSpire: Vision-Language-Action Models with Intrinsic Spatial Reasoning" ☆45 · Updated 3 weeks ago
- [CVPR 2025] Lift3D Foundation Policy: Lifting 2D Large-Scale Pretrained Models for Robust 3D Robotic Manipulation ☆159 · Updated 3 months ago
- [NeurIPS 2025] DreamVLA: A Vision-Language-Action Model Dreamed with Comprehensive World Knowledge ☆181 · Updated last week
- [NeurIPS 2024] Official code repository for the MSR3D paper ☆64 · Updated last month
- ☆81 · Updated last week
- The official repo for the paper "VQ-VLA: Improving Vision-Language-Action Models via Scaling Vector-Quantized Action Tokenizers" (ICCV 2025) ☆79 · Updated last month
- List of papers on video-centric robot learning ☆21 · Updated 10 months ago
- [NeurIPS 2024 D&B] Point Cloud Matters: Rethinking the Impact of Different Observation Spaces on Robot Learning ☆86 · Updated 11 months ago
- One-Shot Open Affordance Learning with Foundation Models (CVPR 2024) ☆43 · Updated last year
- [CVPR 2025] The code for the paper "Video-3D LLM: Learning Position-Aware Video Representation for 3D Scene Understanding" ☆162 · Updated 3 months ago
- ☆76 · Updated last month
- ICCV 2025 ☆133 · Updated last month
- [ICCV 2025] AnyBimanual: Transferring Unimanual Policy for General Bimanual Manipulation ☆88 · Updated 3 months ago
- AIR-Embodied: An Efficient Active 3DGS-based Interaction and Reconstruction Framework with Embodied Large Language Model ☆19 · Updated 5 months ago
- InternVLA-M1: A Spatially Grounded Foundation Model for Generalist Robot Policy ☆116 · Updated this week
- Official implementation of the paper "StreamVLN: Streaming Vision-and-Language Navigation via SlowFast Context Modeling" ☆231 · Updated 3 weeks ago
- [CVPR'25] SeeGround: See and Ground for Zero-Shot Open-Vocabulary 3D Visual Grounding ☆172 · Updated 5 months ago
- Unified Vision-Language-Action Model ☆193 · Updated 2 months ago
- ☆50 · Updated 11 months ago
- OST-Bench: Evaluating the Capabilities of MLLMs in Online Spatio-temporal Scene Understanding ☆59 · Updated 2 months ago
- ☆35 · Updated 2 months ago
- [NeurIPS 2025] Official implementation of "RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics" ☆156 · Updated last week
- [ECCV 2024] Empowering 3D Visual Grounding with Reasoning Capabilities ☆80 · Updated 11 months ago
- InstructVLA: Vision-Language-Action Instruction Tuning from Understanding to Manipulation ☆53 · Updated last week
- [arXiv 2025] CronusVLA: Transferring Latent Motion Across Time for Multi-Frame Prediction in Manipulation ☆38 · Updated last month