JasonQSY / AffordanceLLM
Code for "AffordanceLLM: Grounding Affordance from Vision Language Models"
☆14 · Updated last year
Alternatives and similar repositories for AffordanceLLM
Users interested in AffordanceLLM are comparing it to the repositories listed below:
- FlowBotHD: History-Aware Diffuser Handling Ambiguities in Articulated Objects Manipulation ☆14 · Updated last year
- Code & data for "RoboGround: Robotic Manipulation with Grounded Vision-Language Priors" (CVPR 2025) ☆38 · Updated 8 months ago
- HandsOnVLM: Vision-Language Models for Hand-Object Interaction Prediction ☆41 · Updated 4 months ago
- [CoRL 2023] Official PyTorch implementation of PolarNet: 3D Point Clouds for Language-Guided Robotic Manipulation ☆42 · Updated last year
- ☆15 · Updated last year
- [IROS 2023] Open-Vocabulary Affordance Detection in 3D Point Clouds ☆82 · Updated last year
- LOCATE: Localize and Transfer Object Parts for Weakly Supervised Affordance Grounding (CVPR 2023) ☆46 · Updated 2 years ago
- ☆47 · Updated 7 months ago
- MAPLE infuses dexterous manipulation priors from egocentric videos into vision encoders, making their features well-suited for downstream… ☆29 · Updated 2 months ago
- KUDA: Keypoints to Unify Dynamics Learning and Visual Prompting for Open-Vocabulary Robotic Manipulation ☆21 · Updated 9 months ago
- (Incomplete version) This is an implementation of AffordanceLLM. ☆17 · Updated last year
- [TASE 2025] Efficient Alignment of Unconditioned Action Prior for Language-conditioned Pick and Place in Clutter ☆34 · Updated 3 months ago
- ☆47 · Updated 7 months ago
- Implementation of Prompting with the Future: Open-World Model Predictive Control with Interactive Digital Twins. [RSS 2025] ☆48 · Updated 3 months ago
- EMMOE: A Comprehensive Benchmark for Embodied Mobile Manipulation in Open Environments ☆25 · Updated 8 months ago
- [IROS 2024] PreAfford: Universal Affordance-Based Pre-grasping for Diverse Objects and Scenes ☆15 · Updated last year
- One-Shot Open Affordance Learning with Foundation Models (CVPR 2024) ☆46 · Updated last year
- Pi0-VLA repository of "MotionTrans: Human VR Data Enable Motion-Level Learning for Robotic Manipulation Policies" ☆25 · Updated 4 months ago
- RPMArt: Towards Robust Perception and Manipulation for Articulated Objects ☆20 · Updated 11 months ago
- NSRM: Neuro-Symbolic Robot Manipulation ☆18 · Updated 2 years ago
- Splat-MOVER: Multi-Stage, Open-Vocabulary Robotic Manipulation via Editable Gaussian Splatting ☆41 · Updated last year
- [CoRL 2025] Robot Learning from Any Images ☆34 · Updated 3 months ago
- Code for Ditto in the House: Building Articulation Models of Indoor Scenes through Interactive Perception ☆17 · Updated 2 years ago
- [CVPR 2025] GREAT: Geometry-Intention Collaborative Inference for Open-Vocabulary 3D Object Affordance Grounding ☆35 · Updated 5 months ago
- AnyPos: Automated Task-Agnostic Actions for Bimanual Manipulation ☆34 · Updated 6 months ago
- This is the official implementation of the Video Generation part of This&That: Language-Gesture Controlled Video Generation for Robot Plannin… ☆48 · Updated last month
- FieldGen is a semi-automatic data generation framework that enables scalable collection of diverse, high-quality real-world manipulation … ☆25 · Updated 3 months ago
- [CVPR 2025] VidBot: Learning Generalizable 3D Actions from In-the-Wild 2D Human Videos for Zero-Shot Robotic Manipulation ☆45 · Updated 7 months ago
- Granularity-Aware Affordance Understanding from human-object interaction for Dexterous Robotic Functional Grasping ☆14 · Updated 5 months ago
- Dreamitate: Real-World Visuomotor Policy Learning via Video Generation (CoRL 2024) ☆58 · Updated 8 months ago