daerduoCarey / o2oafford
O2O-Afford: Annotation-Free Large-Scale Object-Object Affordance Learning (CoRL 2021)
☆37 · Feb 22, 2022 · Updated 3 years ago
Alternatives and similar repositories for o2oafford
Users interested in o2oafford are comparing it to the repositories listed below.
- Code for the CVPR 2022 paper "Fixing Malfunctional Objects With Learned Physical Simulation and Functional Prediction" ☆32 · Aug 23, 2022 · Updated 3 years ago
- Where2Act: From Pixels to Actions for Articulated 3D Objects ☆137 · Aug 17, 2023 · Updated 2 years ago
- [3DV 2022] Articulated 3D Human-Object Interactions from RGB Videos: An Empirical Analysis of Approaches and Challenges ☆17 · Sep 14, 2022 · Updated 3 years ago
- [ECCV 2022] S2Contact: Graph-based Network for 3D Hand-Object Contact Estimation with Semi-Supervised Learning ☆18 · Sep 21, 2023 · Updated 2 years ago
- Official repository of the NeurIPS 2021 paper: PTR ☆32 · Dec 17, 2021 · Updated 4 years ago
- Code for "Learning to Regrasp by Learning to Place" ☆21 · Dec 28, 2023 · Updated 2 years ago
- ☆25 · May 19, 2022 · Updated 3 years ago
- Code for "Learning Affordance Landscapes for Interaction Exploration in 3D Environments" (NeurIPS 2020) ☆38 · Jul 6, 2023 · Updated 2 years ago
- ☆26 · Sep 25, 2022 · Updated 3 years ago
- Shape2Motion: Joint Analysis of Motion Parts and Attributes from 3D Shapes ☆41 · Mar 22, 2022 · Updated 3 years ago
- ☆21 · Feb 15, 2022 · Updated 4 years ago
- Hand mesh recovery models on the OakInk-Image dataset ☆12 · Apr 4, 2024 · Updated last year
- A small library of 3D-related utilities used in my research ☆10 · Mar 5, 2022 · Updated 3 years ago
- ☆25 · Sep 12, 2019 · Updated 6 years ago
- Research code for the CVPR 2021 paper "End-to-End Human Pose and Mesh Reconstruction with Transformers" ☆17 · May 22, 2021 · Updated 4 years ago
- Release of the YCB-Affordance dataset (CVPR 2020 Oral) ☆64 · Jun 22, 2020 · Updated 5 years ago
- ☆162 · Nov 27, 2021 · Updated 4 years ago
- Normalizing flows in PyTorch ☆25 · Sep 8, 2021 · Updated 4 years ago
- Code for Ditto: Building Digital Twins of Articulated Objects from Interaction ☆125 · Dec 20, 2024 · Updated last year
- Official code for the IJCAI 2023 paper "StackFLOW: Monocular Human-Object Reconstruction by Stacked Normalizing Flow with Offset" ☆13 · Jul 17, 2024 · Updated last year
- Extension of Neural Radiance Fields (Mildenhall et al., 2020) to perform 3D style transfer; implemented in PyTorch Lightning ☆13 · Oct 18, 2021 · Updated 4 years ago
- Code for the paper "Compositionally Generalizable 3D Structure Prediction" ☆33 · Nov 17, 2022 · Updated 3 years ago
- A zero-shot segmentation framework for 3D shapes ☆37 · Sep 5, 2023 · Updated 2 years ago
- ☆57 · Dec 8, 2022 · Updated 3 years ago
- Real-time VIBE: frame-by-frame inference of VIBE (Video Inference for Human Body Pose and Shape Estimation) ☆27 · Dec 2, 2021 · Updated 4 years ago
- Code/data for the paper "Hand-Object Contact Prediction via Motion-Based Pseudo-Labeling and Guided Progressive Label Correction" (BMVC202… ☆17 · Oct 22, 2021 · Updated 4 years ago
- Neural Interaction Fields for Trajectory sYnthesis ☆65 · Jul 17, 2023 · Updated 2 years ago
- ☆33 · Mar 21, 2022 · Updated 3 years ago
- [3DV 2021] Joint fitting of hands and object from short RGB video clips ☆103 · Oct 20, 2021 · Updated 4 years ago
- ACID: Action-Conditional Implicit Visual Dynamics for Deformable Object Manipulation ☆74 · May 29, 2022 · Updated 3 years ago
- Scripts for rendering ShapeNet data in Blender ☆29 · Nov 26, 2020 · Updated 5 years ago
- ☆79 · Sep 1, 2021 · Updated 4 years ago
- ☆32 · Jan 7, 2023 · Updated 3 years ago
- Code for the ICLR 2022 paper "VAT-Mart: Learning Visual Action Trajectory Proposals for Manipulating 3D ARTiculated Objects" ☆52 · Apr 23, 2023 · Updated 2 years ago
- ContactGen: Generative Contact Modeling for Grasp Generation (ICCV 2023) ☆80 · Oct 18, 2023 · Updated 2 years ago
- ☆17 · Dec 18, 2020 · Updated 5 years ago
- [SIGGRAPH Asia 2019] RPM-Net: Recurrent Prediction of Motion and Parts from Point Cloud ☆26 · Dec 19, 2023 · Updated 2 years ago
- A toolkit for visual computing with a focus on geometry processing ☆105 · Nov 15, 2025 · Updated 3 months ago
- [CVPR 2024] OakInk2 baseline model: Task-aware Motion Fulfillment (TaMF) via Diffusion ☆22 · Dec 2, 2024 · Updated last year