joyhsu0504 / LEFT
☆46 · Updated last year
Alternatives and similar repositories for LEFT
Users interested in LEFT are comparing it to the libraries listed below.
- IMProv: Inpainting-based Multimodal Prompting for Computer Vision Tasks ☆58 · Updated last year
- Code for Stable Control Representations ☆26 · Updated 8 months ago
- HandsOnVLM: Vision-Language Models for Hand-Object Interaction Prediction ☆42 · Updated 2 months ago
- [ICLR 2025 Spotlight] Grounding Video Models to Actions through Goal Conditioned Exploration ☆57 · Updated 7 months ago
- LOCATE: Localize and Transfer Object Parts for Weakly Supervised Affordance Grounding (CVPR 2023) ☆45 · Updated 2 years ago
- This repository is the official implementation of Improving Object-centric Learning With Query Optimization ☆51 · Updated 2 years ago
- ☆16 · Updated last year
- ☆87 · Updated last year
- Python package for importing and loading external assets into AI2THOR ☆28 · Updated last month
- ☆78 · Updated 6 months ago
- General-purpose Visual Understanding Evaluation ☆20 · Updated last year
- ☆18 · Updated last year
- [ICCV 2023] Understanding 3D Object Interaction from a Single Image ☆47 · Updated last year
- Personal Python toolbox ☆16 · Updated last year
- HD-EPIC Python script to download the entire dataset or parts of it ☆14 · Updated 2 months ago
- ☆60 · Updated 11 months ago
- Code release for ICLR 2023 paper: SlotFormer on object-centric dynamics models ☆117 · Updated 2 years ago
- [CVPR 2022] Joint hand motion and interaction hotspots prediction from egocentric videos ☆71 · Updated last year
- Code release for NeurIPS 2023 paper SlotDiffusion: Object-centric Learning with Diffusion Models ☆93 · Updated last year
- ☆86 · Updated 3 months ago
- [ICRA 2023] Grounding Language with Visual Affordances over Unstructured Data ☆45 · Updated 2 years ago
- ☆33 · Updated last year
- Dreamitate: Real-World Visuomotor Policy Learning via Video Generation (CoRL 2024) ☆57 · Updated 6 months ago
- [EMNLP 2023 (Findings)] This repository contains data processing, evaluation, and fine-tuning code for NEWTON: Are Large Language Models … ☆40 · Updated last year
- ☆44 · Updated last year
- [ICML 2024] A Touch, Vision, and Language Dataset for Multimodal Alignment ☆88 · Updated 6 months ago
- Codebase for HiP ☆90 · Updated last year
- Visual Representation Learning with Stochastic Frame Prediction (ICML 2024) ☆23 · Updated last year
- ☆23 · Updated last month
- Code for the RSS 2023 paper "Energy-based Models are Zero-Shot Planners for Compositional Scene Rearrangement" ☆20 · Updated 2 years ago