joyhsu0504 / LEFT
☆42 · Updated last year
Alternatives and similar repositories for LEFT
Users interested in LEFT are comparing it to the libraries listed below.
- IMProv: Inpainting-based Multimodal Prompting for Computer Vision Tasks ☆57 · Updated 8 months ago
- [ICLR 2025 Spotlight] Grounding Video Models to Actions through Goal Conditioned Exploration ☆48 · Updated last month
- ☆17 · Updated 11 months ago
- Code for Stable Control Representations ☆25 · Updated 2 months ago
- HandsOnVLM: Vision-Language Models for Hand-Object Interaction Prediction ☆31 · Updated 5 months ago
- General-purpose Visual Understanding Evaluation ☆20 · Updated last year
- LOCATE: Localize and Transfer Object Parts for Weakly Supervised Affordance Grounding (CVPR 2023) ☆38 · Updated 2 years ago
- Official Code for Neural Systematic Binder ☆33 · Updated 2 years ago
- SNARE Dataset with MATCH and LaGOR models ☆24 · Updated last year
- Official implementation of Improving Object-centric Learning With Query Optimization ☆50 · Updated 2 years ago
- ☆14 · Updated 11 months ago
- [CVPR 2022] Joint hand motion and interaction hotspots prediction from egocentric videos ☆65 · Updated last year
- ☆25 · Updated last year
- Python package for importing and loading external assets into AI2THOR ☆21 · Updated 8 months ago
- Dreamitate: Real-World Visuomotor Policy Learning via Video Generation (CoRL 2024) ☆44 · Updated 11 months ago
- [ICCV 2023] Understanding 3D Object Interaction from a Single Image ☆43 · Updated last year
- ☆76 · Updated last week
- Code release for the NeurIPS 2023 paper SlotDiffusion: Object-centric Learning with Diffusion Models ☆87 · Updated last year
- ☆44 · Updated last year
- ☆72 · Updated 9 months ago
- Can 3D Vision-Language Models Truly Understand Natural Language? ☆21 · Updated last year
- Visual Representation Learning with Stochastic Frame Prediction (ICML 2024) ☆20 · Updated 6 months ago
- Personal Python toolbox ☆16 · Updated 10 months ago
- [ICRA 2023] Grounding Language with Visual Affordances over Unstructured Data ☆43 · Updated last year
- LogiCity@NeurIPS'24, D&B track. A multi-agent inductive learning environment for "abstractions". ☆22 · Updated 6 months ago
- ☆46 · Updated 5 months ago
- MiniGrid Implementation of BEHAVIOR Tasks ☆45 · Updated 9 months ago
- ☆42 · Updated 2 years ago
- [ICLR 2025] Official implementation and benchmark evaluation repository of <PhysBench: Benchmarking and Enhancing Vision-Language Models … ☆61 · Updated 2 months ago
- Code for the RSS 2023 paper "Energy-based Models are Zero-Shot Planners for Compositional Scene Rearrangement" ☆19 · Updated last year