Max-Fu / tvl
☆67 · Updated last month
Alternatives and similar repositories for tvl:
Users interested in tvl are comparing it to the repositories listed below.
- [CVPR 2024] Binding Touch to Everything: Learning Unified Multimodal Tactile Representations · ☆44 · Updated last month
- IMProv: Inpainting-based Multimodal Prompting for Computer Vision Tasks · ☆58 · Updated 5 months ago
- ☆46 · Updated 3 months ago
- Code for the paper "Grounding Video Models to Actions through Goal Conditioned Exploration" · ☆43 · Updated 2 months ago
- ☆67 · Updated 6 months ago
- Latent Motion Token as the Bridging Language for Robot Manipulation · ☆77 · Updated this week
- [ICLR 2024] Seer: Language Instructed Video Prediction with Latent Diffusion Models · ☆29 · Updated 10 months ago
- Code for Stable Control Representations · ☆24 · Updated 2 months ago
- [ECCV 2024, Oral, Best Paper Finalist] Official implementation of the paper "LEGO: Learning EGOcentric Action Frame Generation …" · ☆37 · Updated last month
- ☆94 · Updated 7 months ago
- ☆75 · Updated 7 months ago
- ☆17 · Updated 8 months ago
- Official repository for "iVideoGPT: Interactive VideoGPTs are Scalable World Models" (NeurIPS 2024), https://arxiv.org/abs/2405.15223 · ☆121 · Updated 3 weeks ago
- ☆69 · Updated 3 months ago
- ☆16 · Updated 4 months ago
- [ICCV 2023] Understanding 3D Object Interaction from a Single Image · ☆42 · Updated last year
- Code for the paper "HOI-Ref: Hand-Object Interaction Referral in Egocentric Vision" · ☆25 · Updated 11 months ago
- Official implementation of "Self-Improving Video Generation" · ☆62 · Updated 3 weeks ago
- EgoVid-5M: A Large-Scale Video-Action Dataset for Egocentric Video Generation · ☆96 · Updated 4 months ago
- Dreamitate: Real-World Visuomotor Policy Learning via Video Generation (CoRL 2024) · ☆44 · Updated 8 months ago
- ☆42 · Updated 10 months ago
- Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning · ☆51 · Updated last month
- Code for FLIP: Flow-Centric Generative Planning for General-Purpose Manipulation Tasks · ☆48 · Updated 3 months ago
- ☆121 · Updated 2 months ago
- HandsOnVLM: Vision-Language Models for Hand-Object Interaction Prediction · ☆26 · Updated 3 months ago
- [ECCV 2024] Empowering 3D Visual Grounding with Reasoning Capabilities · ☆65 · Updated 5 months ago
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization · ☆94 · Updated last month