BolinLai / LEGO
[ECCV 2024, Oral, Best Paper Finalist] This is the official implementation of the paper "LEGO: Learning EGOcentric Action Frame Generation via Visual Instruction Tuning".
☆34 · Updated 3 weeks ago
Related projects
Alternatives and complementary repositories for LEGO
- Egocentric Video Understanding Dataset (EVUD) ☆24 · Updated 4 months ago
- IMProv: Inpainting-based Multimodal Prompting for Computer Vision Tasks ☆59 · Updated last month
- Code release for the NeurIPS 2023 paper "SlotDiffusion: Object-centric Learning with Diffusion Models" ☆78 · Updated 10 months ago
- Code and data release for the paper "Learning Object State Changes in Videos: An Open-World Perspective" (CVPR 2024) ☆30 · Updated 2 months ago
- Code release for "EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone" [ICCV 2023] ☆90 · Updated 4 months ago
- [arXiv:2309.16669] Code release for "Training a Large Video Model on a Single Machine in a Day" ☆118 · Updated 3 months ago
- Official implementation of the ICLR 2024 paper "Is ImageNet worth 1 video? Learning strong image encoders from 1 long …" ☆61 · Updated 6 months ago
- ☆62 · Updated 4 months ago
- Ego4D Goal-Step: Toward Hierarchical Understanding of Procedural Activities (NeurIPS 2023) ☆37 · Updated 7 months ago
- [CVPR 2024] Data and benchmark code for the EgoExoLearn dataset ☆47 · Updated 2 months ago
- ☆58 · Updated last year
- Action Scene Graphs for Long-Form Understanding of Egocentric Videos (CVPR 2024) ☆30 · Updated last month
- [ECCV 2024] Code for "Betrayed by Attention: A Simple yet Effective Approach for Self-supervised Video Object Segmentation" ☆27 · Updated 4 months ago
- Official implementation of the CVPR 2024 paper "Adaptive Slot Attention: Object Discovery with Dynamic Slot Number" ☆25 · Updated this week
- [CVPR 2024 Champions] Solutions for the EgoVis Challenges in CVPR 2024 ☆109 · Updated 4 months ago
- [ECCV 2024 Oral] ActionVOS: Actions as Prompts for Video Object Segmentation ☆26 · Updated last month
- Code for the paper "Grounding Video Models to Actions through Goal Conditioned Exploration" ☆30 · Updated last week
- Accepted by CVPR 2024 ☆28 · Updated 6 months ago
- EgoVid-5M: A Large-Scale Video-Action Dataset for Egocentric Video Generation ☆66 · Updated last week
- Affordance Grounding from Demonstration Video to Target Image (CVPR 2023) ☆39 · Updated 3 months ago
- Code and data release for the paper "Learning Fine-grained View-Invariant Representations from Unpaired Ego-Exo Videos via Temporal Align…" ☆13 · Updated 7 months ago
- Official implementation of the paper "Boosting Human-Object Interaction Detection with Text-to-Image Diffusion Model" ☆49 · Updated last year
- Language Repository for Long Video Understanding ☆29 · Updated 5 months ago
- Official implementation of "HowToCaption: Prompting LLMs to Transform Video Annotations at Scale" (ECCV 2024) ☆46 · Updated last month
- [ICCV 2023] EgoObjects: A Large-Scale Egocentric Dataset for Fine-Grained Object Understanding ☆75 · Updated last year
- Code and data for the paper "Emergent Visual-Semantic Hierarchies in Image-Text Representations" (ECCV 2024) ☆22 · Updated 3 months ago
- Can 3D Vision-Language Models Truly Understand Natural Language? ☆21 · Updated 7 months ago
- Code for the paper "GenHowTo: Learning to Generate Actions and State Transformations from Instructional Videos", published at CVPR 2024 ☆44 · Updated 8 months ago
- Official PyTorch code of "Grounded Question-Answering in Long Egocentric Videos", accepted by CVPR 2024 ☆52 · Updated 2 months ago
- Code release for the paper "Egocentric Video Task Translation" (CVPR 2023 Highlight) ☆31 · Updated last year