BolinLai / LEGO
[ECCV 2024, Oral, Best Paper Finalist] This is the official implementation of the paper "LEGO: Learning EGOcentric Action Frame Generation via Visual Instruction Tuning".
☆36 · Updated this week
Alternatives and similar repositories for LEGO:
Users interested in LEGO are comparing it to the repositories listed below
- IMProv: Inpainting-based Multimodal Prompting for Computer Vision Tasks ☆59 · Updated 4 months ago
- Code release for NeurIPS 2023 paper SlotDiffusion: Object-centric Learning with Diffusion Models ☆82 · Updated last year
- Official code for MotionBench ☆24 · Updated last month
- [CVPR 2024] Data and benchmark code for the EgoExoLearn dataset ☆54 · Updated 5 months ago
- ☆65 · Updated last week
- EgoVid-5M: A Large-Scale Video-Action Dataset for Egocentric Video Generation ☆90 · Updated 3 months ago
- Egocentric Video Understanding Dataset (EVUD) ☆26 · Updated 7 months ago
- Action Scene Graphs for Long-Form Understanding of Egocentric Videos (CVPR 2024) ☆35 · Updated 3 months ago
- Code and data release for the paper "Learning Object State Changes in Videos: An Open-World Perspective" (CVPR 2024) ☆32 · Updated 5 months ago
- Official implementation of the CVPR'24 paper "Adaptive Slot Attention: Object Discovery with Dynamic Slot Number" ☆32 · Updated 3 weeks ago
- Code for the paper "HOI-Ref: Hand-Object Interaction Referral in Egocentric Vision" ☆25 · Updated 10 months ago
- [ECCV 2024 Oral] ActionVOS: Actions as Prompts for Video Object Segmentation ☆31 · Updated 2 months ago
- Code for the paper "Video-3D LLM: Learning Position-Aware Video Representation for 3D Scene Understanding" ☆46 · Updated last month
- ☆58 · Updated last year
- This repo contains the official implementation of ICLR 2024 paper "Is ImageNet worth 1 video? Learning strong image encoders from 1 long … ☆80 · Updated 9 months ago
- Code release for the paper "Egocentric Video Task Translation" (CVPR 2023 Highlight) ☆32 · Updated last year
- [ICCV 2023] EgoObjects: A Large-Scale Egocentric Dataset for Fine-Grained Object Understanding ☆75 · Updated last year
- Ego4D Goal-Step: Toward Hierarchical Understanding of Procedural Activities (NeurIPS 2023) ☆39 · Updated 10 months ago
- Code for the paper "Grounding Video Models to Actions through Goal Conditioned Exploration" ☆41 · Updated last month
- Code release for "EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone" [ICCV 2023] ☆94 · Updated 7 months ago
- Data release for Step Differences in Instructional Video (CVPR 2024) ☆12 · Updated 8 months ago
- Code for the paper "GenHowTo: Learning to Generate Actions and State Transformations from Instructional Videos" published at CVPR 2024 ☆50 · Updated 11 months ago
- Official PyTorch implementation of LARP: Tokenizing Videos with a Learned Autoregressive Generative Prior (ICLR 2025 Oral) ☆50 · Updated last week
- Official implementation of the paper "Boosting Human-Object Interaction Detection with Text-to-Image Diffusion Model" ☆56 · Updated last year
- Diffusion Powers Video Tokenizer for Comprehension and Generation ☆64 · Updated 2 months ago
- FleVRS: Towards Flexible Visual Relationship Segmentation, NeurIPS 2024 ☆21 · Updated 2 months ago
- [ICLR 2024] Seer: Language Instructed Video Prediction with Latent Diffusion Models ☆24 · Updated 8 months ago
- Can 3D Vision-Language Models Truly Understand Natural Language? ☆21 · Updated 10 months ago