BolinLai / LEGO
[ECCV 2024, Oral, Best Paper Finalist] This is the official implementation of the paper "LEGO: Learning EGOcentric Action Frame Generation via Visual Instruction Tuning".
☆37 · Updated 2 months ago
Alternatives and similar repositories for LEGO:
Users interested in LEGO are comparing it to the repositories listed below.
- IMProv: Inpainting-based Multimodal Prompting for Computer Vision Tasks ☆58 · Updated 7 months ago
- Official code for MotionBench (CVPR 2025) ☆36 · Updated 2 months ago
- [CVPR 2024] Data and benchmark code for the EgoExoLearn dataset ☆57 · Updated 8 months ago
- [ICLR 2024] Seer: Language Instructed Video Prediction with Latent Diffusion Models ☆31 · Updated 11 months ago
- EgoVid-5M: A Large-Scale Video-Action Dataset for Egocentric Video Generation ☆103 · Updated 5 months ago
- Code release for the NeurIPS 2023 paper "SlotDiffusion: Object-centric Learning with Diffusion Models" ☆85 · Updated last year
- Code for the paper "HOI-Ref: Hand-Object Interaction Referral in Egocentric Vision" ☆26 · Updated last year
- [ICML 2024] A Touch, Vision, and Language Dataset for Multimodal Alignment ☆72 · Updated 2 months ago
- Code for the paper "GenHowTo: Learning to Generate Actions and State Transformations from Instructional Videos" published at CVPR 2024 ☆51 · Updated last year
- Code and data release for the paper "Learning Object State Changes in Videos: An Open-World Perspective" (CVPR 2024) ☆32 · Updated 7 months ago
- [ECCV 2024 Oral] ActionVOS: Actions as Prompts for Video Object Segmentation ☆31 · Updated 5 months ago
- [arXiv:2502.05178] QLIP: Text-Aligned Visual Tokenization Unifies Auto-Regressive Multimodal Understanding and Generation ☆69 · Updated 2 months ago
- ☆41 · Updated last week
- ☆61 · Updated last year
- FleVRS: Towards Flexible Visual Relationship Segmentation (NeurIPS 2024) ☆20 · Updated 4 months ago
- Affordance Grounding from Demonstration Video to Target Image (CVPR 2023) ☆44 · Updated 9 months ago
- Egocentric Video Understanding Dataset (EVUD) ☆29 · Updated 10 months ago
- [CVPR 2025] Official PyTorch implementation of GLUS: Global-Local Reasoning Unified into A Single Large Language Model for Video Segmenta… ☆35 · Updated 2 weeks ago
- Can 3D Vision-Language Models Truly Understand Natural Language? ☆21 · Updated last year
- Code and data release for the paper "Learning Fine-grained View-Invariant Representations from Unpaired Ego-Exo Videos via Temporal Align… ☆17 · Updated last year
- Action Scene Graphs for Long-Form Understanding of Egocentric Videos (CVPR 2024) ☆38 · Updated 3 weeks ago
- Training-free Guidance in Text-to-Video Generation via Multimodal Planning and Structured Noise Initialization ☆18 · Updated 3 weeks ago
- [arXiv:2309.16669] Code release for "Training a Large Video Model on a Single Machine in a Day" ☆128 · Updated 9 months ago
- Official implementation of the CVPR 2024 paper "Adaptive Slot Attention: Object Discovery with Dynamic Slot Number" ☆40 · Updated 3 months ago
- Language Repository for Long Video Understanding ☆31 · Updated 10 months ago
- This repo contains the official implementation of the ICLR 2024 paper "Is ImageNet worth 1 video? Learning strong image encoders from 1 long … ☆88 · Updated 11 months ago
- (ECCV 2024) Official repository of the paper "EgoExo-Fitness: Towards Egocentric and Exocentric Full-Body Action Understanding" ☆28 · Updated 3 weeks ago
- [ICCV 2023] EgoObjects: A Large-Scale Egocentric Dataset for Fine-Grained Object Understanding ☆76 · Updated last year
- [BMVC 2022, IJCV 2023, Best Student Paper, Spotlight] Official code for the paper "In the Eye of Transformer: Global-Local Correlation for… ☆24 · Updated 2 months ago
- Vinci: A Real-time Embodied Smart Assistant based on Egocentric Vision-Language Model ☆59 · Updated 3 months ago