[ECCV 2024, Oral, Best Paper Finalist] This is the official implementation of the paper "LEGO: Learning EGOcentric Action Frame Generation via Visual Instruction Tuning".
☆39, updated Feb 24, 2025
Alternatives and similar repositories for LEGO
Users interested in LEGO are comparing it to the repositories listed below.
- [AAAI 2025] Grounded Multi-Hop VideoQA in Long-Form Egocentric Videos (☆32, updated May 27, 2025)
- Data release for Step Differences in Instructional Video (CVPR 2024) (☆14, updated Jun 19, 2024)
- [BMVC 2022, IJCV 2023, Best Student Paper, Spotlight] Official code for the paper "In the Eye of Transformer: Global-Local Correlation for…" (☆31, updated Feb 22, 2025)
- ☆27, updated Jul 20, 2024
- Code for the paper "Executing Arithmetic: Fine-Tuning Large Language Models as Turing Machines" (☆11, updated Oct 11, 2024)
- [CVPR 2024] Data and benchmark code for the EgoExoLearn dataset (☆80, updated Aug 26, 2025)
- [EMNLP 2023] InfoSeek: A New VQA Benchmark Focused on Visual Info-Seeking Questions (☆25, updated May 30, 2024)
- The official PyTorch implementation of the CVPR 2024 paper "PREGO: online mistake detect…" (☆32, updated Jun 9, 2025)
- Official PyTorch implementation of "Masked Temporal Interpolation Diffusion for Procedure Planning in Instructional Videos" (☆11, updated Feb 10, 2026)
- Repository of GUI Action Narrator (☆12, updated Apr 8, 2025)
- Code and data release for the paper "Learning Object State Changes in Videos: An Open-World Perspective" (CVPR 2024) (☆35, updated Sep 9, 2024)
- OVAD: Open-vocabulary Attribute Detection code (☆31, updated Aug 28, 2023)
- ☆13, updated Feb 12, 2024
- COM Kitchens: An Unedited Overhead-view Video Dataset as a Vision-Language Benchmark (☆14, updated Aug 22, 2024)
- [CVPR 2024 Champions][ICLR 2025] Solutions for EgoVis Challenges in CVPR 2024 (☆133, updated May 11, 2025)
- A Comprehensive Benchmark for Robust Multi-image Understanding (☆19, updated Sep 4, 2024)
- Code for "Modeling Multimodal Social Interactions: New Challenges and Baselines with Densely Aligned Representations" (CVPR 2024 Oral) (☆18, updated Jun 23, 2024)
- Code for the paper "AMEGO: Active Memory from long EGOcentric videos" published at ECCV 2024 (☆43, updated Dec 7, 2024)
- [ECCV 2024] EgoCVR: An Egocentric Benchmark for Fine-Grained Composed Video Retrieval (☆41, updated Apr 11, 2025)
- The official implementation of "Error Detection in Egocentric Procedural Task Videos" (☆22, updated Sep 20, 2025)
- Code for the paper "ShowHowTo: Generating Scene-Conditioned Step-by-Step Visual Instructions" published at CVPR 2025 (☆20, updated Mar 16, 2025)
- [ICCV 2023, Oral] Open-domain Visual Entity Recognition: Towards Recognizing Millions of Wikipedia Entities (☆43, updated Jun 7, 2025)
- [ICLR 2024 Poster] SCHEMA: State CHangEs MAtter for Procedure Planning in Instructional Videos (☆20, updated Aug 21, 2025)
- ☆21, updated Mar 18, 2023
- PyTorch implementation of "Transfusion: Predict the Next Token and Diffuse Images with One Multi-Modal Model" (☆27, updated Oct 10, 2024)
- VideoNIAH: A Flexible Synthetic Method for Benchmarking Video MLLMs (☆54, updated Mar 9, 2025)
- Code for the CVPR 2023 paper "Procedure-Aware Pretraining for Instructional Video Understanding" (☆50, updated Jan 27, 2025)
- Action2Sound: Ambient-Aware Generation of Action Sounds from Egocentric Videos (☆25, updated Oct 1, 2024)
- ☆57, updated this week
- ☆40, updated Jun 24, 2025
- Code for the paper "GenHowTo: Learning to Generate Actions and State Transformations from Instructional Videos" published at CVPR 2024 (☆53, updated Mar 3, 2024)
- ☆28, updated Nov 10, 2025
- A curated list of egocentric (first-person) vision and related-area resources (☆310, updated Oct 14, 2024)
- ☆33, updated Jan 2, 2025
- Official PyTorch implementation of the Transformer-based PAUP model for sequential recommendation (SIGIR 2022) (☆10, updated Sep 8, 2022)
- [CVPR 2024] MA-LMM: Memory-Augmented Large Multimodal Model for Long-Term Video Understanding (☆346, updated Jul 19, 2024)
- Official PyTorch code of GroundVQA (CVPR 2024) (☆64, updated Sep 13, 2024)
- [ICRA 2025] LaMOT: Language-Guided Multi-Object Tracking (☆29, updated Feb 10, 2025)
- [NeurIPS 2023] Open-set visual object query search & localization in long-form videos (☆26, updated Feb 1, 2024)