Code and data release for the paper "Learning Object State Changes in Videos: An Open-World Perspective" (CVPR 2024)
☆35 · Sep 9, 2024 · Updated last year
Alternatives and similar repositories for VidOSC
Users interested in VidOSC are comparing it to the repositories listed below.
- Code and data release for the paper "Learning Fine-grained View-Invariant Representations from Unpaired Ego-Exo Videos via Temporal Align…" ☆19 · Apr 5, 2024 · Updated last year
- 🔥🔥🔥 Object State Description & Change Detection ☆10 · Mar 30, 2024 · Updated last year
- ECCV 2024 STMA & CVPR 2024 1st MOSE & 1st VOT Challenge & 1st LSVOS v6 ☆12 · Oct 16, 2024 · Updated last year
- Progress-Aware Online Action Segmentation for Egocentric Procedural Task Videos ☆29 · Sep 9, 2024 · Updated last year
- Official PyTorch code of GroundVQA (CVPR'24) ☆64 · Sep 13, 2024 · Updated last year
- ChangeIt: a dataset with more than 2,600 hours of video of state-changing actions, published at CVPR 2022 ☆11 · Mar 23, 2022 · Updated 3 years ago
- Agentic Keyframe Search for Video Question Answering ☆16 · Apr 7, 2025 · Updated 10 months ago
- [ICCV 2023] How Much Temporal Long-Term Context is Needed for Action Segmentation? ☆50 · Jun 21, 2024 · Updated last year
- [CVPR 2024] KEPP: Why Not Use Your Textbook? Knowledge-Enhanced Procedure Planning of Instructional Videos ☆12 · Sep 24, 2024 · Updated last year
- Official repo for the CVPR 2024 paper "FACT: Frame-Action Cross-Attention Temporal Modeling for Efficient Fully-Supervised Action Segmentatio…" ☆86 · Jan 23, 2026 · Updated last month
- ☆31 · Mar 5, 2025 · Updated 11 months ago
- Code release for the paper "Progress-Aware Video Frame Captioning" (CVPR 2025) ☆21 · Jul 16, 2025 · Updated 7 months ago
- (NeurIPS 2023) Open-set visual object query search & localization in long-form videos ☆26 · Feb 1, 2024 · Updated 2 years ago
- Code release for the paper "Egocentric Video Task Translation" (CVPR 2023 Highlight) ☆34 · Jun 12, 2023 · Updated 2 years ago
- Data release for "Step Differences in Instructional Video" (CVPR 2024) ☆14 · Jun 19, 2024 · Updated last year
- [ICCV 2025] All in One: Visual-Description-Guided Unified Point Cloud Segmentation ☆28 · Jul 25, 2025 · Updated 7 months ago
- ☆21 · Aug 19, 2024 · Updated last year
- ☆16 · Apr 4, 2025 · Updated 10 months ago
- ☆16 · Oct 4, 2024 · Updated last year
- TEMPURA enables video-language models to reason about causal event relationships and generate fine-grained, timestamped descriptions of u… ☆25 · Jun 4, 2025 · Updated 8 months ago
- Code for the paper "Differentiable Task Graph Learning: Procedural Activity Representation and Online Mistake Detection from Egocentric V…" ☆20 · Jan 9, 2025 · Updated last year
- [ICCV 2023] EgoObjects: A Large-Scale Egocentric Dataset for Fine-Grained Object Understanding ☆79 · Oct 6, 2023 · Updated 2 years ago
- Code for the paper "ShowHowTo: Generating Scene-Conditioned Step-by-Step Visual Instructions", published at CVPR 2025 ☆20 · Mar 16, 2025 · Updated 11 months ago
- [CVPR 2025] Official PyTorch code of "Enhancing Video-LLM Reasoning via Agent-of-Thoughts Distillation" ☆54 · May 25, 2025 · Updated 9 months ago
- ☆44 · Jan 13, 2026 · Updated last month
- [ECCV 2024, Oral, Best Paper Finalist] Official implementation of the paper "LEGO: Learning EGOcentric Action Frame Generation…" ☆39 · Feb 24, 2025 · Updated last year
- CLEVR3D dataset: Comprehensive Visual Question Answering on Point Clouds through Compositional Scene Manipulation ☆20 · Feb 2, 2024 · Updated 2 years ago
- [CVPR 2024 Champions][ICLR 2025] Solutions for EgoVis Challenges in CVPR 2024 ☆133 · May 11, 2025 · Updated 9 months ago
- Taming Self-Training for Open-Vocabulary Object Detection (CVPR 2024) ☆21 · Dec 30, 2023 · Updated 2 years ago
- [ICLR 2024 Poster] SCHEMA: State CHangEs MAtter for Procedure Planning in Instructional Videos ☆20 · Aug 21, 2025 · Updated 6 months ago
- Official implementation of RGNet: A Unified Retrieval and Grounding Network for Long Videos ☆19 · Mar 3, 2025 · Updated last year
- [CVPR 2023] HierVL: Learning Hierarchical Video-Language Embeddings ☆46 · Aug 14, 2023 · Updated 2 years ago
- Code for the CVPR 2023 paper "Procedure-Aware Pretraining for Instructional Video Understanding" ☆50 · Jan 27, 2025 · Updated last year
- ☆25 · Jul 10, 2023 · Updated 2 years ago
- Code for the paper "GenHowTo: Learning to Generate Actions and State Transformations from Instructional Videos", published at CVPR 2024 ☆53 · Mar 3, 2024 · Updated last year
- ☆22 · Oct 21, 2024 · Updated last year
- Official implementation of "A Backpack Full of Skills: Egocentric Video Understanding with Diverse Task Perspectives", accepted at CVPR 2… ☆24 · Jun 13, 2024 · Updated last year
- ☆23 · Sep 5, 2023 · Updated 2 years ago
- Code implementation for the paper "HOI-Ref: Hand-Object Interaction Referral in Egocentric Vision" ☆29 · Apr 16, 2024 · Updated last year