Annusha / xmic
X-MIC: Cross-Modal Instance Conditioning for Egocentric Action Generalization, CVPR 2024
☆11 · Updated 5 months ago

Alternatives and similar repositories for xmic:
Users interested in xmic are comparing it to the repositories listed below.
- Official implementation of "In-style: Bridging Text and Uncurated Videos with Style Transfer for Cross-modal Retrieval" (ICCV 2023) · ☆11 · Updated last year
- ☆26 · Updated last year
- The Adverbs in Recipes (AIR) dataset and code for the CVPR 2023 paper "Learning Action Changes by Me…" · ☆13 · Updated last year
- [CVPR 2023 Highlight] CREPE: Can Vision-Language Foundation Models Reason Compositionally? · ☆32 · Updated last year
- [ECCV 2024] EgoCVR: An Egocentric Benchmark for Fine-Grained Composed Video Retrieval · ☆35 · Updated this week
- HT-Step: a large-scale article-grounding dataset of temporal step annotations on how-to videos · ☆17 · Updated last year
- COLA: Evaluate how well your vision-language model can Compose Objects Localized with Attributes · ☆24 · Updated 4 months ago
- Code and data for the paper "Emergent Visual-Semantic Hierarchies in Image-Text Representations" (ECCV 2024) · ☆26 · Updated 8 months ago
- Official This-Is-My dataset, published at CVPR 2023 · ☆16 · Updated 8 months ago
- Official implementation of "HowToCaption: Prompting LLMs to Transform Video Annotations at Scale" (ECCV 2024) · ☆52 · Updated 6 months ago
- Official code for "Disentangling Visual Embeddings for Attributes and Objects" (CVPR 2022) · ☆35 · Updated last year
- Perceptual Grouping in Contrastive Vision-Language Models (ICCV 2023) · ☆37 · Updated last year
- [NeurIPS 2024 Spotlight] TOPA: Extend Large Language Models for Video Understanding via Text-Only Pre-Alignment · ☆26 · Updated 6 months ago
- Official implementation of "Test-Time Zero-Shot Temporal Action Localization" (CVPR 2024) · ☆53 · Updated 7 months ago
- Compress conventional vision-language pre-training data · ☆49 · Updated last year
- Code for "Semantics Meets Temporal Correspondence: Self-supervised Object-centric Learning in Videos" · ☆9 · Updated 7 months ago
- Official code for the ICLR 2023 paper "Compositional Prompt Tuning with Motion Cues for Open-vocabulary Video Relation Detection" · ☆43 · Updated 10 months ago
- Repository for the paper "Teaching VLMs to Localize Specific Objects from In-context Examples" · ☆23 · Updated 4 months ago
- Official code repo of "PIN: Positional Insert Unlocks Object Localisation Abilities in VLMs" · ☆26 · Updated 3 months ago
- [AAAI 2023, Oral] Symbolic Replay: Scene Graph as Prompt for Continual Learning on VQA Task · ☆39 · Updated last year
- [CVPR 2023] Learning Attention as Disentangler for Compositional Zero-shot Learning · ☆39 · Updated last year
- ☆15 · Updated last year
- [arXiv 2023] Code for "MUPPET: Multi-Modal Few-Shot Temporal Action Detection" · ☆15 · Updated last year
- Official PyTorch code of GroundVQA (CVPR 2024) · ☆59 · Updated 7 months ago
- Implementation of the paper "Helping Hands: An Object-Aware Ego-Centric Video Recognition Model" · ☆33 · Updated last year
- Official PyTorch implementation of "Facing the Elephant in the Room: Visual Prompt Tuning or Full Finetuning?" (ICLR 2024) · ☆12 · Updated last year
- [ICCV 2023] Disentangling Spatial and Temporal Learning for Efficient Image-to-Video Transfer Learning · ☆41 · Updated last year
- PyTorch implementation of EgoInstructor (CVPR 2024) · ☆18 · Updated 4 months ago
- ☆61 · Updated last year
- Code for "Static and Dynamic Concepts for Self-supervised Video Representation Learning" · ☆10 · Updated 2 years ago