Annusha / xmic
X-MIC: Cross-Modal Instance Conditioning for Egocentric Action Generalization, CVPR 2024
☆11 · Updated 2 months ago
Alternatives and similar repositories for xmic:
Users interested in xmic are comparing it to the repositories listed below.
- Official implementation of "In-style: Bridging Text and Uncurated Videos with Style Transfer for Cross-modal Retrieval", ICCV 2023 ☆11 · Updated last year
- ☆25 · Updated last year
- COLA: Evaluate how well your vision-language model can Compose Objects Localized with Attributes! ☆24 · Updated 2 months ago
- Official implementation of "HowToCaption: Prompting LLMs to Transform Video Annotations at Scale", ECCV 2024 ☆50 · Updated 3 months ago
- [ECCV 2024] EgoCVR: An Egocentric Benchmark for Fine-Grained Composed Video Retrieval ☆33 · Updated 5 months ago
- [CVPR 2023 Highlight] CREPE: Can Vision-Language Foundation Models Reason Compositionally? ☆32 · Updated last year
- This repository contains the Adverbs in Recipes (AIR) dataset and the code published with the CVPR 2023 paper "Learning Action Changes by Me…" ☆13 · Updated last year
- [NeurIPS 2023] Official PyTorch code for LOVM: Language-Only Vision Model Selection ☆20 · Updated 11 months ago
- ☆11 · Updated 6 months ago
- Code and datasets for "Text encoders are performance bottlenecks in contrastive vision-language models". Coming soon! ☆11 · Updated last year
- [CVPR 2023] Learning Attention as Disentangler for Compositional Zero-shot Learning ☆40 · Updated last year
- Official This-Is-My Dataset, published at CVPR 2023 ☆15 · Updated 6 months ago
- Implementation of the paper "Helping Hands: An Object-Aware Ego-Centric Video Recognition Model" ☆32 · Updated last year
- Perceptual Grouping in Contrastive Vision-Language Models (ICCV'23) ☆37 · Updated last year
- [ECCV 2024 Oral] Official implementation of the paper "DEVIAS: Learning Disentangled Video Representations of Action and Scene" ☆15 · Updated 3 months ago
- Code and data for the paper "Emergent Visual-Semantic Hierarchies in Image-Text Representations" (ECCV 2024) ☆25 · Updated 5 months ago
- ☆15 · Updated last month
- [ICCV 2023] Disentangling Spatial and Temporal Learning for Efficient Image-to-Video Transfer Learning ☆39 · Updated last year
- HT-Step is a large-scale article-grounding dataset of temporal step annotations on how-to videos ☆17 · Updated 10 months ago
- Compress conventional Vision-Language Pre-training data ☆49 · Updated last year
- ☆26 · Updated this week
- Official PyTorch implementation of the paper "CoVR: Learning Composed Video Retrieval from Web Video Captions" ☆98 · Updated 2 weeks ago
- Repo for the paper "Paxion: Patching Action Knowledge in Video-Language Foundation Models", NeurIPS 2023 Spotlight ☆37 · Updated last year
- ☆16 · Updated last year
- Code for "Semantics Meets Temporal Correspondence: Self-supervised Object-centric Learning in Videos" ☆9 · Updated 5 months ago
- Official implementation of "Test-Time Zero-Shot Temporal Action Localization", CVPR 2024 ☆49 · Updated 4 months ago
- Distribution-Aware Prompt Tuning for Vision-Language Models (ICCV 2023) ☆38 · Updated last year
- [AAAI 2023] Symbolic Replay: Scene Graph as Prompt for Continual Learning on VQA Task (Oral) ☆39 · Updated 10 months ago
- Official code for "Disentangling Visual Embeddings for Attributes and Objects", published at CVPR 2022 ☆35 · Updated last year
- Composed Video Retrieval ☆49 · Updated 8 months ago