Annusha / xmic
X-MIC: Cross-Modal Instance Conditioning for Egocentric Action Generalization, CVPR 2024
☆11 · Updated 2 weeks ago
Related projects
Alternatives and complementary repositories for xmic
- This repository contains the Adverbs in Recipes (AIR) dataset and the code for the CVPR 2023 paper "Learning Action Changes by Me…" ☆13 · Updated last year
- Official implementation of "In-style: Bridging Text and Uncurated Videos with Style Transfer for Cross-modal Retrieval", ICCV 2023. ☆11 · Updated last year
- COLA: Evaluate how well your vision-language model can Compose Objects Localized with Attributes! ☆22 · Updated 5 months ago
- Official implementation of "HowToCaption: Prompting LLMs to Transform Video Annotations at Scale", ECCV 2024. ☆46 · Updated last month
- [CVPR 2023 Highlight] CREPE: Can Vision-Language Foundation Models Reason Compositionally? ☆32 · Updated last year
- HT-Step is a large-scale article grounding dataset of temporal step annotations on how-to videos. ☆16 · Updated 8 months ago
- Compress conventional Vision-Language Pre-training data. ☆49 · Updated last year
- Official implementation of "Test-Time Zero-Shot Temporal Action Localization", CVPR 2024. ☆43 · Updated 2 months ago
- [CVPR 2023] Learning Attention as Disentangler for Compositional Zero-shot Learning. ☆40 · Updated last year
- [CVPR 2024] Do you remember? Dense Video Captioning with Cross-Modal Memory Retrieval. ☆45 · Updated 5 months ago
- Perceptual Grouping in Contrastive Vision-Language Models (ICCV'23). ☆37 · Updated 10 months ago
- Prompt Generation Networks for Input-Space Adaptation of Frozen Vision Transformers. Jochem Loedeman, Maarten C. Stol, Tengda Han, Yuki M… ☆39 · Updated 2 months ago
- [ECCV 2024] EgoCVR: An Egocentric Benchmark for Fine-Grained Composed Video Retrieval. ☆28 · Updated 2 months ago
- Implementation of the paper "Helping Hands: An Object-Aware Ego-Centric Video Recognition Model". ☆31 · Updated last year
- [ECCV 2024 Oral] ActionVOS: Actions as Prompts for Video Object Segmentation. ☆25 · Updated last month
- Official code for "Disentangling Visual Embeddings for Attributes and Objects", published at CVPR 2022. ☆33 · Updated last year
- [NeurIPS 2023] Official PyTorch code for LOVM: Language-Only Vision Model Selection. ☆20 · Updated 9 months ago
- [ICCV 2023] Disentangling Spatial and Temporal Learning for Efficient Image-to-Video Transfer Learning. ☆39 · Updated last year
- Official implementation of the CVPR 2024 paper "vid-TLDR: Training Free Token merging for Light-weight Video Transformer". ☆37 · Updated 6 months ago
- [ICLR 2024] Test-Time Adaptation with CLIP Reward for Zero-Shot Generalization in Vision-Language Models. ☆56 · Updated 3 months ago
- Composed Video Retrieval. ☆46 · Updated 6 months ago
- Learning to Count without Annotations. ☆20 · Updated 5 months ago
- Official repository for the This-Is-My dataset, published at CVPR 2023. ☆15 · Updated 4 months ago
- Official repository for the ICLR 2024 paper "FROSTER: Frozen CLIP is a Strong Teacher for Open-Vocabulary Action Recognition". ☆61 · Updated 7 months ago
- Code and data setup for the paper "Are Diffusion Models Vision-and-Language Reasoners?" ☆31 · Updated 8 months ago
- [AAAI 2023 Oral] Symbolic Replay: Scene Graph as Prompt for Continual Learning on VQA Task. ☆38 · Updated 7 months ago
- Code and datasets for "Text encoders are performance bottlenecks in contrastive vision-language models". Coming soon! ☆11 · Updated last year