Annusha / xmic
X-MIC: Cross-Modal Instance Conditioning for Egocentric Action Generalization, CVPR 2024
☆11 · Updated 3 months ago
Alternatives and similar repositories for xmic:
Users interested in xmic are comparing it to the repositories listed below.
- Official implementation of "In-style: Bridging Text and Uncurated Videos with Style Transfer for Cross-modal Retrieval". ICCV 2023 ☆11 · Updated last year
- [CVPR23 Highlight] CREPE: Can Vision-Language Foundation Models Reason Compositionally? ☆32 · Updated last year
- This repository contains the Adverbs in Recipes (AIR) dataset and the code published with the CVPR 23 paper: "Learning Action Changes by Me… ☆13 · Updated last year
- [ECCV 2024] EgoCVR: An Egocentric Benchmark for Fine-Grained Composed Video Retrieval ☆35 · Updated 6 months ago
- Official implementation of "HowToCaption: Prompting LLMs to Transform Video Annotations at Scale." ECCV 2024 ☆50 · Updated 5 months ago
- Code and data for the paper "Emergent Visual-Semantic Hierarchies in Image-Text Representations" (ECCV 2024) ☆26 · Updated 6 months ago
- COLA: Evaluate how well your vision-language model can Compose Objects Localized with Attributes! ☆24 · Updated 3 months ago
- ☆26 · Updated last year
- [AAAI 2023] Symbolic Replay: Scene Graph as Prompt for Continual Learning on VQA Task (Oral) ☆39 · Updated 11 months ago
- HT-Step is a large-scale article grounding dataset of temporal step annotations on how-to videos ☆17 · Updated 11 months ago
- Implementation of the paper 'Helping Hands: An Object-Aware Ego-Centric Video Recognition Model' ☆33 · Updated last year
- [NeurIPS 2023] Official PyTorch code for LOVM: Language-Only Vision Model Selection ☆20 · Updated last year
- [CVPR 2024] Improving language-visual pretraining efficiency by performing cluster-based masking on images. ☆26 · Updated 9 months ago
- ☆16 · Updated last year
- Compress conventional Vision-Language Pre-training data ☆49 · Updated last year
- [CVPR 2023] Learning Attention as Disentangler for Compositional Zero-shot Learning ☆40 · Updated last year
- Perceptual Grouping in Contrastive Vision-Language Models (ICCV'23) ☆37 · Updated last year
- Official code for "Disentangling Visual Embeddings for Attributes and Objects", published at CVPR 2022 ☆35 · Updated last year
- Official code for the ICLR 2023 paper "Compositional Prompt Tuning with Motion Cues for Open-vocabulary Video Relation Detection" ☆43 · Updated 9 months ago
- Code and datasets for "Text encoders are performance bottlenecks in contrastive vision-language models". Coming soon! ☆11 · Updated last year
- Official code repo of PIN: Positional Insert Unlocks Object Localisation Abilities in VLMs ☆25 · Updated last month
- Code for Semantics Meets Temporal Correspondence: Self-supervised Object-centric Learning in Videos ☆9 · Updated 6 months ago
- This repo contains the official implementation of the ICLR 2024 paper "Is ImageNet worth 1 video? Learning strong image encoders from 1 long … ☆84 · Updated 9 months ago
- [ECCV 2024 Oral] Official implementation of the paper "DEVIAS: Learning Disentangled Video Representations of Action and Scene" ☆15 · Updated 4 months ago
- Code and data setup for the paper "Are Diffusion Models Vision-and-language Reasoners?" ☆31 · Updated 11 months ago
- Repo for the paper "Paxion: Patching Action Knowledge in Video-Language Foundation Models", NeurIPS 23 Spotlight ☆37 · Updated last year
- Official repository for the ICCV 2023 paper: "Waffling around for Performance: Visual Classification with Random Words and Broad Concepts… ☆56 · Updated last year
- Composed Video Retrieval ☆50 · Updated 10 months ago
- Official PyTorch code of GroundVQA (CVPR'24) ☆56 · Updated 5 months ago
- [CVPR 2024] Contrasting Intra-Modal and Ranking Cross-Modal Hard Negatives to Enhance Visio-Linguistic Fine-grained Understanding ☆45 · Updated 7 months ago