ninatu / in_style
Official implementation of "In-style: Bridging Text and Uncurated Videos with Style Transfer for Cross-modal Retrieval" (ICCV 2023).
☆11 · Updated last year
Alternatives and similar repositories for in_style:
Users interested in in_style are comparing it to the repositories listed below
- X-MIC: Cross-Modal Instance Conditioning for Egocentric Action Generalization, CVPR 2024 ☆11 · Updated 5 months ago
- ☆26 · Updated last year
- This repository contains the Adverbs in Recipes (AIR) dataset and the code published with the CVPR 2023 paper "Learning Action Changes by Me…" ☆13 · Updated last year
- Implementation of the paper 'Helping Hands: An Object-Aware Ego-Centric Video Recognition Model' ☆33 · Updated last year
- ☆16 · Updated 4 months ago
- Official code for "Disentangling Visual Embeddings for Attributes and Objects", published at CVPR 2022 ☆35 · Updated last year
- Prompt Generation Networks for Input-Space Adaptation of Frozen Vision Transformers. Jochem Loedeman, Maarten C. Stol, Tengda Han, Yuki M… ☆42 · Updated 7 months ago
- [CVPR 2023] Learning Attention as Disentangler for Compositional Zero-shot Learning ☆39 · Updated last year
- Official PyTorch implementation of the paper "Adversarial Bipartite Graph Learning for Video Domain Adaptation" (MM 2020 Oral) ☆10 · Updated 2 years ago
- [NeurIPS 2023] Official PyTorch code for LOVM: Language-Only Vision Model Selection ☆21 · Updated last year
- On-Device Domain Generalization ☆43 · Updated 2 years ago
- Code for Static and Dynamic Concepts for Self-supervised Video Representation Learning ☆10 · Updated 2 years ago
- ☆56 · Updated this week
- (NeurIPS 2019) Combinatorial Inference against Label Noise ☆11 · Updated 10 months ago
- Official PyTorch implementation of 'Facing the Elephant in the Room: Visual Prompt Tuning or Full Finetuning?' (ICLR 2024) ☆14 · Updated last year
- Revisiting Test Time Adaptation Under Online Evaluation ☆32 · Updated last year
- Official code for the ICLR 2023 paper "Compositional Prompt Tuning with Motion Cues for Open-vocabulary Video Relation Detection" ☆43 · Updated 11 months ago
- ☆12 · Updated last year
- Official code for the paper "TaCA: Upgrading Your Visual Foundation Model with Task-agnostic Compatible Adapter" ☆16 · Updated last year
- [CVPR23 Highlight] CREPE: Can Vision-Language Foundation Models Reason Compositionally? ☆32 · Updated 2 years ago
- ☆47 · Updated 2 years ago
- [ICLR 2024 Spotlight] Bounding Box Stability against Feature Dropout Reflects Detector Generalization across Environments ☆21 · Updated 6 months ago
- [AAAI2023] Symbolic Replay: Scene Graph as Prompt for Continual Learning on VQA Task (Oral) ☆39 · Updated last year
- Compress conventional Vision-Language Pre-training data ☆49 · Updated last year
- ICCV 2023: Disentangling Spatial and Temporal Learning for Efficient Image-to-Video Transfer Learning ☆41 · Updated last year
- CVPR 2022: Learning from Untrimmed Videos: Self-Supervised Video Representation Learning with Hierarchical Consistency ☆17 · Updated 2 years ago
- COLA: Evaluate how well your vision-language model can Compose Objects Localized with Attributes! ☆24 · Updated 5 months ago
- Perceptual Grouping in Contrastive Vision-Language Models (ICCV'23) ☆36 · Updated last year
- Repo of NeurIPS23 ☆15 · Updated last year
- [ICCV 23] This is a PyTorch implementation of our paper "SMMix: Self-Motivated Image Mixing for Vision Transformers" ☆16 · Updated last year