Yutong-Zhou-cv / Awesome-Survey-Papers
A curated list of Survey Papers on Deep Learning.
☆12 · Updated 2 years ago
Alternatives and similar repositories for Awesome-Survey-Papers
Users interested in Awesome-Survey-Papers are comparing it to the libraries listed below.
- Masked Vision-Language Transformer in Fashion ☆38 · Updated 2 years ago
- [WACV 2025 Oral] DeepMIM: Deep Supervision for Masked Image Modeling ☆55 · Updated 7 months ago
- ☆30 · Updated 2 years ago
- [CVPR'22] Official PyTorch Implementation of "Collaborative Transformers for Grounded Situation Recognition" ☆50 · Updated 2 years ago
- [PR 2024] A Large Cross-Modal Video Retrieval Dataset with Reading Comprehension ☆28 · Updated last year
- Code for our ICLR 2024 paper "PerceptionCLIP: Visual Classification by Inferring and Conditioning on Contexts" ☆79 · Updated last year
- Code for CVPR 2023 paper "SViTT: Temporal Learning of Sparse Video-Text Transformers" ☆20 · Updated 2 years ago
- Code and Models for "GeneCIS: A Benchmark for General Conditional Image Similarity" ☆61 · Updated 2 years ago
- Test-Time Training on Video Streams ☆65 · Updated 2 years ago
- ☆58 · Updated last year
- ☆20 · Updated 7 months ago
- ☆26 · Updated 2 years ago
- Official repository for the General Robust Image Task (GRIT) Benchmark ☆54 · Updated 2 years ago
- [ICLR 2023] Contrastive Alignment of Vision to Language Through Parameter-Efficient Transfer Learning ☆40 · Updated 2 years ago
- ☆43 · Updated 2 years ago
- Official implementation and dataset for the NAACL 2024 paper "ComCLIP: Training-Free Compositional Image and Text Matching" ☆37 · Updated last year
- [CVPR 2023] RO-ViT: "Region-Aware Pretraining for Open-Vocabulary Object Detection with Vision Transformers" ☆18 · Updated 2 years ago
- Detectron2 Toolbox and Benchmark for V3Det ☆18 · Updated last year
- [CVPR 2023 (Highlight)] FAME-ViL: Multi-Tasking V+L Model for Heterogeneous Fashion Tasks ☆55 · Updated 2 years ago
- Official code for "Disentangling Visual Embeddings for Attributes and Objects", published at CVPR 2022 ☆35 · Updated 2 years ago
- Code for the paper "CiT: Curation in Training for Effective Vision-Language Data" ☆78 · Updated 2 years ago
- ☆13 · Updated last year
- [ECCV 2022] Bootstrapped Masked Autoencoders for Vision BERT Pretraining ☆97 · Updated 3 years ago
- [ECCV 2024] Parrot Captions Teach CLIP to Spot Text ☆66 · Updated last year
- ☆26 · Updated 2 years ago
- Code for experiments for "ConvNet vs Transformer, Supervised vs CLIP: Beyond ImageNet Accuracy" ☆101 · Updated last year
- [CVPR 2023] HierVL: Learning Hierarchical Video-Language Embeddings ☆46 · Updated 2 years ago
- Official Code of ECCV 2022 paper MS-CLIP ☆91 · Updated 3 years ago
- [CVPR 2023] Zero-shot Generative Model Adaptation via Image-specific Prompt Learning ☆83 · Updated 2 years ago
- [CVPR 2024] The official implementation of the paper "Synthesize, Diagnose, and Optimize: Towards Fine-Grained Vision-Language Understanding" ☆49 · Updated 6 months ago