Yutong-Zhou-cv / Awesome-Survey-Papers
A curated list of Survey Papers on Deep Learning.
☆11Updated 2 years ago
Alternatives and similar repositories for Awesome-Survey-Papers
Users interested in Awesome-Survey-Papers are comparing it to the repositories listed below
- Masked Vision-Language Transformer in Fashion☆38Updated 2 years ago
- ☆30Updated 2 years ago
- ☆58Updated last year
- Code for CVPR 2023 paper "SViTT: Temporal Learning of Sparse Video-Text Transformers"☆20Updated 2 years ago
- Official repository for the General Robust Image Task (GRIT) Benchmark☆54Updated 2 years ago
- [ECCV 2024] Parrot Captions Teach CLIP to Spot Text☆66Updated last year
- Official implementation and dataset for the NAACL 2024 paper "ComCLIP: Training-Free Compositional Image and Text Matching"☆37Updated last year
- [PR 2024] A large Cross-Modal Video Retrieval Dataset with Reading Comprehension☆28Updated 2 years ago
- Code for our ICLR 2024 paper "PerceptionCLIP: Visual Classification by Inferring and Conditioning on Contexts"☆79Updated last year
- A curated list of video-text datasets in a variety of languages. These datasets can be used for video captioning (video description) or v…☆39Updated last year
- Code and Models for "GeneCIS: A Benchmark for General Conditional Image Similarity"☆61Updated 2 years ago
- ☆26Updated 2 years ago
- ☆62Updated 2 years ago
- VideoHallucer, the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs)☆42Updated last month
- [CVPR'22] Official PyTorch Implementation of "Collaborative Transformers for Grounded Situation Recognition"☆50Updated 2 years ago
- LAVIS - A One-stop Library for Language-Vision Intelligence☆48Updated last year
- Code for the paper titled "CiT: Curation in Training for Effective Vision-Language Data"☆78Updated 3 years ago
- Code and data for the paper: Learning Action and Reasoning-Centric Image Editing from Videos and Simulation☆33Updated 6 months ago
- ☆35Updated last year
- Benchmarking Multi-Image Understanding in Vision and Language Models☆12Updated last year
- A curated list of papers and resources for text-to-image evaluation.☆30Updated 2 years ago
- Training code for CLIP-FlanT5☆30Updated last year
- Visual Programming for Text-to-Image Generation and Evaluation (NeurIPS 2023)☆57Updated 2 years ago
- [CVPR 2024] The official implementation of paper "Synthesize, Diagnose, and Optimize: Towards Fine-Grained Vision-Language Understanding"☆50Updated 7 months ago
- [CVPR 2023 (Highlight)] FAME-ViL: Multi-Tasking V+L Model for Heterogeneous Fashion Tasks☆55Updated 2 years ago
- [CVPR 2023] HierVL: Learning Hierarchical Video-Language Embeddings☆46Updated 2 years ago
- Official code for "Disentangling Visual Embeddings for Attributes and Objects" Published at CVPR 2022☆35Updated 2 years ago
- Official Code of ECCV 2022 paper MS-CLIP☆91Updated 3 years ago
- FuseCap: Leveraging Large Language Models for Enriched Fused Image Captions☆55Updated last year
- ☆13Updated 8 months ago