Yutong-Zhou-cv / Awesome-Survey-Papers
A curated list of Survey Papers on Deep Learning.
☆12 · Updated last year
Alternatives and similar repositories for Awesome-Survey-Papers
Users interested in Awesome-Survey-Papers are comparing it to the repositories listed below.
- [PR 2024] A large Cross-Modal Video Retrieval Dataset with Reading Comprehension (☆26, updated last year)
- Masked Vision-Language Transformer in Fashion (☆33, updated last year)
- [CVPR 2022] Official PyTorch implementation of "Collaborative Transformers for Grounded Situation Recognition" (☆49, updated 2 years ago)
- Implementation of MC-ViT from the paper "Memory Consolidation Enables Long-Context Video Understanding" (☆20, updated 2 months ago)
- [CVPR 2023] The official dataset of "Advancing Visual Grounding with Scene Knowledge: Benchmark and Method" (☆30, updated last year)
- Official implementation and dataset for the NAACL 2024 paper "ComCLIP: Training-Free Compositional Image and Text Matching" (☆34, updated 9 months ago)
- [NeurIPS 2024] Official implementation of the paper "DeepStack: Deeply Stacking Visual Tokens is Surprisingly Simple and Effect…" (☆35, updated 11 months ago)
- COLA: Evaluate how well your vision-language model can Compose Objects Localized with Attributes! (☆24, updated 6 months ago)
- [ICME 2023] FlowText: Synthesizing Realistic Scene Text Video with Optical Flow Estimation (☆11, updated 2 years ago)
- Visual Instruction-guided Explainable Metric. Code for "Towards Explainable Metrics for Conditional Image Synthesis Evaluation" (ACL 2024) (☆44, updated 6 months ago)
- Code and data for the paper "Learning Action and Reasoning-Centric Image Editing from Videos and Simulation" (☆28, updated 4 months ago)
- [CVPR 2023] RO-ViT: "Region-Aware Pretraining for Open-Vocabulary Object Detection with Vision Transformers" (☆18, updated last year)
- VideoHallucer: the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs) (☆30, updated 2 months ago)
- Code for the paper "CiT: Curation in Training for Effective Vision-Language Data" (☆78, updated 2 years ago)
- [ECCV 2024] Parrot Captions Teach CLIP to Spot Text (☆66, updated 9 months ago)
- Code for the paper "Unified Text-to-Image Generation and Retrieval" (☆15, updated 11 months ago)
- [CVPR 2022] OCSampler: Compressing Videos to One Clip with Single-step Sampling (☆17, updated 2 years ago)
- [ECCV 2024] Official implementation of "Stitched ViTs are Flexible Vision Backbones" (☆27, updated last year)
- An Enhanced CLIP Framework for Learning with Synthetic Captions (☆34, updated last month)
- Visual Programming for Text-to-Image Generation and Evaluation (NeurIPS 2023) (☆56, updated last year)
- Code for the CVPR 2023 paper "SViTT: Temporal Learning of Sparse Video-Text Transformers" (☆18, updated last year)
- Clipora: a toolkit for fine-tuning OpenCLIP models with Low-Rank Adapters (LoRA) (☆22, updated 9 months ago)
- Official PyTorch implementation of Self-emerging Token Labeling (☆33, updated last year)
- [CVPR 2023] Official code for "Learning Procedure-aware Video Representation from Instructional Videos and Their Narrations" (☆53, updated last year)