Computer-Vision-in-the-Wild / CVinW_Readings
A collection of papers on the topic of "Computer Vision in the Wild (CVinW)"
☆1,288 · Updated last year
Alternatives and similar repositories for CVinW_Readings
Users interested in CVinW_Readings are comparing it to the repositories listed below
- ☆515 · Updated 6 months ago
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training). ☆1,196 · Updated 10 months ago
- Recent LLM-based CV and related works. Welcome to comment/contribute! ☆862 · Updated 2 months ago
- A curated list of prompt-based papers in computer vision and vision-language learning. ☆920 · Updated last year
- EVA Series: Visual Representation Fantasies from BAAI ☆2,484 · Updated 9 months ago
- Grounded Language-Image Pre-training ☆2,396 · Updated last year
- A curated list of foundation models for vision and language tasks ☆998 · Updated 2 weeks ago
- This repo lists relevant papers summarized in our survey paper: A Systematic Survey of Prompt Engineering on Vision-Language Foundation Models ☆457 · Updated last month
- CLIP-like model evaluation ☆708 · Updated last month
- Robust fine-tuning of zero-shot models ☆698 · Updated 3 years ago
- Official code for VisProg (CVPR 2023 Best Paper!) ☆723 · Updated 8 months ago
- Prompt Learning for Vision-Language Models (IJCV'22, CVPR'22) ☆1,951 · Updated 11 months ago
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses that… ☆877 · Updated 5 months ago
- [CVPR 2024] Alpha-CLIP: A CLIP Model Focusing on Wherever You Want ☆817 · Updated 9 months ago
- VisionLLM Series ☆1,059 · Updated 2 months ago
- Collection of AWESOME vision-language models for vision tasks ☆2,720 · Updated this week
- Project Page for "LISA: Reasoning Segmentation via Large Language Model" ☆2,197 · Updated 3 months ago
- ❄️🔥 Visual Prompt Tuning [ECCV 2022] https://arxiv.org/abs/2203.12119 ☆1,107 · Updated last year
- General AI methods for Anything: AnyObject, AnyGeneration, AnyModel, AnyTask, AnyX ☆1,778 · Updated last year
- (TPAMI 2024) A Survey on Open Vocabulary Learning ☆925 · Updated last month
- Code for ALBEF: a new vision-language pre-training method ☆1,650 · Updated 2 years ago
- ICCV 2023 Papers: Discover cutting-edge research from ICCV 2023, the leading computer vision conference. Stay updated on the latest in computer vision… ☆953 · Updated 8 months ago
- ICLR 2024 Spotlight: curation/training code, metadata, distribution and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Experts… ☆1,435 · Updated 2 months ago
- 📖 A curated list of resources dedicated to hallucination in multimodal large language models (MLLMs) ☆681 · Updated last month
- Hiera: A fast, powerful, and simple hierarchical vision transformer. ☆982 · Updated last year
- ☆779 · Updated 10 months ago
- Implementation of 🦩 Flamingo, the state-of-the-art few-shot visual question answering attention net from DeepMind, in PyTorch ☆1,241 · Updated 2 years ago
- A collection of literature after or concurrent with Masked Autoencoder (MAE) (Kaiming He et al.) ☆825 · Updated 10 months ago
- Open-source evaluation toolkit for large multi-modality models (LMMs), supporting 220+ LMMs and 80+ benchmarks ☆2,358 · Updated this week
- TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale. ☆1,594 · Updated last week