Computer-Vision-in-the-Wild / CVinW_Readings
A collection of papers on the topic of "Computer Vision in the Wild (CVinW)"
Related projects:
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training). (See the zero-shot sketch after this list.)
- Grounded Language-Image Pre-training
- Recent LLM-based CV and related works. Welcome to comment/contribute!
- A curated list of prompt-based papers in computer vision and vision-language learning.
- EVA Series: Visual Representation Fantasies from BAAI
- A curated list of foundation models for vision and language tasks
- Official code for VisProg (CVPR 2023 Best Paper!)
- Prompt Learning for Vision-Language Models (IJCV'22, CVPR'22)
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha…
- This repo lists relevant papers summarized in our survey paper: A Systematic Survey of Prompt Engineering on Vision-Language Foundation Models
- TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale.
- Robust fine-tuning of zero-shot models
- VisionLLM Series
- Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention net out of DeepMind, in PyTorch
- A method to increase the speed and lower the memory footprint of existing vision transformers.
- ❄️🔥 Visual Prompt Tuning [ECCV 2022] https://arxiv.org/abs/2203.12119
- [ECCV2024] Video Foundation Models & Data for Multimodal Understanding
- [CVPR 2024] Alpha-CLIP: A CLIP Model Focusing on Wherever You Want
- Hiera: A fast, powerful, and simple hierarchical vision transformer.
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more.
- A collection of literature after or concurrent with Masked Autoencoder (MAE) (Kaiming He et al.).
- General AI methods for Anything: AnyObject, AnyGeneration, AnyModel, AnyTask, AnyX
- ICLR 2024 Spotlight: curation/training code, metadata, distribution and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Expert…
- (TPAMI 2024) A Survey on Open Vocabulary Learning
- [ICLR 2023 Spotlight] Vision Transformer Adapter for Dense Predictions
- [CVPR 2022] Official code for "RegionCLIP: Region-based Language-Image Pretraining"
- awesome grounding: A curated list of research papers in visual grounding
- Open-source evaluation toolkit for large vision-language models (LVLMs); supports ~100 VLMs and 40+ benchmarks
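
Several entries above (the CLIP awesome list, the prompt-learning and prompt-engineering collections) revolve around the same basic mechanism: scoring an image against a set of natural-language prompts. As a rough illustration only, here is a minimal zero-shot classification sketch using the Hugging Face `transformers` CLIP API. The checkpoint name, candidate labels, and image URL are placeholder assumptions, not anything prescribed by the repos listed here.

```python
# Minimal CLIP zero-shot classification sketch (illustrative only).
# Assumes: pip install torch transformers pillow requests
# The checkpoint, labels, and image URL below are arbitrary placeholders.
import requests
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Prompt templates of the form "a photo of a {label}" are the standard
# zero-shot recipe popularized by CLIP; the prompt-learning repos above
# study how to replace or learn these templates.
labels = ["cat", "dog", "bird"]
prompts = [f"a photo of a {label}" for label in labels]

image = Image.open(requests.get("https://example.com/image.jpg", stream=True).raw)

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarity scores scaled by the
# learned temperature; softmax turns them into label probabilities.
probs = outputs.logits_per_image.softmax(dim=-1).squeeze(0)
for label, p in zip(labels, probs.tolist()):
    print(f"{label}: {p:.3f}")
```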