uncbiag / Awesome-Foundation-Models
A curated list of foundation models for vision and language tasks
☆1,112 · Updated 4 months ago
Alternatives and similar repositories for Awesome-Foundation-Models
Users interested in Awesome-Foundation-Models are comparing it to the repositories listed below.
- A collection of papers on the topic of "Computer Vision in the Wild (CVinW)" ☆1,340 · Updated last year
- ☆530 · Updated 11 months ago
- This repo lists relevant papers summarized in our survey paper: A Systematic Survey of Prompt Engineering on Vision-Language Foundation … ☆494 · Updated 7 months ago
- Official repository for "AM-RADIO: Reduce All Domains Into One" ☆1,376 · Updated 2 weeks ago
- A collection of literature after or concurrent with Masked Autoencoder (MAE) (Kaiming He et al.) ☆856 · Updated last year
- Recent LLM-based CV and related works. Welcome to comment/contribute! ☆873 · Updated 7 months ago
- Famous Vision Language Models and Their Architectures ☆1,064 · Updated 8 months ago
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ☆922 · Updated 2 months ago
- ICCV 2023-2025 Papers: Discover cutting-edge research from ICCV 2023-25, the leading computer vision conference. Stay updated on the late… ☆954 · Updated this week
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training). ☆1,219 · Updated last year
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more. ☆3,203 · Updated 5 months ago
- VisionLLM Series ☆1,119 · Updated 8 months ago
- Hiera: A fast, powerful, and simple hierarchical vision transformer. ☆1,036 · Updated last year
- CVPR 2023-2024 Papers: Dive into advanced research presented at the leading computer vision conference. Keep up to date with the latest d… ☆457 · Updated last year
- A curated list of prompt-based papers in computer vision and vision-language learning. ☆925 · Updated last year
- A curated publication list on open vocabulary semantic segmentation and related areas (e.g., zero-shot semantic segmentation). ☆751 · Updated last week
- State-of-the-art Image & Video CLIP, Multimodal Large Language Models, and More! ☆1,698 · Updated last month
- Official code for "FeatUp: A Model-Agnostic Framework for Features at Any Resolution" (ICLR 2024) ☆1,594 · Updated last year
- ❄️🔥 Visual Prompt Tuning [ECCV 2022] https://arxiv.org/abs/2203.12119 ☆1,177 · Updated 2 years ago
- TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale. ☆1,661 · Updated this week
- ICLR 2024 Spotlight: curation/training code, metadata, distribution, and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Expert… ☆1,699 · Updated last month
- Out-of-distribution detection, robustness, and generalization resources. The repository contains a curated list of papers, tutorials, boo… ☆956 · Updated last month
- Robust fine-tuning of zero-shot models ☆744 · Updated 3 years ago
- (TPAMI 2024) A Survey on Open Vocabulary Learning ☆956 · Updated 7 months ago
- [CVPR 2024] Alpha-CLIP: A CLIP Model Focusing on Wherever You Want ☆847 · Updated 3 months ago
- Project Page for "LISA: Reasoning Segmentation via Large Language Model" ☆2,458 · Updated 8 months ago
- Official code for VisProg (CVPR 2023 Best Paper!) ☆749 · Updated last year
- Low-rank adaptation for Vision Transformer ☆425 · Updated last year
- [ICLR 2023 Spotlight] Vision Transformer Adapter for Dense Predictions ☆1,429 · Updated 4 months ago
- Collection of awesome test-time (domain/batch/instance) adaptation methods ☆1,118 · Updated 2 weeks ago