uncbiag / Awesome-Foundation-Models
A curated list of foundation models for vision and language tasks
☆1,130 Updated 6 months ago
Alternatives and similar repositories for Awesome-Foundation-Models
Users interested in Awesome-Foundation-Models are comparing it to the repositories listed below
- A collection of papers on the topic of "Computer Vision in the Wild (CVinW)" ☆1,349 Updated last year
- ☆542 Updated last year
- This repo lists relevant papers summarized in our survey paper: A Systematic Survey of Prompt Engineering on Vision-Language Foundation … ☆505 Updated 9 months ago
- A collection of literature after or concurrent with Masked Autoencoder (MAE) (Kaiming He et al.). ☆857 Updated last year
- ICCV 2023-2025 Papers: Discover cutting-edge research from ICCV 2023-25, the leading computer vision conference. Stay updated on the late… ☆967 Updated 2 months ago
- Official repository for "AM-RADIO: Reduce All Domains Into One" ☆1,425 Updated this week
- Famous Vision Language Models and Their Architectures ☆1,132 Updated 10 months ago
- Recent LLM-based CV and related works. Welcome to comment/contribute! ☆874 Updated 9 months ago
- CVPR 2023-2024 Papers: Dive into advanced research presented at the leading computer vision conference. Keep up to date with the latest d… ☆455 Updated last year
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training). ☆1,231 Updated last year
- Collection of AWESOME vision-language models for vision tasks ☆3,046 Updated 2 months ago
- A curated list of publications and resources on open vocabulary semantic segmentation and related areas (e.g., zero-shot semantic segmentation). ☆803 Updated 2 months ago
- A curated list of prompt-based papers in computer vision and vision-language learning. ☆928 Updated 2 years ago
- Official code for "FeatUp: A Model-Agnostic Framework for Features at Any Resolution", ICLR 2024 ☆1,620 Updated last year
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ☆938 Updated 5 months ago
- Low rank adaptation for Vision Transformer ☆428 Updated last year
- This repository is a curated collection of the most exciting and influential CVPR 2024 papers. 🔥 [Paper + Code + Demo] ☆743 Updated 7 months ago
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more. ☆3,308 Updated 7 months ago
- Hiera: A fast, powerful, and simple hierarchical vision transformer. ☆1,045 Updated last year
- (TPAMI 2024) A Survey on Open Vocabulary Learning ☆978 Updated 2 weeks ago
- ❄️🔥 Visual Prompt Tuning [ECCV 2022] https://arxiv.org/abs/2203.12119 ☆1,200 Updated 2 years ago
- VisionLLM Series ☆1,132 Updated 10 months ago
- Collection of awesome test-time (domain/batch/instance) adaptation methods ☆1,170 Updated last month
- A paper list of some recent Transformer-based CV works. ☆1,397 Updated last month
- NeurIPS 2025 Spotlight; ICLR 2024 Spotlight; CVPR 2024; EMNLP 2024 ☆1,788 Updated last month
- ☆62 Updated 2 years ago
- Best Papers of Top Venues like CVPR, NeurIPS, ICLR, ICML, ICCV, ECCV, ... ☆261 Updated 3 weeks ago
- State-of-the-art Image & Video CLIP, Multimodal Large Language Models, and More! ☆2,038 Updated 2 weeks ago
- TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale. ☆1,682 Updated last week
- Project Page for "LISA: Reasoning Segmentation via Large Language Model" ☆2,549 Updated 10 months ago