uncbiag / Awesome-Foundation-Models
A curated list of foundation models for vision and language tasks
☆1,140 · Jun 23, 2025 · Updated 7 months ago
Alternatives and similar repositories for Awesome-Foundation-Models
Users interested in Awesome-Foundation-Models are comparing it to the repositories listed below.
- ☆547 · Nov 7, 2024 · Updated last year
- Latest Advances on Multimodal Large Language Models — ☆17,337 · Feb 7, 2026 · Updated last week
- A collection of resources and papers on Diffusion Models — ☆12,273 · Aug 1, 2024 · Updated last year
- This repo lists relevant papers summarized in our survey paper: A Systematic Survey of Prompt Engineering on Vision-Language Foundation … — ☆510 · Mar 18, 2025 · Updated 10 months ago
- PyTorch code and models for the DINOv2 self-supervised learning method. — ☆12,393 · Dec 22, 2025 · Updated last month
- An open source implementation of CLIP. — ☆13,383 · Updated this week
- Collection of AWESOME vision-language models for vision tasks — ☆3,081 · Oct 14, 2025 · Updated 4 months ago
- (TPAMI 2024) A Survey on Open Vocabulary Learning — ☆986 · Dec 24, 2025 · Updated last month
- [ECCV 2024] Official implementation of the paper "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection" — ☆9,725 · Aug 12, 2024 · Updated last year
- This repository is for the first comprehensive survey on Meta AI's Segment Anything Model (SAM). — ☆1,211 · Updated this week
- Tracking and collecting papers/projects/others related to Segment Anything. — ☆1,684 · Mar 13, 2025 · Updated 11 months ago
- A collection of papers on the topic of "Computer Vision in the Wild (CVinW)" — ☆1,354 · Mar 14, 2024 · Updated last year
- [CSUR] A Survey on Video Diffusion Models — ☆2,267 · Jun 27, 2025 · Updated 7 months ago
- ☆502 · Jun 9, 2025 · Updated 8 months ago
- [NeurIPS 2023] Official implementation of the paper "Segment Everything Everywhere All at Once" — ☆4,773 · Aug 19, 2024 · Updated last year
- General AI methods for Anything: AnyObject, AnyGeneration, AnyModel, AnyTask, AnyX — ☆1,842 · Nov 15, 2023 · Updated 2 years ago
- Official repository for "AM-RADIO: Reduce All Domains Into One" — ☆1,634 · Updated this week
- A curated list of prompt-based papers in computer vision and vision-language learning. — ☆928 · Dec 18, 2023 · Updated 2 years ago
- A comprehensive paper list of Vision Transformer/Attention, including papers, code, and related websites — ☆5,011 · Jul 30, 2024 · Updated last year
- A curated list of awesome self-supervised methods — ☆6,361 · Jul 3, 2024 · Updated last year
- [T-PAMI 2024] Transformer-Based Visual Segmentation: A Survey — ☆759 · Aug 25, 2024 · Updated last year
- A curated list of recent diffusion models for video generation, editing, and various other applications. — ☆5,451 · Feb 3, 2026 · Updated last week
- A collection of literature after or concurrent with Masked Autoencoder (MAE) (Kaiming He et al.). — ☆858 · Jul 10, 2024 · Updated last year
- Grounded SAM: Marrying Grounding DINO with Segment Anything & Stable Diffusion & Recognize Anything — Automatically Detect, Segment and … — ☆17,397 · Sep 5, 2024 · Updated last year
- [CVPR 2024] Probing the 3D Awareness of Visual Foundation Models — ☆348 · Dec 1, 2025 · Updated 2 months ago
- LAVIS — A One-stop Library for Language-Vision Intelligence — ☆11,166 · Nov 18, 2024 · Updated last year
- CLIP (Contrastive Language-Image Pretraining): predicts the most relevant text snippet given an image — ☆32,562 · Jul 23, 2024 · Updated last year
- A collection of papers on transformers for detection and segmentation. Awesome Detection Transformer for Computer Vision (CV) — ☆1,394 · Jul 4, 2024 · Updated last year
- EVA Series: Visual Representation Fantasies from BAAI — ☆2,648 · Aug 1, 2024 · Updated last year
- A collection of parameter-efficient transfer learning papers focusing on computer vision and multimodal domains. — ☆411 · Sep 26, 2024 · Updated last year
- A curated list of foundation models for vision and language tasks in medical imaging — ☆299 · Jun 3, 2024 · Updated last year
- A collection of resources on applications of multi-modal learning in medical imaging. — ☆913 · Feb 8, 2026 · Updated last week
- Project Page for "LISA: Reasoning Segmentation via Large Language Model" — ☆2,581 · Feb 16, 2025 · Updated 11 months ago
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond. — ☆24,446 · Aug 12, 2024 · Updated last year
- [ICLR 2023 Spotlight] Vision Transformer Adapter for Dense Predictions — ☆1,471 · Jun 3, 2025 · Updated 8 months ago
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training). — ☆1,232 · Jun 28, 2024 · Updated last year
- [Survey] Masked Modeling for Self-supervised Representation Learning on Vision and Beyond (https://arxiv.org/abs/2401.00897) — ☆353 · Apr 23, 2025 · Updated 9 months ago
- This repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoi… — ☆53,411 · Sep 18, 2024 · Updated last year
- PyTorch code for training Vision Transformers with the self-supervised learning method DINO — ☆7,443 · Jul 3, 2024 · Updated last year