uncbiag / Awesome-Foundation-Models
A curated list of foundation models for vision and language tasks
☆1,089 · Updated 2 months ago
Alternatives and similar repositories for Awesome-Foundation-Models
Users who are interested in Awesome-Foundation-Models are comparing it to the libraries listed below.
- ☆529 · Updated 10 months ago
- A collection of papers on the topic of "Computer Vision in the Wild (CVinW)" ☆1,333 · Updated last year
- This repo lists relevant papers summarized in our survey paper: A Systematic Survey of Prompt Engineering on Vision-Language Foundation … ☆482 · Updated 6 months ago
- Official repository for "AM-RADIO: Reduce All Domains Into One" ☆1,344 · Updated 2 weeks ago
- A collection of literature after or concurrent with Masked Autoencoder (MAE) (Kaiming He et al.). ☆851 · Updated last year
- Famous Vision Language Models and Their Architectures ☆1,014 · Updated 6 months ago
- ICCV 2023 Papers: Discover cutting-edge research from ICCV 2023, the leading computer vision conference. Stay updated on the latest in co… ☆955 · Updated last year
- Recent LLM-based CV and related works. Welcome to comment/contribute! ☆872 · Updated 6 months ago
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training). ☆1,215 · Updated last year
- CVPR 2023-2024 Papers: Dive into advanced research presented at the leading computer vision conference. Keep up to date with the latest d… ☆455 · Updated last year
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ☆912 · Updated last month
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more. ☆3,135 · Updated 4 months ago
- A curated list of prompt-based papers in computer vision and vision-language learning. ☆924 · Updated last year
- (TPAMI 2024) A Survey on Open Vocabulary Learning ☆950 · Updated 5 months ago
- ICLR 2024 Spotlight: curation/training code, metadata, distribution and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Expert… ☆1,675 · Updated 3 weeks ago
- State-of-the-art Image & Video CLIP, Multimodal Large Language Models, and More! ☆1,607 · Updated 2 weeks ago
- General AI methods for Anything: AnyObject, AnyGeneration, AnyModel, AnyTask, AnyX ☆1,811 · Updated last year
- A curated publication list on open vocabulary semantic segmentation and related areas (e.g., zero-shot semantic segmentation). ☆719 · Updated this week
- VisionLLM Series ☆1,106 · Updated 6 months ago
- Hiera: A fast, powerful, and simple hierarchical vision transformer. ☆1,026 · Updated last year
- Out-of-distribution detection, robustness, and generalization resources. The repository contains a curated list of papers, tutorials, boo… ☆945 · Updated last week
- This repository provides the code and model checkpoints for the AIMv1 and AIMv2 research projects. ☆1,364 · Updated last month
- This repository is a curated collection of the most exciting and influential CVPR 2024 papers. 🔥 [Paper + Code + Demo] ☆738 · Updated 3 months ago
- Low rank adaptation for Vision Transformer ☆420 · Updated last year
- Official code for "FeatUp: A Model-Agnostic Framework for Features at Any Resolution" (ICLR 2024) ☆1,582 · Updated last year
- Project Page for "LISA: Reasoning Segmentation via Large Language Model" ☆2,409 · Updated 7 months ago
- [T-PAMI-2024] Transformer-Based Visual Segmentation: A Survey ☆751 · Updated last year
- TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale. ☆1,649 · Updated this week
- Collection of awesome test-time (domain/batch/instance) adaptation methods ☆1,077 · Updated this week
- ☆1,833 · Updated last year