uncbiag / Awesome-Foundation-Models
A curated list of foundation models for vision and language tasks
☆1,137 · Updated 7 months ago
Alternatives and similar repositories for Awesome-Foundation-Models
Users interested in Awesome-Foundation-Models are comparing it to the libraries listed below
- A collection of papers on the topic of "Computer Vision in the Wild (CVinW)" ☆1,353 · Updated last year
- A collection of literature after or concurrent with Masked Autoencoder (MAE) (Kaiming He et al.). ☆858 · Updated last year
- This repo lists relevant papers summarized in our survey paper: A Systematic Survey of Prompt Engineering on Vision-Language Foundation … ☆508 · Updated 10 months ago
- ICCV 2023-2025 Papers: Discover cutting-edge research from ICCV 2023-25, the leading computer vision conference. Stay updated on the late… ☆970 · Updated 2 months ago
- Recent LLM-based CV and related works. Welcome to comment/contribute! ☆874 · Updated 10 months ago
- Famous Vision Language Models and Their Architectures ☆1,161 · Updated 2 weeks ago
- CVPR 2023-2024 Papers: Dive into advanced research presented at the leading computer vision conference. Keep up to date with the latest d… ☆454 · Updated last year
- Official repository for "AM-RADIO: Reduce All Domains Into One" ☆1,442 · Updated this week
- Hiera: A fast, powerful, and simple hierarchical vision transformer. ☆1,050 · Updated last year
- A curated list of prompt-based papers in computer vision and vision-language learning. ☆929 · Updated 2 years ago
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more. ☆3,339 · Updated 8 months ago
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training). ☆1,231 · Updated last year
- Collection of AWESOME vision-language models for vision tasks ☆3,067 · Updated 3 months ago
- A paper list of some recent Transformer-based CV works. ☆1,420 · Updated 2 months ago
- Official code for VisProg (CVPR 2023 Best Paper!) ☆758 · Updated last year
- VisionLLM Series ☆1,133 · Updated 11 months ago
- [T-PAMI-2024] Transformer-Based Visual Segmentation: A Survey ☆758 · Updated last year
- A curated publication list on open vocabulary semantic segmentation and related areas (e.g., zero-shot semantic segmentation). ☆823 · Updated last week
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ☆939 · Updated 5 months ago
- NeurIPS 2025 Spotlight; ICLR 2024 Spotlight; CVPR 2024; EMNLP 2024 ☆1,807 · Updated 2 months ago
- TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale. ☆1,691 · Updated this week
- Out-of-distribution detection, robustness, and generalization resources. The repository contains a curated list of papers, tutorials, boo… ☆974 · Updated 2 months ago
- Collection of awesome test-time (domain/batch/instance) adaptation methods ☆1,196 · Updated 2 months ago
- Low rank adaptation for Vision Transformer ☆430 · Updated last year
- [ICLR 2023 Spotlight] Vision Transformer Adapter for Dense Predictions ☆1,463 · Updated 7 months ago
- Official code for "FeatUp: A Model-Agnostic Frameworkfor Features at Any Resolution" ICLR 2024☆1,625Updated last year
- (TPAMI 2024) A Survey on Open Vocabulary Learning ☆985 · Updated last month
- This repository provides the code and model checkpoints for AIMv1 and AIMv2 research projects. ☆1,395 · Updated 5 months ago
- General AI methods for Anything: AnyObject, AnyGeneration, AnyModel, AnyTask, AnyX ☆1,841 · Updated 2 years ago