SkalskiP / awesome-foundation-and-multimodal-models
👁️ + 💬 + 🎧 = 🤖 Curated list of top foundation and multimodal models! [Paper + Code + Examples + Tutorials]
⭐ 614 · Updated last year
Alternatives and similar repositories for awesome-foundation-and-multimodal-models:
Users interested in awesome-foundation-and-multimodal-models are comparing it to the libraries listed below.
- From-scratch implementation of a vision language model in pure PyTorch · ⭐ 214 · Updated last year
- ICLR 2024 Spotlight: curation/training code, metadata, distribution and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Expert… · ⭐ 1,428 · Updated last month
- Recipes for shrinking, optimizing, and customizing cutting-edge vision models. · ⭐ 1,417 · Updated last month
- PyTorch implementation of "V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs" · ⭐ 602 · Updated last year
- ⭐ 515 · Updated 5 months ago
- Quick exploration into fine-tuning Florence-2 · ⭐ 309 · Updated 7 months ago
- Projects based on SigLIP (Zhai et al., 2023) and Hugging Face transformers integration 🤗 · ⭐ 231 · Updated 2 months ago
- Famous Vision Language Models and Their Architectures · ⭐ 803 · Updated 2 months ago
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills · ⭐ 739 · Updated last year
- ⭐ 706 · Updated last year
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… · ⭐ 867 · Updated 5 months ago
- 4M: Massively Multimodal Masked Modeling · ⭐ 1,717 · Updated last month
- Official repository for the LENS (Large Language Models Enhanced to See) system. · ⭐ 352 · Updated last year
- A novel implementation fusing ViT with Mamba into a fast, agile, and high-performance multimodal model. Powered by Zeta, the simplest… · ⭐ 448 · Updated last month
- A family of lightweight multimodal models. · ⭐ 1,015 · Updated 5 months ago
- DataComp: In search of the next generation of multimodal datasets · ⭐ 703 · Updated last week
- LLaVA-Interactive-Demo · ⭐ 369 · Updated 9 months ago
- System 2 Reasoning Link Collection · ⭐ 828 · Updated last month
- Hiera: A fast, powerful, and simple hierarchical vision transformer. · ⭐ 977 · Updated last year
- A curated list of foundation models for vision and language tasks · ⭐ 991 · Updated last week
- LLM2CLIP makes SOTA pretrained CLIP models even more SOTA. · ⭐ 508 · Updated last month
- A curated collection of the most exciting and influential CVPR 2023 papers. 🔥 [Paper + Code] · ⭐ 646 · Updated 10 months ago
- A collection of papers on the topic of "Computer Vision in the Wild (CVinW)" · ⭐ 1,285 · Updated last year
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling · ⭐ 867 · Updated this week
- [CVPR 2024] OneLLM: One Framework to Align All Modalities with Language · ⭐ 639 · Updated 6 months ago
- HPT: Open Multimodal LLMs from HyperGAI · ⭐ 315 · Updated 11 months ago
- Code and model checkpoints for the AIMv1 and AIMv2 research projects. · ⭐ 1,275 · Updated last week
- List of resources, libraries, and more for developers who would like to build with off-the-shelf open-source machine learning · ⭐ 199 · Updated last year
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT, and more. · ⭐ 2,837 · Updated last month
- Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing imag… · ⭐ 519 · Updated last year