SkalskiP / awesome-foundation-and-multimodal-models
👁️ + 💬 + 🧠 = 🤖 Curated list of top foundation and multimodal models! [Paper + Code + Examples + Tutorials]
⭐ 607 · Updated last year
Alternatives and similar repositories for awesome-foundation-and-multimodal-models:
Users interested in awesome-foundation-and-multimodal-models are comparing it to the repositories listed below.
- Recipes for shrinking, optimizing, and customizing cutting-edge vision models. ⭐ 1,260 · Updated 2 weeks ago
- ICLR 2024 Spotlight: curation/training code, metadata, distribution, and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Expert… ⭐ 1,360 · Updated 3 months ago
- A curated collection of the most exciting and influential CVPR 2024 papers. 🔥 [Paper + Code + Demo] ⭐ 704 · Updated 8 months ago
- From-scratch implementation of a vision-language model in pure PyTorch. ⭐ 199 · Updated 10 months ago
- A novel implementation fusing ViT with Mamba into a fast, agile, and high-performance multimodal model. Powered by Zeta, the simplest… ⭐ 447 · Updated last week
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills. ⭐ 726 · Updated last year
- PyTorch implementation of "V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs". ⭐ 571 · Updated last year
- Famous vision-language models and their architectures. ⭐ 690 · Updated 2 weeks ago
- ⭐ 707 · Updated last year
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses that… ⭐ 840 · Updated 3 months ago
- VisionLLM Series. ⭐ 1,017 · Updated last week
- ⭐ 500 · Updated 4 months ago
- System 2 Reasoning Link Collection. ⭐ 804 · Updated last month
- A collection of papers on the topic of "Computer Vision in the Wild (CVinW)". ⭐ 1,257 · Updated 11 months ago
- The official repository for the LENS (Large Language Models Enhanced to See) system. ⭐ 352 · Updated last year
- Hiera: A fast, powerful, and simple hierarchical vision transformer. ⭐ 956 · Updated last year
- 4M: Massively Multimodal Masked Modeling. ⭐ 1,691 · Updated this week
- An open-source toolkit for LLM development. ⭐ 2,760 · Updated last month
- Streamlines the fine-tuning process for multimodal models: PaliGemma 2, Florence-2, and Qwen2.5-VL. ⭐ 2,430 · Updated this week
- A list of resources, libraries, and more for developers who would like to build with off-the-shelf open-source machine learning. ⭐ 199 · Updated 11 months ago
- A quick exploration into fine-tuning Florence-2. ⭐ 303 · Updated 5 months ago
- Projects based on SigLIP (Zhai et al., 2023) and its Hugging Face transformers integration 🤗 (see the loading sketch after this list). ⭐ 214 · Updated 2 weeks ago
- Code and model checkpoints for the AIMv1 and AIMv2 research projects. ⭐ 1,227 · Updated 3 months ago
- A curated list of foundation models for vision and language tasks. ⭐ 953 · Updated 2 weeks ago
- Research trends in LLM-guided multimodal learning. ⭐ 357 · Updated last year
- Official implementation of "Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling". ⭐ 849 · Updated 2 weeks ago
- DataComp: In search of the next generation of multimodal datasets. ⭐ 684 · Updated last year
- LLaVA-Interactive-Demo. ⭐ 365 · Updated 7 months ago
- Multimodal-GPT. ⭐ 1,493 · Updated last year
- Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side by side while providing image… ⭐ 503 · Updated 10 months ago
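
Since the SigLIP entry above points to the Hugging Face transformers integration, here is a minimal sketch of zero-shot image classification with it. The checkpoint name and the sample image URL are illustrative assumptions, not taken from the list itself; adjust them for your setup.

```python
# Minimal sketch: zero-shot classification with SigLIP via Hugging Face
# transformers. Assumes transformers >= 4.37 and the public
# "google/siglip-base-patch16-224" checkpoint (an illustrative choice).
import requests
import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor

checkpoint = "google/siglip-base-patch16-224"
model = AutoModel.from_pretrained(checkpoint)
processor = AutoProcessor.from_pretrained(checkpoint)

# Sample COCO image (assumed reachable) and candidate captions.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["a photo of 2 cats", "a photo of a dog"]

# SigLIP was trained with padding="max_length"; use the same at inference.
inputs = processor(text=texts, images=image, padding="max_length", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Unlike CLIP's softmax over captions, SigLIP scores each (image, text)
# pair independently with a sigmoid, so probabilities need not sum to 1.
probs = torch.sigmoid(outputs.logits_per_image)
for text, prob in zip(texts, probs[0]):
    print(f"{prob:.1%} that the image matches '{text}'")
```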