SkalskiP / awesome-foundation-and-multimodal-models
👁️ + 💬 + 🎧 = 🤖 Curated list of top foundation and multimodal models! [Paper + Code + Examples + Tutorials]
⭐ 636 · Updated last year
Alternatives and similar repositories for awesome-foundation-and-multimodal-models
Users interested in awesome-foundation-and-multimodal-models are comparing it to the repositories listed below.
- This repository is a curated collection of the most exciting and influential CVPR 2024 papers. 🔥 [Paper + Code + Demo] · ⭐ 742 · Updated 7 months ago
- ⭐ 716 · Updated last year
- From-scratch implementation of a vision language model in pure PyTorch · ⭐ 253 · Updated last year
- Recipes for shrinking, optimizing, and customizing cutting-edge vision models · ⭐ 1,853 · Updated last week
- Quick exploration into fine-tuning Florence-2 · ⭐ 339 · Updated last year
- A novel implementation of fusing ViT with Mamba into a fast, agile, and high-performance multimodal model. Powered by Zeta, the simplest… · ⭐ 462 · Updated 2 months ago
- ⭐ 228 · Updated 2 years ago
- This is the official repository for the LENS (Large Language Models Enhanced to See) system. · ⭐ 356 · Updated 5 months ago
- List of resources, libraries, and more for developers who would like to build with off-the-shelf open-source machine learning · ⭐ 198 · Updated last year
- 4M: Massively Multimodal Masked Modeling · ⭐ 1,783 · Updated 7 months ago
- Streamline the fine-tuning process for multimodal models: PaliGemma 2, Florence-2, and Qwen2.5-VL · ⭐ 2,651 · Updated this week
- This repository is a curated collection of the most exciting and influential CVPR 2023 papers. 🔥 [Paper + Code] · ⭐ 652 · Updated 7 months ago
- This repo is the home base of a community-driven course on Computer Vision with Neural Networks. Feel free to join us on the Hugging Face… · ⭐ 766 · Updated 2 months ago
- AI assistant that can query visual datasets, search the FiftyOne docs, and answer general computer vision questions · ⭐ 250 · Updated last year
- Projects based on SigLIP (Zhai et al., 2023) and Hugging Face transformers integration 🤗 · ⭐ 297 · Updated 10 months ago
- This repository provides the code and model checkpoints for the AIMv1 and AIMv2 research projects. · ⭐ 1,394 · Updated 5 months ago
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills · ⭐ 764 · Updated last year
- Place where folks can contribute to 🤗 community events · ⭐ 428 · Updated 2 years ago
- LLaVA-Interactive-Demo · ⭐ 380 · Updated last year
- NeurIPS 2025 Spotlight; ICLR 2024 Spotlight; CVPR 2024; EMNLP 2024 · ⭐ 1,801 · Updated last month
- Notebooks for fine-tuning PaliGemma · ⭐ 117 · Updated 9 months ago
- LoRA and DoRA from Scratch Implementations · ⭐ 215 · Updated last year
- Large Language Model (LLM) Inference API and Chatbot · ⭐ 127 · Updated last year
- All the projects related to Llama · ⭐ 381 · Updated 9 months ago
- PyTorch Implementation of "V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs" · ⭐ 684 · Updated 2 years ago
- MetaSeg: Packaged version of the Segment Anything repository · ⭐ 986 · Updated this week
- Computer Vision dataset analysis · ⭐ 310 · Updated last year
- Toolkit for attaching, training, saving and loading of new heads for transformer models · ⭐ 294 · Updated 10 months ago
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… · ⭐ 939 · Updated 5 months ago
- A curated list of papers that released datasets along with their work · ⭐ 126 · Updated last year