SkalskiP / awesome-foundation-and-multimodal-models
👁️ + 💬 + 🎧 = 🤖 Curated list of top foundation and multimodal models! [Paper + Code + Examples + Tutorials]
⭐624 · Updated last year
Alternatives and similar repositories for awesome-foundation-and-multimodal-models
Users interested in awesome-foundation-and-multimodal-models are comparing it to the libraries listed below.
- From-scratch implementation of a vision language model in pure PyTorch ⭐227 · Updated last year
- Recipes for shrinking, optimizing, and customizing cutting-edge vision models. 💜 ⭐1,520 · Updated this week
- ⭐710 · Updated last year
- This repository is a curated collection of the most exciting and influential CVPR 2024 papers. 🔥 [Paper + Code + Demo] ⭐732 · Updated last month
- A novel implementation fusing ViT with Mamba into a fast, agile, high-performance multimodal model. Powered by Zeta, the simplest… ⭐453 · Updated last month
- List of resources, libraries, and more for developers who want to build with off-the-shelf open-source machine learning ⭐198 · Updated last year
- This repo is the home base of a community-driven course on Computer Vision with Neural Networks. Feel free to join us on the Hugging Face … ⭐663 · Updated this week
- Streamlines the fine-tuning process for multimodal models: PaliGemma 2, Florence-2, and Qwen2.5-VL ⭐2,592 · Updated this week
- AI assistant that can query visual datasets, search the FiftyOne docs, and answer general computer vision questions ⭐246 · Updated 7 months ago
- Quick exploration into fine-tuning Florence-2 ⭐322 · Updated 9 months ago
- This repository is a curated collection of the most exciting and influential CVPR 2023 papers. 🔥 [Paper + Code] ⭐653 · Updated last month
- 4M: Massively Multimodal Masked Modeling ⭐1,742 · Updated last month
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ⭐749 · Updated last year
- ICLR 2024 Spotlight: curation/training code, metadata, distribution, and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Expert… ⭐1,474 · Updated this week
- This is the official repository for the LENS (Large Language Models Enhanced to See) system. ⭐352 · Updated last year
- Projects based on SigLIP (Zhai et al., 2023) and its Hugging Face transformers integration 🤗 (see the loading sketch after this list) ⭐256 · Updated 4 months ago
- ⭐223 · Updated last year
- PyTorch implementation of "V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs" ⭐644 · Updated last year
- Famous Vision Language Models and Their Architectures ⭐908 · Updated 4 months ago
- Toolkit for attaching, training, saving, and loading new heads for transformer models ⭐282 · Updated 4 months ago
- The official repo for the paper "VeCLIP: Improving CLIP Training via Visual-enriched Captions" ⭐244 · Updated 5 months ago
- This repository provides the code and model checkpoints for the AIMv1 and AIMv2 research projects. ⭐1,323 · Updated 2 months ago
- ⭐447 · Updated last year
- Recipes for learning, fine-tuning, and adapting ColPali to your multimodal RAG use cases. 👨🏻‍🍳 ⭐315 · Updated last month
- LoRA and DoRA from-scratch implementations (see the minimal LoRA sketch after this list) ⭐206 · Updated last year
- Place where folks can contribute to 🤗 community events ⭐424 · Updated last year
- All the projects related to Llama ⭐380 · Updated 3 months ago
- Minimal example scripts of the Hugging Face Trainer, focused on staying under 150 lines (see the sketch after this list) ⭐197 · Updated last year
- Each week I create sketches covering key computer vision concepts. If you want to learn more about CV, stick around! ⭐147 · Updated 2 years ago
- Automatically evaluate your LLMs in Google Colab ⭐649 · Updated last year
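
A few of the entries above lend themselves to short illustrations. For the SigLIP entry, here is a minimal sketch of zero-shot image classification through the Hugging Face transformers integration; the model id, image path, and label prompts are illustrative assumptions, not code from the listed repository.

```python
# Minimal sketch: zero-shot image classification with SigLIP via transformers.
# Model id, image path, and labels are illustrative assumptions.
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModel

processor = AutoProcessor.from_pretrained("google/siglip-base-patch16-224")
model = AutoModel.from_pretrained("google/siglip-base-patch16-224")

image = Image.open("cat.jpg")  # any local image (assumed path)
labels = ["a photo of a cat", "a photo of a dog"]

inputs = processor(text=labels, images=image,
                   padding="max_length", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# SigLIP scores image-text pairs with a sigmoid rather than CLIP's softmax,
# so each label probability is independent of the others.
probs = torch.sigmoid(outputs.logits_per_image)
print(dict(zip(labels, probs[0].tolist())))
```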
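For the LoRA and DoRA from-scratch entry, this is a minimal sketch of the LoRA idea only: a frozen pretrained linear layer plus a trainable low-rank update scaled by alpha/r. The class and parameter names are hypothetical, not the listed repository's code.

```python
# Minimal sketch of LoRA: frozen base weights plus a trainable low-rank
# update W + (alpha/r) * B @ A. Names here are hypothetical.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # freeze the pretrained layer
        self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The update starts at zero because lora_b is zero-initialized,
        # so training begins exactly at the pretrained function.
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scaling

layer = LoRALinear(nn.Linear(512, 512))
out = layer(torch.randn(2, 512))  # only lora_a / lora_b receive gradients
```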
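And for the minimal Trainer-scripts entry, a small self-contained sketch of the Hugging Face Trainer API on a text-classification subset; the checkpoint, dataset, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch of the Hugging Face Trainer on a small IMDB subset.
# Checkpoint, dataset, and hyperparameters are illustrative choices.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

dataset = load_dataset("imdb")

def tokenize(batch):
    # Pad to a fixed length so the default collator can batch examples.
    return tokenizer(batch["text"], truncation=True, padding="max_length")

dataset = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out",
                           per_device_train_batch_size=8,
                           num_train_epochs=1),
    train_dataset=dataset["train"].shuffle(seed=42).select(range(1000)),
    eval_dataset=dataset["test"].select(range(500)),
)
trainer.train()
```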