gregor-ge / mBLIP
☆86 · Updated last year
Alternatives and similar repositories for mBLIP
Users who are interested in mBLIP are comparing it to the repositories listed below.
- ☆64 · Updated last year
- (WACV 2025 - Oral) Vision-language conversation in 10 languages including English, Chinese, French, Spanish, Russian, Japanese, Arabic, H… ☆84 · Updated 4 months ago
- [ACL 2024 Findings & ICLR 2024 WS] An Evaluator VLM that is open-source, offers reproducible evaluation, and is inexpensive to use. Specific… ☆73 · Updated 10 months ago
- How Good is Google Bard's Visual Understanding? An Empirical Study on Open Challenges ☆29 · Updated last year
- A huge dataset for Document Visual Question Answering ☆19 · Updated 11 months ago
- E5-V: Universal Embeddings with Multimodal Large Language Models ☆257 · Updated 6 months ago
- ☆85 · Updated 2 years ago
- M4 experiment logbook ☆58 · Updated last year
- Official repository for the General Robust Image Task (GRIT) Benchmark ☆54 · Updated 2 years ago
- Official codebase for the paper "CHAMPAGNE: Learning Real-world Conversation from Large-Scale Web Videos" (ICCV 23) ☆52 · Updated last year
- Filtering, Distillation, and Hard Negatives for Vision-Language Pre-Training ☆137 · Updated 2 years ago
- Implementation for the CVPR 2023 paper "Improving Selective Visual Question Answering by Learning from Your Peers" (https://arxiv.org/abs… ☆25 · Updated last year
- Matryoshka Multimodal Models ☆111 · Updated 5 months ago
- CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts ☆151 · Updated last year
- Official implementation of the paper "Finetuned Multimodal Language Models are High-Quality Image-Text Data Filters" ☆62 · Updated 3 months ago
- FuseCap: Leveraging Large Language Models for Enriched Fused Image Captions ☆55 · Updated last year
- A public repository for Image Clustering Conditioned on Text Criteria (IC|TC) ☆90 · Updated last year
- ☆29 · Updated 2 years ago
- ☆76 · Updated 8 months ago
- SlideVQA: A Dataset for Document Visual Question Answering on Multiple Images (AAAI 2023) ☆90 · Updated 3 months ago
- The official repo for the paper "VeCLIP: Improving CLIP Training via Visual-enriched Captions" ☆244 · Updated 5 months ago
- Implementation of the DeepMind Flamingo vision-language model, based on Hugging Face language models and ready for training ☆167 · Updated 2 years ago
- Code used for the creation of OBELICS, an open, massive, and curated collection of interleaved image-text web documents, containing 141M d… ☆205 · Updated 10 months ago
- [Under Review] Official PyTorch implementation code for realizing the technical part of Phantom of Latent, equipped with enla… ☆60 · Updated 9 months ago
- Code for the paper "CiT: Curation in Training for Effective Vision-Language Data" ☆78 · Updated 2 years ago
- Official implementation of "Gemini in Reasoning: Unveiling Commonsense in Multimodal Large Language Models" ☆36 · Updated last year
- ☆50 · Updated last year
- Dataset introduced in PlotQA: Reasoning over Scientific Plots ☆78 · Updated 2 years ago
- Implementation of PALI3 from the paper "PALI-3 Vision Language Models: Smaller, Faster, Stronger" ☆145 · Updated 3 months ago
- Democratization of "PaLI: A Jointly-Scaled Multilingual Language-Image Model" ☆92 · Updated last year