apple / ml-veclip
The official repo for the paper "VeCLIP: Improving CLIP Training via Visual-enriched Captions"
☆244 · Updated 5 months ago
Alternatives and similar repositories for ml-veclip
Users interested in ml-veclip are comparing it to the repositories listed below.
- [NeurIPS 2023] This repository includes the official implementation of our paper "An Inverse Scaling Law for CLIP Training" ☆315 · Updated last year
- E5-V: Universal Embeddings with Multimodal Large Language Models ☆257 · Updated 6 months ago
- Data release for the ImageInWords (IIW) paper. ☆216 · Updated 7 months ago
- When do we not need larger vision models? ☆400 · Updated 5 months ago
- A family of highly capable yet efficient large multimodal models ☆185 · Updated 10 months ago
- Python library to evaluate the robustness of VLMs across diverse benchmarks ☆208 · Updated last week
- Projects based on SigLIP (Zhai et al., 2023) and Hugging Face Transformers integration 🤗 ☆256 · Updated 4 months ago
- This is the official repository for the LENS (Large Language Models Enhanced to See) system. ☆352 · Updated last year
- [CVPR'24 Highlight] PyTorch implementation of Object Recognition as Next Token Prediction ☆180 · Updated 2 months ago
- Repository for the paper "TiC-CLIP: Continual Training of CLIP Models". ☆102 · Updated last year
- ☆179 · Updated 9 months ago
- PyTorch code for hierarchical k-means, a data curation method for self-supervised learning ☆157 · Updated last year
- ☆86 · Updated last year
- Matryoshka Multimodal Models ☆111 · Updated 5 months ago
- ☆58 · Updated last year
- LLaVA-MORE: A Comparative Study of LLMs and Visual Backbones for Enhanced Visual Instruction Tuning ☆139 · Updated 2 months ago
- CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts ☆151 · Updated last year
- Implementation of PALI3 from the paper "PALI-3 Vision Language Models: Smaller, Faster, Stronger" ☆145 · Updated 3 months ago
- An open-source implementation of CLIP (with TULIP support) ☆159 · Updated 2 months ago
- [ECCV 2024] Official PyTorch implementation code for realizing the technical part of Mixture of All Intelligence (MoAI) to improve perfor… ☆323 · Updated last year
- Minimal sharded dataset loaders, decoders, and utilities for multi-modal document, image, and text datasets. ☆158 · Updated last year
- LLaVA-Interactive-Demo ☆374 · Updated 11 months ago
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ☆325 · Updated 11 months ago
- [ICML 2025] This is the official repository of our paper "What If We Recaption Billions of Web Images with LLaMA-3?" ☆136 · Updated last year
- LLaVA-UHD v2: an MLLM Integrating High-Resolution Semantic Pyramid via Hierarchical Window Transformer ☆382 · Updated 2 months ago
- This is the repository for the Photorealistic Unreal Graphics (PUG) datasets for representation learning. ☆237 · Updated last year
- PyTorch code for the paper "From CLIP to DINO: Visual Encoders Shout in Multi-modal Large Language Models" ☆199 · Updated 6 months ago
- ☆64 · Updated last year
- (WACV 2025 Oral) Vision-language conversation in 10 languages including English, Chinese, French, Spanish, Russian, Japanese, Arabic, H… ☆84 · Updated 4 months ago
- Code used for the creation of OBELICS, an open, massive, and curated collection of interleaved image-text web documents containing 141M d… ☆205 · Updated 10 months ago