facebookresearch / DCI
Densely Captioned Images (DCI) dataset repository.
☆194 · Updated last year
Alternatives and similar repositories for DCI
Users interested in DCI are comparing it to the repositories listed below.
- [CVPR 2024] CapsFusion: Rethinking Image-Text Data at Scale ☆212 · Updated last year
- [NeurIPS 2023] Text data, code and pre-trained models for the paper "Improving CLIP Training with Language Rewrites" ☆286 · Updated last year
- [NeurIPS 2023] Official implementation of the paper "An Inverse Scaling Law for CLIP Training" ☆320 · Updated last year
- [COLM'25] Official implementation of the Law of Vision Representation in MLLMs ☆170 · Updated 2 months ago
- DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception ☆158 · Updated last year
- [ECCV 2024] Official PyTorch implementation of DreamLIP: Language-Image Pre-training with Long Captions ☆136 · Updated 6 months ago
- ☆133 · Updated last year
- Reproducible scaling laws for contrastive language-image learning (https://arxiv.org/abs/2212.07143) ☆181 · Updated 5 months ago
- ☆81 · Updated last year
- Matryoshka Multimodal Models ☆119 · Updated 10 months ago
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ☆331 · Updated last year
- ☆356 · Updated last year
- CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts ☆159 · Updated last year
- [ICML 2025] Official repository of the paper "What If We Recaption Billions of Web Images with LLaMA-3?" ☆143 · Updated last year
- NegCLIP ☆38 · Updated 2 years ago
- Filtering, Distillation, and Hard Negatives for Vision-Language Pre-Training ☆138 · Updated 2 years ago
- ☆69 · Updated last year
- [NeurIPS 2024] Dense Connector for MLLMs ☆180 · Updated last year
- TIFA: Accurate and Interpretable Text-to-Image Faithfulness Evaluation with Question Answering ☆178 · Updated last year
- Evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or … ☆150 · Updated 2 months ago
- Official implementation of "Describing Differences in Image Sets with Natural Language" (CVPR 2024 Oral) ☆129 · Updated last month
- Official implementation of the paper "Prompt Pre-Training with Over Twenty-Thousand Classes for Open-Vocabulary Visual Recognition" ☆259 · Updated last year
- Learning from synthetic data — code and models ☆325 · Updated last year
- PyTorch code for the paper "From CLIP to DINO: Visual Encoders Shout in Multi-modal Large Language Models" ☆205 · Updated 10 months ago
- SVIT: Scaling up Visual Instruction Tuning ☆164 · Updated last year
- When do we not need larger vision models? ☆412 · Updated 9 months ago
- ☆140 · Updated last year
- [ICLR'24] Official implementation of Kosmos-G: Generating Images in Context with Multimodal Large Language Models ☆73 · Updated last year
- [NeurIPS 2023] Official implementation and model release of the paper "What Makes Good Examples for Visual In-Context Learning?" ☆179 · Updated last year
- LLaVA-NeXT-Image-Llama3-Lora, modified from https://github.com/arielnlee/LLaVA-1.6-ft ☆45 · Updated last year