UCSC-VLAA / Recap-DataComp-1B
This is the official repository of our paper "What If We Recaption Billions of Web Images with LLaMA-3?" ☆128
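As a hedged illustration (not taken from this repository's documentation), the recaptioned data is assumed here to be published on the Hugging Face Hub under `UCSC-VLAA/Recap-DataComp-1B`; a minimal sketch of streaming a few records with the `datasets` library, inspecting the available fields at runtime instead of assuming column names, might look like this:

```python
# Minimal sketch. Assumptions: the recaptioned corpus is hosted on the
# Hugging Face Hub as "UCSC-VLAA/Recap-DataComp-1B"; field names are not
# assumed, they are inspected from the first few streamed records.
from itertools import islice

from datasets import load_dataset  # pip install datasets

# Stream the split so the billion-scale corpus is not downloaded in full.
ds = load_dataset("UCSC-VLAA/Recap-DataComp-1B", split="train", streaming=True)

# Peek at a handful of records to see which fields (e.g. original caption
# vs. LLaMA-3 recaption) are available.
for example in islice(ds, 3):
    print(sorted(example.keys()))
    print(example)
```

Streaming keeps the example self-contained; for real training one would typically shard and download the parquet files instead.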
Alternatives and similar repositories for Recap-DataComp-1B:
Users interested in Recap-DataComp-1B are comparing it to the repositories listed below.
- Matryoshka Multimodal Models ☆96
- [ICLR 2025] Source code for paper "A Spark of Vision-Language Intelligence: 2-Dimensional Autoregressive Transformer for Efficient Finegr… ☆66
- [CVPR 2024] CapsFusion: Rethinking Image-Text Data at Scale ☆203
- Davidsonian Scene Graph (DSG) for Text-to-Image Evaluation (ICLR 2024) ☆84
- [TMLR] Public code repo for paper "A Single Transformer for Scalable Vision-Language Modeling" ☆127
- Densely Captioned Images (DCI) dataset repository. ☆168
- Official implementation of the Law of Vision Representation in MLLMs ☆149
- Official repo for StableLLAVA ☆94
- TIFA: Accurate and Interpretable Text-to-Image Faithfulness Evaluation with Question Answering ☆146
- Official implementation of our paper "Finetuned Multimodal Language Models are High-Quality Image-Text Data Filters". ☆44
- DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception ☆133
- Visual Programming for Text-to-Image Generation and Evaluation (NeurIPS 2023) ☆54
- This repo contains evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… ☆115
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment ☆40
- [CVPR 2024] Prompt Highlighter: Interactive Control for Multi-Modal LLMs ☆138
- CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts ☆139
- Code base of SynthCLIP: CLIP training with purely synthetic text-image pairs from LLMs and TTIs. ☆91
- [ICLR 2025] AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark ☆73
- EVE Series: Encoder-Free Vision-Language Models from BAAI ☆290
- [ICLR 2024] Official code for the paper "LLM Blueprint: Enabling Text-to-Image Generation with Complex and Detailed Prompts" ☆73
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture ☆189
- [ICLR 2025] HQ-Edit: A High-Quality and High-Coverage Dataset for General Image Editing ☆86
- ✨✨Beyond LLaVA-HD: Diving into High-Resolution Large Multimodal Models ☆149
- 🦾 EvalGIM (pronounced as "EvalGym") is an evaluation library for generative image models. It enables easy-to-use, reproducible automatic… ☆67
- Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large Language Models ☆128