RotsteinNoam / FuseCap
FuseCap: Leveraging Large Language Models for Enriched Fused Image Captions
☆55Updated last year
Alternatives and similar repositories for FuseCap
Users interested in FuseCap are comparing it to the repositories listed below.
- Official PyTorch implementation of "CompoDiff: Versatile Composed Image Retrieval With Latent Diffusion" (TMLR 2024)☆85Updated 5 months ago
- Code and Models for "GeneCIS: A Benchmark for General Conditional Image Similarity"☆59Updated 2 years ago
- ☆51Updated 2 years ago
- Filtering, Distillation, and Hard Negatives for Vision-Language Pre-Training☆137Updated 2 years ago
- Code base of SynthCLIP: CLIP training with purely synthetic text-image pairs from LLMs and TTIs.☆100Updated 3 months ago
- Davidsonian Scene Graph (DSG) for Text-to-Image Evaluation (ICLR 2024)☆90Updated 7 months ago
- LLMScore: Unveiling the Power of Large Language Models in Text-to-Image Synthesis Evaluation☆132Updated last year
- Training code for CLIP-FlanT5☆26Updated 11 months ago
- Official implementation of our paper "Finetuned Multimodal Language Models are High-Quality Image-Text Data Filters".☆62Updated 3 months ago
- [ECCV 2024] Parrot Captions Teach CLIP to Spot Text☆66Updated 10 months ago
- ☆59Updated last year
- Official repo for StableLLAVA☆95Updated last year
- [ICML 2025] This is the official repository of our paper "What If We Recaption Billions of Web Images with LLaMA-3?"☆136Updated last year
- Visual Programming for Text-to-Image Generation and Evaluation (NeurIPS 2023)☆56Updated last year
- [ICLR 2024] Official code for the paper "LLM Blueprint: Enabling Text-to-Image Generation with Complex and Detailed Prompts"☆81Updated last year
- [ICLR 2023] Contrastive Alignment of Vision to Language Through Parameter-Efficient Transfer Learning☆39Updated last year
- Code for the paper titled "CiT: Curation in Training for Effective Vision-Language Data".☆78Updated 2 years ago
- https://arxiv.org/abs/2209.15162☆50Updated 2 years ago
- ☆133Updated last year
- [NeurIPS2023] Official implementation and model release of the paper "What Makes Good Examples for Visual In-Context Learning?"☆177Updated last year
- Densely Captioned Images (DCI) dataset repository.☆186Updated last year
- [CVPR 2023] The official dataset of Advancing Visual Grounding with Scene Knowledge: Benchmark and Method.☆31Updated 2 years ago
- ☆71Updated last year
- Matryoshka Multimodal Models☆111Updated 5 months ago
- [ICCV 2023] - Composed Image Retrieval on Common Objects in context (CIRCO) dataset☆69Updated 11 months ago
- Reproducible scaling laws for contrastive language-image learning (https://arxiv.org/abs/2212.07143)☆169Updated 3 weeks ago
- (ACL'2023) MultiCapCLIP: Auto-Encoding Prompts for Zero-Shot Multilingual Visual Captioning☆35Updated 11 months ago
- ☆46Updated last year
- [CVPR 2024] CapsFusion: Rethinking Image-Text Data at Scale☆211Updated last year
- Official code for "What Makes for Good Visual Tokenizers for Large Language Models?".☆58Updated 2 years ago