RotsteinNoam / FuseCap
FuseCap: Leveraging Large Language Models for Enriched Fused Image Captions
☆55 · Updated last year
Alternatives and similar repositories for FuseCap
Users interested in FuseCap are comparing it to the repositories listed below.
- Filtering, Distillation, and Hard Negatives for Vision-Language Pre-Training ☆138 · Updated 2 years ago
- Official PyTorch implementation of "CompoDiff: Versatile Composed Image Retrieval With Latent Diffusion" (TMLR 2024) ☆87 · Updated 8 months ago
- Code and Models for "GeneCIS: A Benchmark for General Conditional Image Similarity" ☆60 · Updated 2 years ago
- Code base of SynthCLIP: CLIP training with purely synthetic text-image pairs from LLMs and TTIs. ☆100 · Updated 6 months ago
- Davidsonian Scene Graph (DSG) for Text-to-Image Evaluation (ICLR 2024) ☆94 · Updated 10 months ago
- ☆53 · Updated 3 years ago
- ☆133 · Updated last year
- Densely Captioned Images (DCI) dataset repository. ☆192 · Updated last year
- [ECCV 2024] Parrot Captions Teach CLIP to Spot Text ☆65 · Updated last year
- [ICLR 2024] Official code for the paper "LLM Blueprint: Enabling Text-to-Image Generation with Complex and Detailed Prompts" ☆81 · Updated last year
- Official code for "What Makes for Good Visual Tokenizers for Large Language Models?". ☆58 · Updated 2 years ago
- Visual Programming for Text-to-Image Generation and Evaluation (NeurIPS 2023) ☆56 · Updated 2 years ago
- ☆57 · Updated last year
- LLMScore: Unveiling the Power of Large Language Models in Text-to-Image Synthesis Evaluation ☆132 · Updated last year
- Official implementation of our paper "Finetuned Multimodal Language Models are High-Quality Image-Text Data Filters". ☆67 · Updated 5 months ago
- ☆61 · Updated last year
- Official repo for StableLLAVA ☆94 · Updated last year
- ☆72 · Updated last year
- A Unified Framework for Video-Language Understanding ☆59 · Updated 2 years ago
- Reproducible scaling laws for contrastive language-image learning (https://arxiv.org/abs/2212.07143) ☆177 · Updated 3 months ago
- VPEval Codebase from Visual Programming for Text-to-Image Generation and Evaluation (NeurIPS 2023) ☆44 · Updated last year
- [NeurIPS 2023] Official implementation and model release of the paper "What Makes Good Examples for Visual In-Context Learning?" ☆177 · Updated last year
- [CVPR 2023 (Highlight)] FAME-ViL: Multi-Tasking V+L Model for Heterogeneous Fashion Tasks ☆55 · Updated 2 years ago
- [ICML 2025] This is the official repository of our paper "What If We Recaption Billions of Web Images with LLaMA-3?" ☆142 · Updated last year
- Matryoshka Multimodal Models ☆111 · Updated 8 months ago
- Training code for CLIP-FlanT5 ☆29 · Updated last year
- Using LLMs and pre-trained caption models for super-human performance on image captioning. ☆42 · Updated last year
- [NeurIPS 2024] This is the official implementation of the paper "DeepStack: Deeply Stacking Visual Tokens is Surprisingly Simple and Effect… ☆57 · Updated last year
- [ICLR 2023] Contrastive Alignment of Vision to Language Through Parameter-Efficient Transfer Learning ☆40 · Updated 2 years ago
- VideoCC is a dataset containing (video-URL, caption) pairs for training video-text machine learning models. It is created using an automa… ☆78 · Updated 2 years ago