CuriseJia / ECCV24-FreeStyleRet
Precision Search through Multi-Style Inputs
☆73 · Updated 6 months ago
Alternatives and similar repositories for ECCV24-FreeStyleRet
Users interested in ECCV24-FreeStyleRet are comparing it to the repositories listed below.
- [ECCV 2024] ShareGPT4V: Improving Large Multi-modal Models with Better Captions ☆247 · Updated last year
- Official code for "Modality Curation: Building Universal Embeddings for Advanced Multimodal Information Retrieval" ☆42 · Updated 7 months ago
- [CVPR2025] Official implementation of High Fidelity Scene Text Synthesis. ☆79 · Updated 10 months ago
- Multimodal Open-O1 (MO1) is designed to enhance the accuracy of inference models by utilizing a novel prompt-based approach. This tool wo… ☆29 · Updated last year
- Repository for the MM'23 accepted paper "Curriculum-Listener: Consistency- and Complementarity-Aware Audio-Enhanced Temporal Sentence Groundi… ☆52 · Updated 2 years ago
- ☆56 · Updated 9 months ago
- [CVPR 2024] Dynamic Prompt Optimizing for Text-to-Image Generation ☆85 · Updated last year
- Vision Search Assistant: Empower Vision-Language Models as Multimodal Search Engines ☆129 · Updated last year
- ☆88 · Updated last year
- [NeurIPS 2023] Customize spatial layouts for conditional image synthesis models, e.g., ControlNet, using GPT ☆136 · Updated last year
- [ECCV2024] Towards Reliable Advertising Image Generation Using Human Feedback ☆59 · Updated last year
- LAVIS - A One-stop Library for Language-Vision Intelligence ☆48 · Updated last year
- Official repository for the paper MG-LLaVA: Towards Multi-Granularity Visual Instruction Tuning (https://arxiv.org/abs/2406.17770) ☆159 · Updated last year
- ☆90 · Updated last year
- [ICCV 2025] Explore the Limits of Omni-modal Pretraining at Scale ☆122 · Updated last year
- ☆95 · Updated 11 months ago
- [ICCV2025] A Token-level Text Image Foundation Model for Document Understanding ☆129 · Updated 5 months ago
- The official implementation of our paper "Cockatiel: Ensembling Synthetic and Human Preferenced Training for Detailed Video Caption" ☆38 · Updated 8 months ago
- Official Implementation of OpenING: A Comprehensive Benchmark for Judging Open-ended Interleaved Image-Text Generation ☆37 · Updated 6 months ago
- What Is a Good Caption? A Comprehensive Visual Caption Benchmark for Evaluating Both Correctness and Thoroughness ☆26 · Updated 8 months ago
- [ICLR2025] Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want ☆93 · Updated 2 months ago
- [ECCV 2024] Parrot Captions Teach CLIP to Spot Text ☆66 · Updated last year
- A Simple Framework of Small-scale LMMs for Video Understanding ☆108 · Updated 7 months ago
- [ACL2025 Findings] Migician: Revealing the Magic of Free-Form Multi-Image Grounding in Multimodal Large Language Models ☆90 · Updated 8 months ago
- Image Textualization: An Automatic Framework for Generating Rich and Detailed Image Descriptions (NeurIPS 2024) ☆172 · Updated last year
- WeThink: Toward General-purpose Vision-Language Reasoning via Reinforcement Learning ☆36 · Updated 7 months ago
- ☆160 · Updated last year
- Unified Multi-modal IAA Baseline and Benchmark ☆92 · Updated last year
- [WWW 2025] Official PyTorch Code for "CTR-Driven Advertising Image Generation with Multimodal Large Language Models" ☆61 · Updated 6 months ago
- LMM solved catastrophic forgetting, AAAI 2025 ☆45 · Updated 9 months ago