QQ-MM / QQMM-embed
☆21, updated 2 months ago

Alternatives and similar repositories for QQMM-embed
Users interested in QQMM-embed are comparing it to the libraries listed below.
- Research code for the Multimodal-Cognition Team at Ant Group (☆169, updated 2 months ago)
- TaiSu (太素): a large-scale (hundred-million-sample) Chinese vision-language pre-training dataset (☆189, updated 2 years ago)
- (no description) (☆72, updated 2 years ago)
- (no description) (☆168, updated 2 years ago)
- Youku-mPLUG: a 10-million-scale Chinese video-language pre-training dataset and benchmarks (☆300, updated last year)
- Official code for "Modality Curation: Building Universal Embeddings for Advanced Multimodal Information Retrieval" (☆39, updated 5 months ago)
- Evaluation code and datasets for the ACL 2024 paper "VISTA: Visualized Text Embedding for Universal Multi-Modal Retrieval". The original c… (☆45, updated last year)
- (no description) (☆62, updated 6 months ago)
- [ACM MM 2025] Official code for "Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs" (☆96, updated last week)
- (no description) (☆33, updated last month)
- Toward Universal Multimodal Embedding (☆70, updated 4 months ago)
- [ACL 2025 Oral] MegaPairs: Massive Data Synthesis for Universal Multimodal Retrieval (☆238, updated last month)
- (no description) (☆87, updated last year)
- Lion: Kindling Vision Intelligence within Large Language Models (☆51, updated last year)
- mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video (ICML 2023) (☆228, updated 2 years ago)
- (no description) (☆70, updated 6 months ago)
- (no description) (☆29, updated 2 years ago)
- [WWW 2025] Official PyTorch code for "CTR-Driven Advertising Image Generation with Multimodal Large Language Models" (☆60, updated 4 months ago)
- Precision Search through Multi-Style Inputs (☆73, updated 4 months ago)
- LLaVE: Large Language and Vision Embedding Models with Hardness-Weighted Contrastive Learning (☆73, updated 6 months ago)
- An implementation of "Patch n' Pack: NaViT, a Vision Transformer for any Aspect Ratio and Resolution" (☆270, updated last month)
- Code for the CVPR 2022 paper "Scene Consistency Representation Learning for Video Scene Segmentation" (☆103, updated 2 years ago)
- Narrative movie understanding benchmark (☆77, updated 6 months ago)
- Bling's object detection tool (☆56, updated 2 years ago)
- Chinese version of CLIP, supporting Chinese cross-modal retrieval and representation generation (☆169, updated 3 years ago)
- Video Copy Segment Localization (VCSL) dataset and benchmark [CVPR 2022] (☆131, updated last year)
- (no description) (☆118, updated 2 years ago)
- Search photos on Unsplash with OpenAI's CLIP model; supports joint image+text queries and attention visualization (☆223, updated 4 years ago)
- Hugging Face implementation of the Fine-grained Late-interaction Multi-modal Retriever (☆104, updated 6 months ago)
- Bridging Vision and Language Model (☆285, updated 2 years ago)