ChenDelong1999 / MEP-3M
🎁 A Large-scale Multi-modal E-Commerce Products Dataset (LTDL@IJCAI-21 Best Dataset & Pattern Recognition 2023)
☆38 · Updated 2 years ago
Alternatives and similar repositories for MEP-3M
Users interested in MEP-3M are comparing it to the repositories listed below.
- Use CLIP to represent videos for retrieval tasks ☆70 · Updated 4 years ago
- Code for the Video Similarity Challenge ☆81 · Updated last year
- mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video (ICML 2023) ☆228 · Updated 2 years ago
- Align and Prompt: Video-and-Language Pre-training with Entity Prompts ☆188 · Updated 7 months ago
- Cross-View Language Modeling: Towards Unified Cross-Lingual Cross-Modal Pre-training (ACL 2023) ☆92 · Updated 2 years ago
- Filtering, Distillation, and Hard Negatives for Vision-Language Pre-Training ☆139 · Updated 2 weeks ago
- (ACL 2023) MultiCapCLIP: Auto-Encoding Prompts for Zero-Shot Multilingual Visual Captioning ☆36 · Updated last year
- Product1M ☆90 · Updated 3 years ago
- [CVPR 2022 Demo Track] Effective conditioned and composed image retrieval combining CLIP-based features ☆82 · Updated last year
- Lion: Kindling Vision Intelligence within Large Language Models ☆51 · Updated last year
- All-In-One VLM: Image + Video + Transfer to Other Languages / Domains (TPAMI 2023) ☆166 · Updated last year
- ☆258 · Updated 3 years ago
- Official repository of the ICCV 2021 paper "Image Retrieval on Real-life Images with Pre-trained Vision-and-Language Models" ☆128 · Updated 2 months ago
- Search photos on Unsplash based on OpenAI's CLIP model; supports search with joint image+text queries and attention visualization (see the sketch after this list) ☆223 · Updated 4 years ago
- [SIGIR 2022] CenterCLIP: Token Clustering for Efficient Text-Video Retrieval ☆134 · Updated 3 years ago
- ☆110 · Updated 3 years ago
- [ICLR 2024] Code and models for COSA: Concatenated Sample Pretrained Vision-Language Foundation Model ☆43 · Updated last year
- TransVCL: Attention-enhanced Video Copy Localization Network with Flexible Supervision [AAAI 2023 Oral] ☆57 · Updated 2 years ago
- 🦩 Visual Instruction Tuning with Polite Flamingo - training multi-modal LLMs to be both clever and polite! (AAAI-24 Oral) ☆64 · Updated 2 years ago
- ECCV 2020 paper: Fashion Captioning: Towards Generating Accurate Descriptions with Semantic Rewards. Code and data ☆86 · Updated 2 years ago
- LLMScore: Unveiling the Power of Large Language Models in Text-to-Image Synthesis Evaluation ☆134 · Updated 2 years ago
- UniTAB: Unifying Text and Box Outputs for Grounded VL Modeling, ECCV 2022 (Oral Presentation) ☆89 · Updated 2 years ago
- [NeurIPS 2023] Text data, code and pre-trained models for the paper "Improving CLIP Training with Language Rewrites" ☆285 · Updated last year
- Reproducible scaling laws for contrastive language-image learning (https://arxiv.org/abs/2212.07143) ☆185 · Updated 6 months ago
- CapDec: SOTA Zero-Shot Image Captioning Using CLIP and GPT-2, EMNLP 2022 (Findings) ☆202 · Updated last year
- Official code for "Bridging Video-text Retrieval with Multiple Choice Questions", CVPR 2022 (Oral) ☆141 · Updated 3 years ago
- [ECCV 2022] FashionViL: Fashion-Focused V+L Representation Learning ☆61 · Updated 3 years ago
- ☆87 · Updated last year
- [CVPR 2023 (Highlight)] FAME-ViL: Multi-Tasking V+L Model for Heterogeneous Fashion Tasks ☆55 · Updated 2 years ago
- ☆47 · Updated 3 years ago
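Several entries above (the video-retrieval item at the top, the Unsplash search tool, CenterCLIP) build on the same CLIP retrieval pattern: embed queries and gallery items into a shared space and rank by cosine similarity. Below is a minimal illustrative sketch of that pattern using the Hugging Face `transformers` CLIP API; the checkpoint name, file paths, and the embedding-averaging scheme for joint image+text queries are assumptions for illustration, not code from any listed repository.

```python
# Minimal sketch of CLIP-based retrieval with a joint image+text query.
# Assumptions (not from any repo above): the standard Hugging Face
# "openai/clip-vit-base-patch32" checkpoint, hypothetical image paths,
# and a simple average of the two normalized query embeddings.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed_texts(texts):
    """Return L2-normalized CLIP text embeddings."""
    inputs = processor(text=texts, return_tensors="pt", padding=True)
    with torch.no_grad():
        feats = model.get_text_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

def embed_images(images):
    """Return L2-normalized CLIP image embeddings."""
    inputs = processor(images=images, return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

# Gallery to search over (hypothetical file names).
gallery_paths = ["photo1.jpg", "photo2.jpg", "photo3.jpg"]
gallery = embed_images([Image.open(p) for p in gallery_paths])

# Joint image+text query: average the normalized text and reference-image
# embeddings, re-normalize, then rank the gallery by cosine similarity.
query = embed_texts(["a red handbag"])[0] + embed_images([Image.open("ref.jpg")])[0]
query = query / query.norm()
scores = gallery @ query  # cosine similarities, shape (len(gallery),)
for rank, idx in enumerate(scores.argsort(descending=True).tolist()):
    print(f"{rank + 1}. {gallery_paths[idx]} (score {scores[idx]:.3f})")
```

Averaging the two normalized embeddings is the simplest way to combine a text and an image query; the composed-retrieval projects listed above (e.g. the CVPR 2022 demo-track entry) learn more sophisticated combination functions.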