ChenDelong1999 / MEP-3M
🎁 A Large-scale Multi-modal E-Commerce Products Dataset (LTDL@IJCAI-21 Best Dataset & Pattern Recognition 2023)
☆38 · Updated last year
Alternatives and similar repositories for MEP-3M
Users interested in MEP-3M are comparing it to the libraries listed below.
- mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video (ICML 2023) ☆228 · Updated 2 years ago
- Filtering, Distillation, and Hard Negatives for Vision-Language Pre-Training ☆138 · Updated 2 years ago
- Lion: Kindling Vision Intelligence within Large Language Models ☆51 · Updated last year
- ☆87 · Updated last year
- Product1M ☆90 · Updated 3 years ago
- ☆256 · Updated 2 years ago
- [CVPR 2022 - Demo Track] Effective conditioned and composed image retrieval combining CLIP-based features ☆82 · Updated last year
- Cross-View Language Modeling: Towards Unified Cross-Lingual Cross-Modal Pre-training (ACL 2023) ☆92 · Updated 2 years ago
- ☆133 · Updated last year
- Touchstone: Evaluating Vision-Language Models by Language Models ☆83 · Updated last year
- All-In-One VLM: Image + Video + Transfer to Other Languages / Domains (TPAMI 2023) ☆166 · Updated last year
- Search photos on Unsplash based on OpenAI's CLIP model, with support for joint image+text queries and attention visualization. ☆223 · Updated 4 years ago
- Use CLIP to represent video for Retrieval Task ☆70 · Updated 4 years ago
- 🦩 Visual Instruction Tuning with Polite Flamingo - training multi-modal LLMs to be both clever and polite! (AAAI-24 Oral) ☆64 · Updated 2 years ago
- (ACL'2023) MultiCapCLIP: Auto-Encoding Prompts for Zero-Shot Multilingual Visual Captioning ☆36 · Updated last year
- [CVPR 2023 (Highlight)] FAME-ViL: Multi-Tasking V+L Model for Heterogeneous Fashion Tasks ☆55 · Updated 2 years ago
- Align and Prompt: Video-and-Language Pre-training with Entity Prompts ☆188 · Updated 7 months ago
- Towards Video Text Visual Question Answering: Benchmark and Baseline ☆40 · Updated last year
- Official PyTorch implementation of the paper "DisCo-CLIP: A Distributed Contrastive Loss for Memory Efficient CLIP Training" ☆58 · Updated 2 years ago
- Official implementation for the paper "Prompt Pre-Training with Over Twenty-Thousand Classes for Open-Vocabulary Visual Recognition" ☆259 · Updated last year
- Code for the Video Similarity Challenge ☆80 · Updated last year
- LLMScore: Unveiling the Power of Large Language Models in Text-to-Image Synthesis Evaluation ☆134 · Updated 2 years ago
- [NeurIPS 2023] Text data, code and pre-trained models for the paper "Improving CLIP Training with Language Rewrites" ☆286 · Updated last year
- ☆110 · Updated 2 years ago
- [ECCV 2022] FashionViL: Fashion-Focused V+L Representation Learning ☆61 · Updated 3 years ago
- TransVCL: Attention-enhanced Video Copy Localization Network with Flexible Supervision [AAAI 2023 Oral] ☆57 · Updated 2 years ago
- Reproducible scaling laws for contrastive language-image learning (https://arxiv.org/abs/2212.07143) ☆183 · Updated 5 months ago
- CLIP Itself is a Strong Fine-tuner: Achieving 85.7% and 88.0% Top-1 Accuracy with ViT-B and ViT-L on ImageNet ☆222 · Updated 2 years ago
- [ICLR 2024] Codes and Models for COSA: Concatenated Sample Pretrained Vision-Language Foundation Model ☆43 · Updated 11 months ago
- Coarse-to-Fine Vision-Language Pre-training with Fusion in the Backbone ☆130 · Updated 2 years ago