ChenDelong1999 / MEP-3M
A Large-scale Multi-modal E-Commerce Products Dataset (LTDL@IJCAI-21 Best Dataset & Pattern Recognition 2023)
★26 · Updated last year
Alternatives and similar repositories for MEP-3M:
Users interested in MEP-3M are comparing it to the repositories listed below.
- Official repository of ICCV 2021 - Image Retrieval on Real-life Images with Pre-trained Vision-and-Language Models ★107 · Updated 2 months ago
- [ECCV 2022] FashionViL: Fashion-Focused V+L Representation Learning ★60 · Updated 2 years ago
- Filtering, Distillation, and Hard Negatives for Vision-Language Pre-Training ★134 · Updated last year
- Align and Prompt: Video-and-Language Pre-training with Entity Prompts ★187 · Updated 2 years ago
- A PyTorch implementation of EmpiricalMVM ★40 · Updated last year
- (ACL'2023) MultiCapCLIP: Auto-Encoding Prompts for Zero-Shot Multilingual Visual Captioning ★35 · Updated 6 months ago
- [ICLR 2024] Codes and Models for COSA: Concatenated Sample Pretrained Vision-Language Foundation Model ★41 · Updated last month
- [CVPR 2022 - Demo Track] Effective conditioned and composed image retrieval combining CLIP-based features ★78 · Updated 3 months ago
- 🦩 Visual Instruction Tuning with Polite Flamingo - training multi-modal LLMs to be both clever and polite! (AAAI-24 Oral) ★63 · Updated last year
- ★106 · Updated 2 years ago
- LAVIS - A One-stop Library for Language-Vision Intelligence ★47 · Updated 6 months ago
- [SIGIR 2022] CenterCLIP: Token Clustering for Efficient Text-Video Retrieval. Also, a text-video retrieval toolbox based on CLIP + fast p… ★128 · Updated 2 years ago
- CLIP4IDC: CLIP for Image Difference Captioning (AACL 2022) ★32 · Updated 2 years ago
- Official implementation of the Composed Image Retrieval using Pretrained LANguage Transformers (CIRPLANT) | ICCV 2021 - Image Retrieval o… ★37 · Updated 7 months ago
- ★89 · Updated last year
- Use CLIP to represent video for Retrieval Task ★69 · Updated 3 years ago
- Source code and pre-trained/fine-tuned checkpoint for the NAACL 2021 paper LightningDOT ★73 · Updated 2 years ago
- Official PyTorch implementation of the paper "ProtoCLIP: Prototypical Contrastive Language Image Pretraining" (IEEE TNNLS) ★52 · Updated last year
- Official PyTorch implementation of Clover: Towards A Unified Video-Language Alignment and Fusion Model (CVPR 2023) ★40 · Updated 2 years ago
- Code for the Video Similarity Challenge ★77 · Updated last year
- Code and Models for "GeneCIS: A Benchmark for General Conditional Image Similarity" ★56 · Updated last year
- [CVPR 2023] Code for "Position-guided Text Prompt for Vision-Language Pre-training" ★150 · Updated last year
- COLA: Evaluate how well your vision-language model can Compose Objects Localized with Attributes! ★24 · Updated 2 months ago
- ★44 · Updated 2 years ago
- ★34 · Updated last year
- ★50 · Updated 2 years ago
- Official PyTorch implementation of the paper "DisCo-CLIP: A Distributed Contrastive Loss for Memory Efficient CLIP Training" ★55 · Updated last year
- Code for the paper "CiT: Curation in Training for Effective Vision-Language Data" ★78 · Updated 2 years ago
- A Unified Framework for Video-Language Understanding ★56 · Updated last year
- [CVPR 2023 (Highlight)] FAME-ViL: Multi-Tasking V+L Model for Heterogeneous Fashion Tasks ★52 · Updated last year