ChenDelong1999 / MEP-3M
A Large-scale Multi-modal E-Commerce Products Dataset (LTDL@IJCAI-21 Best Dataset & Pattern Recognition 2023)
☆27 · Updated last year
Alternatives and similar repositories for MEP-3M:
Users that are interested in MEP-3M are comparing it to the libraries listed below
- Filtering, Distillation, and Hard Negatives for Vision-Language Pre-Training ☆136 · Updated 2 years ago
- UniTAB: Unifying Text and Box Outputs for Grounded VL Modeling, ECCV 2022 (Oral Presentation) ☆85 · Updated last year
- Official repository of ICCV 2021 - Image Retrieval on Real-life Images with Pre-trained Vision-and-Language Models ☆107 · Updated 3 months ago
- Sparkles: Unlocking Chats Across Multiple Images for Multimodal Instruction-Following Models ☆43 · Updated 9 months ago
- CLIP4IDC: CLIP for Image Difference Captioning (AACL 2022) ☆34 · Updated 2 years ago
- Code and Models for "GeneCIS: A Benchmark for General Conditional Image Similarity" ☆56 · Updated last year
- Align and Prompt: Video-and-Language Pre-training with Entity Prompts ☆185 · Updated 2 years ago
- Command-line tool for downloading and extending the RedCaps dataset. ☆46 · Updated last year
- Official repository for the General Robust Image Task (GRIT) Benchmark ☆52 · Updated last year
- ☆30 · Updated 2 years ago
- Cross-View Language Modeling: Towards Unified Cross-Lingual Cross-Modal Pre-training (ACL 2023) ☆90 · Updated last year
- [ECCV 2022] FashionViL: Fashion-Focused V+L Representation Learning ☆61 · Updated 2 years ago
- Use CLIP to represent video for the retrieval task ☆69 · Updated 4 years ago
- A Unified Framework for Video-Language Understanding ☆57 · Updated last year
- PyTorch code for "VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks" (CVPR 2022) ☆204 · Updated 2 years ago
- 🦩 Visual Instruction Tuning with Polite Flamingo - training multi-modal LLMs to be both clever and polite! (AAAI-24 Oral) ☆64 · Updated last year
- MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning ☆136 · Updated last year
- [SIGIR 2022] CenterCLIP: Token Clustering for Efficient Text-Video Retrieval. Also, a text-video retrieval toolbox based on CLIP + fast p… ☆130 · Updated 2 years ago
- (ACL 2023) MultiCapCLIP: Auto-Encoding Prompts for Zero-Shot Multilingual Visual Captioning ☆35 · Updated 7 months ago
- [ICLR 2024] Codes and Models for COSA: Concatenated Sample Pretrained Vision-Language Foundation Model ☆43 · Updated 2 months ago
- ☆26 · Updated 3 years ago
- [ECCV 2022] Contrastive Vision-Language Pre-training with Limited Resources ☆44 · Updated 2 years ago
- [BMVC 2022] Official implementation of ViCHA: "Efficient Vision-Language Pretraining with Visual Concepts and Hierarchical Alignment" ☆54 · Updated 2 years ago
- VideoCC is a dataset containing (video-URL, caption) pairs for training video-text machine learning models. It is created using an automa… ☆77 · Updated 2 years ago
- Code for the Video Similarity Challenge ☆77 · Updated last year
- [ICLR 2024] The official implementation of the paper "UniAdapter: Unified Parameter-Efficient Transfer Learning for Cross-modal Modeling", by … ☆73 · Updated last year
- ☆108 · Updated 2 years ago
- PyTorch code for "TVLT: Textless Vision-Language Transformer" (NeurIPS 2022 Oral) ☆123 · Updated 2 years ago
- Official PyTorch implementation of the paper "ProtoCLIP: Prototypical Contrastive Language Image Pretraining" (IEEE TNNLS) ☆52 · Updated last year
- [CVPR 2023 (Highlight)] FAME-ViL: Multi-Tasking V+L Model for Heterogeneous Fashion Tasks ☆53 · Updated last year