adxcreative / COPE
☆15 · Updated last year
Alternatives and similar repositories for COPE
Users interested in COPE are comparing it to the libraries listed below.
- ☆29 · Updated 2 years ago
- Towards Efficient and Effective Text-to-Video Retrieval with Coarse-to-Fine Visual Representation Learning ☆20 · Updated 10 months ago
- Evaluation code and datasets for the ACL 2024 paper, VISTA: Visualized Text Embedding for Universal Multi-Modal Retrieval. The original c… ☆45 · Updated last year
- ☆16 · Updated last year
- Official code for the ICCV 2023 paper "LexLIP: Lexicon-Bottlenecked Language-Image Pre-Training for Large-Scale Image-Text Sparse Retrieval… ☆40 · Updated 2 years ago
- Research code from the Multimodal-Cognition Team at Ant Group ☆169 · Updated 2 months ago
- Product1M ☆90 · Updated 3 years ago
- [ACM MM 2024] Improving Composed Image Retrieval via Contrastive Learning with Scaling Positives and Negatives ☆39 · Updated 3 months ago
- Source code for NoteLLM and NoteLLM-2 ☆130 · Updated 8 months ago
- Multi-domain Recommendation with Adapter Tuning ☆33 · Updated last year
- The dataset for the paper "Why Do We Click: Visual Impression-aware News Recommendation", ACM MM 2021 ☆15 · Updated 3 years ago
- LLaVE: Large Language and Vision Embedding Models with Hardness-Weighted Contrastive Learning ☆73 · Updated 6 months ago
- [ICLR 2023] Code repository for the ICLR'23 paper "Universal Vision-Language Dense Retrieval: Learning A Unified Representation Spa… ☆53 · Updated last year
- [NeurIPS 2025 Spotlight] A Token is Worth over 1,000 Tokens: Efficient Knowledge Distillation through Low-Rank Clone ☆39 · Updated last month
- A collection of visual instruction tuning datasets ☆76 · Updated last year
- mPLUG: Effective and Efficient Vision-Language Learning by Cross-modal Skip-connections (EMNLP 2022) ☆97 · Updated 2 years ago
- The repository for the paper "Personalized Multimodal Response Generation with Large Language Models" ☆17 · Updated last year
- [NeurIPS 2023] Parameter-efficient Tuning of Large-scale Multimodal Foundation Model ☆88 · Updated 2 years ago
- [ACM MM 2025] The official code of "Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs" ☆96 · Updated last week
- Efficient Multimodal Foundation Model Adaptation for Recommendation ☆45 · Updated 2 months ago
- Toward Universal Multimodal Embedding ☆70 · Updated 4 months ago
- [ICCV 2023] ALIP: Adaptive Language-Image Pre-training with Synthetic Caption ☆102 · Updated 2 years ago
- ☆41 · Updated 8 months ago
- ☆62 · Updated 6 months ago
- ☆87 · Updated last year
- [SIGIR 2024] Simple but Effective Raw-Data Level Multimodal Fusion for Composed Image Retrieval ☆43 · Updated last year
- ☆57 · Updated 9 months ago
- All-In-One VLM: Image + Video + Transfer to Other Languages / Domains (TPAMI 2023) ☆166 · Updated last year
- ☆30 · Updated this week
- PMMRec: Multi-Modality is All You Need for Transferable Recommender Systems ☆22 · Updated 2 years ago