BAAI-WuDao / BriVL
Bridging Vision and Language Model
☆285 · Updated 2 years ago
Alternatives and similar repositories for BriVL
Users interested in BriVL are comparing it to the repositories listed below
- Enriching MS-COCO with Chinese sentences and tags for cross-lingual multimedia tasks ☆209 · Updated 11 months ago
- ☆168 · Updated 2 years ago
- Bling's object detection tool ☆56 · Updated 3 years ago
- A Chinese version of CLIP that supports Chinese cross-modal retrieval and representation generation. ☆169 · Updated 3 years ago
- Cross-lingual image captioning ☆91 · Updated 3 years ago
- ☆59 · Updated 3 years ago
- TaiSu (太素): a large-scale Chinese multimodal dataset (a hundred-million-scale Chinese vision-language pre-training dataset) ☆189 · Updated 2 years ago
- Product1M ☆90 · Updated 3 years ago
- Documentation for the WenLan API, which is used to obtain image and text features. ☆41 · Updated 3 years ago
- ☆65 · Updated 2 years ago
- WuDaoMM: a multimodal dataset project ☆74 · Updated 3 years ago
- Search photos on Unsplash based on OpenAI's CLIP model; supports search with joint image+text queries and attention visualization. ☆223 · Updated 4 years ago
- ☆258 · Updated 3 years ago
- ☆32 · Updated 3 years ago
- A Chinese OFA model built on the transformers architecture ☆137 · Updated 2 years ago
- An official implementation for "UniVL: A Unified Video and Language Pre-Training Model for Multimodal Understanding and Generation" ☆364 · Updated last year
- Cross-View Language Modeling: Towards Unified Cross-Lingual Cross-Modal Pre-training (ACL 2023) ☆91 · Updated 2 years ago
- ☆71 · Updated 7 months ago
- ☆18 · Updated 3 years ago
- Youku-mPLUG: A 10 Million Large-scale Chinese Video-Language Pre-training Dataset and Benchmarks ☆301 · Updated 2 years ago
- The world's first large-scale multi-modal short-video encyclopedia, where the primitive units are items, aspects, and short videos. ☆66 · Updated 2 years ago
- Research code for the EMNLP 2020 paper "HERO: Hierarchical Encoder for Video+Language Omni-representation Pre-training" ☆236 · Updated 4 years ago
- METER: A Multimodal End-to-end TransformER Framework ☆375 · Updated 3 years ago
- An image captioning model based on ClipCap ☆320 · Updated 3 years ago
- [CVPR 2021 Best Student Paper Honorable Mention, Oral] Official PyTorch code for ClipBERT, an efficient framework for end-to-end learning… ☆723 · Updated 2 years ago
- PyTorch implementation of MVP: a multi-stage vision-language pre-training framework ☆34 · Updated 2 years ago
- X-VLM: Multi-Grained Vision Language Pre-Training (ICML 2022) ☆488 · Updated 3 years ago
- Using VideoBERT to tackle video prediction ☆133 · Updated 4 years ago
- 500,000 multimodal short videos and baseline models (TensorFlow 2.0). ☆135 · Updated 6 years ago
- UNIMO: Towards Unified-Modal Understanding and Generation via Cross-Modal Contrastive Learning ☆70 · Updated 4 years ago