BAAI-WuDao / BriVL
Bridging Vision and Language Model
☆284 · Updated 2 years ago
Alternatives and similar repositories for BriVL
Users interested in BriVL are comparing it to the repositories listed below.
- ☆167 · Updated last year
- Enriching MS-COCO with Chinese sentences and tags for cross-lingual multimedia tasks ☆206 · Updated 7 months ago
- Bling's object detection tool ☆56 · Updated 2 years ago
- A Chinese version of CLIP that achieves Chinese cross-modal retrieval and representation generation ☆169 · Updated 2 years ago
- TaiSu (太素): a large-scale Chinese multimodal dataset (a billion-scale Chinese vision-language pre-training dataset) ☆191 · Updated last year
- Documentation for the WenLan API, used to obtain image and text features ☆39 · Updated 2 years ago
- ☆66 · Updated last year
- ☆60 · Updated 2 years ago
- Product1M ☆87 · Updated 2 years ago
- ☆251 · Updated 2 years ago
- WuDaoMM: a multimodal dataset project ☆74 · Updated 3 years ago
- Cross-lingual image captioning ☆89 · Updated 3 years ago
- Search photos on Unsplash using OpenAI's CLIP model, with support for joint image+text queries and attention visualization ☆223 · Updated 4 years ago
- A Chinese OFA model based on the transformers architecture ☆137 · Updated 2 years ago
- ☆32 · Updated 3 years ago
- Cross-View Language Modeling: Towards Unified Cross-Lingual Cross-Modal Pre-training (ACL 2023) ☆91 · Updated 2 years ago
- ☆69 · Updated 3 months ago
- X-VLM: Multi-Grained Vision Language Pre-Training (ICML 2022) ☆484 · Updated 2 years ago
- ☆19 · Updated 3 years ago
- Research code for the EMNLP 2020 paper "HERO: Hierarchical Encoder for Video+Language Omni-representation Pre-training" ☆235 · Updated 4 years ago
- Youku-mPLUG: A 10 Million Large-scale Chinese Video-Language Pre-training Dataset and Benchmarks ☆301 · Updated last year
- An official implementation of "UniVL: A Unified Video and Language Pre-Training Model for Multimodal Understanding and Generation" ☆361 · Updated last year
- METER: A Multimodal End-to-end TransformER Framework ☆373 · Updated 2 years ago
- An image captioning model based on ClipCap ☆312 · Updated 3 years ago
- The world's first large-scale multimodal short-video encyclopedia, where the primitive units are items, aspects, and short videos ☆64 · Updated last year
- Using VideoBERT to tackle video prediction ☆132 · Updated 4 years ago
- [CVPR 2021 Best Student Paper Honorable Mention, Oral] Official PyTorch code for ClipBERT, an efficient framework for end-to-end learning… ☆722 · Updated 2 years ago
- UNIMO: Towards Unified-Modal Understanding and Generation via Cross-Modal Contrastive Learning ☆70 · Updated 4 years ago
- Code for "TCL: Vision-Language Pre-Training with Triple Contrastive Learning" (CVPR 2022) ☆265 · Updated 11 months ago
- [AAAI 2021] Code for "Similarity Reasoning and Filtration for Image-Text Matching" ☆217 · Updated last year
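Most of the repositories above (BriVL, the Chinese CLIP variants, X-VLM, TCL, the image-text matching work) share one core mechanism: images and texts are encoded into a shared embedding space, and retrieval ranks candidates by cosine similarity. A minimal sketch of that retrieval step, using random vectors in place of real encoder outputs; every name below is illustrative and not taken from any listed repository:

```python
import numpy as np

def l2_normalize(x, axis=-1):
    # Scale each vector to unit length so a dot product equals cosine similarity.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def retrieve(image_embs, text_embs, top_k=3):
    """Rank texts for each image by cosine similarity (CLIP-style retrieval)."""
    img = l2_normalize(np.asarray(image_embs, dtype=np.float64))
    txt = l2_normalize(np.asarray(text_embs, dtype=np.float64))
    sims = img @ txt.T  # (n_images, n_texts) cosine-similarity matrix
    # Sort each row in descending similarity and keep the top_k text indices.
    return np.argsort(-sims, axis=1)[:, :top_k]

# Toy data: 4 "text" vectors and 2 "image" vectors in a shared 8-dim space;
# each image is a lightly perturbed copy of text 1 or text 3 respectively.
rng = np.random.default_rng(0)
texts = rng.normal(size=(4, 8))
images = texts[[1, 3]] + 0.05 * rng.normal(size=(2, 8))
ranks = retrieve(images, texts, top_k=1)  # best-matching text index per image
```

In a real system the toy vectors would come from a model's image and text encoders; the ranking step itself is unchanged, which is why the same retrieval code serves CLIP-style models across languages.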