YehLi / xmodaler
X-modaler is a versatile and high-performance codebase for cross-modal analytics (e.g., image captioning, video captioning, vision-language pre-training, visual question answering, visual commonsense reasoning, and cross-modal retrieval).
☆970 · Updated 2 years ago
Alternatives and similar repositories for xmodaler:
Users interested in xmodaler are comparing it to the libraries listed below.
- This repository focuses on Image Captioning & Video Captioning & Seq-to-Seq Learning & NLP ☆413 · Updated 2 years ago
- Implementation of "X-Linear Attention Networks for Image Captioning" [CVPR 2020] ☆273 · Updated 3 years ago
- [CVPR 2021 Best Student Paper Honorable Mention, Oral] Official PyTorch code for ClipBERT, an efficient framework for end-to-end learning… ☆718 · Updated last year
- An official implementation for "UniVL: A Unified Video and Language Pre-Training Model for Multimodal Understanding and Generation" ☆349 · Updated 8 months ago
- METER: A Multimodal End-to-end TransformER Framework ☆368 · Updated 2 years ago
- ☆99 · Updated 3 years ago
- Research code for the CVPR 2022 paper "SwinBERT: End-to-End Transformers with Sparse Attention for Video Captioning" ☆238 · Updated 2 years ago
- Oscar and VinVL ☆1,047 · Updated last year
- A curated list of deep learning resources for video-text retrieval. ☆613 · Updated last year
- VideoX: a collection of video cross-modal models ☆1,012 · Updated 10 months ago
- A general video understanding codebase from SenseTime X-Lab ☆445 · Updated 4 years ago
- An official implementation for "CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval" ☆928 · Updated 11 months ago
- Research code for the EMNLP 2020 paper "HERO: Hierarchical Encoder for Video+Language Omni-representation Pre-training" ☆231 · Updated 3 years ago
- Meshed-Memory Transformer for Image Captioning (CVPR 2020) ☆531 · Updated 2 years ago
- Multi-modal learning toolkit based on PaddlePaddle and PyTorch, supporting multiple applications such as multi-modal classification, cros… ☆470 · Updated last year
- awesome grounding: a curated list of research papers in visual grounding ☆1,066 · Updated last year
- The official source code for the paper "Consensus-Aware Visual-Semantic Embedding for Image-Text Matching" (ECCV 2020) ☆165 · Updated 3 years ago
- The paper list of Large Multi-Modality Models (Perception, Generation, Unification), Parameter-Efficient Finetuning, Vision-Language Pretr… ☆424 · Updated 3 months ago
- A PyTorch reimplementation of bottom-up-attention models ☆298 · Updated 2 years ago
- [ICLR 2022] Code for "How Much Can CLIP Benefit Vision-and-Language Tasks?" https://arxiv.org/abs/2107.06383 ☆410 · Updated 2 years ago
- PyTorch code for the paper "VSE++: Improving Visual-Semantic Embeddings with Hard Negatives" ☆503 · Updated 3 years ago
- X-VLM: Multi-Grained Vision Language Pre-Training (ICML 2022) ☆475 · Updated 2 years ago
- Code accompanying the paper "Fine-grained Video-Text Retrieval with Hierarchical Graph Reasoning" ☆211 · Updated 4 years ago
- Code for the ICML 2021 (long talk) paper "ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision" ☆1,447 · Updated last year
- Implementation of the CVPR 2020 paper "Graph Structured Network for Image-Text Matching" ☆167 · Updated 4 years ago
- PyTorch code for the ICCV 2019 paper "Visual Semantic Reasoning for Image-Text Matching" ☆297 · Updated 5 years ago
- Video embeddings for retrieval with natural language queries ☆339 · Updated 2 years ago
- Multi-Modal Transformer for Video Retrieval ☆258 · Updated 5 months ago
- Code for the ICLR 2020 paper "VL-BERT: Pre-training of Generic Visual-Linguistic Representations" ☆740 · Updated last year
- Code accompanying the paper "Say As You Wish: Fine-grained Control of Image Caption Generation with Abstract Scene Graphs" (Chen et al., … ☆198 · Updated 2 years ago