YehLi / xmodaler
X-modaler is a versatile and high-performance codebase for cross-modal analytics (e.g., image captioning, video captioning, vision-language pre-training, visual question answering, visual commonsense reasoning, and cross-modal retrieval).
☆1,031 · Updated last year
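For context, X-modaler is organized as a config-driven codebase in the spirit of detectron2. The sketch below shows roughly what training a captioning model with it might look like; the module paths, class names, and config file are assumptions based on that convention, not a verified API, so treat it as illustrative only.

```python
# Illustrative sketch only: the imports, class names, and config path
# below are assumptions based on xmodaler's detectron2-style design,
# not a verified API.
from xmodaler.config import get_cfg          # assumed config factory
from xmodaler.engine import DefaultTrainer   # assumed trainer class

cfg = get_cfg()
# Placeholder recipe path; real configs live under configs/ in the repo.
cfg.merge_from_file("configs/image_caption/updown.yaml")
cfg.OUTPUT_DIR = "./output/updown"

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```

In practice the repository ships its own training entry point (a train_net.py in the detectron2 style), which would be the usual way to launch such configs rather than hand-rolling a script like the one above.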
Alternatives and similar repositories for xmodaler:
Users interested in xmodaler are comparing it to the libraries listed below.
- Multi-Modal learning toolkit based on PaddlePaddle and PyTorch, supporting multiple applications such as multi-modal classification, cros… ☆564 · Updated last year
- Implementation of "X-Linear Attention Networks for Image Captioning" [CVPR 2020] ☆274 · Updated 3 years ago
- A curated list of deep learning resources for video-text retrieval. ☆595 · Updated last year
- VideoX: a collection of video cross-modal models ☆988 · Updated 6 months ago
- [CVPR 2021 Best Student Paper Honorable Mention, Oral] Official PyTorch code for ClipBERT, an efficient framework for end-to-end learning… ☆709 · Updated last year
- X-VLM: Multi-Grained Vision Language Pre-Training (ICML 2022) ☆452 · Updated 2 years ago
- This repository focuses on Image Captioning & Video Captioning & Seq-to-Seq Learning & NLP ☆415 · Updated 2 years ago
- ☆96 · Updated 3 years ago
- An official implementation for "CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval" ☆888 · Updated 7 months ago
- Meshed-Memory Transformer for Image Captioning (CVPR 2020) ☆520 · Updated last year
- [CVPR 2023 Highlight & TPAMI] Cap4Video: What Can Auxiliary Captions Do for Text-Video Retrieval? ☆242 · Updated this week
- The Paper List of Large Multi-Modality Model, Parameter-Efficient Finetuning, Vision-Language Pretraining, Conventional Image-Text Matchi… ☆403 · Updated 4 months ago
- The official source code for the paper "Consensus-Aware Visual-Semantic Embedding for Image-Text Matching" (ECCV 2020) ☆172 · Updated 2 years ago
- METER: A Multimodal End-to-end TransformER Framework ☆362 · Updated 2 years ago
- Code accompanying the paper "Fine-grained Video-Text Retrieval with Hierarchical Graph Reasoning" ☆209 · Updated 4 years ago
- Recent Advances in Vision and Language PreTrained Models (VL-PTMs) ☆1,143 · Updated 2 years ago
- An official implementation for "UniVL: A Unified Video and Language Pre-Training Model for Multimodal Understanding and Generation" ☆339 · Updated 4 months ago
- Research code for the CVPR 2022 paper "SwinBERT: End-to-End Transformers with Sparse Attention for Video Captioning" ☆238 · Updated 2 years ago
- A curated (most recent) list of resources for Learning with Noisy Labels ☆691 · Updated last month
- A general video understanding codebase from SenseTime X-Lab ☆472 · Updated 3 years ago
- Code for "TCL: Vision-Language Pre-Training with Triple Contrastive Learning" (CVPR 2022) ☆260 · Updated 2 months ago
- A PyTorch reimplementation of bottom-up-attention models ☆294 · Updated 2 years ago
- awesome grounding: A curated list of research papers in visual grounding ☆1,033 · Updated last year
- [ICLR 2022] Code for "How Much Can CLIP Benefit Vision-and-Language Tasks?" https://arxiv.org/abs/2107.06383 ☆403 · Updated 2 years ago
- ☆232 · Updated last year
- A curated list of Visual Question Answering (VQA) (Image/Video Question Answering), Visual Question Generation, Visual Dialog, Visual Common… ☆661 · Updated last year
- Recent Advances in Vision and Language Pre-training (VLP) ☆289 · Updated last year
- Grid features pre-training code for visual question answering ☆268 · Updated 3 years ago
- Video embeddings for retrieval with natural language queries ☆336 · Updated last year
- Unofficial PyTorch implementation of Self-critical Sequence Training for Image Captioning, among others ☆998 · Updated last year