YehLi / xmodaler
X-modaler is a versatile and high-performance codebase for cross-modal analytics (e.g., image captioning, video captioning, vision-language pre-training, visual question answering, visual commonsense reasoning, and cross-modal retrieval).
☆969 · Updated 2 years ago
Alternatives and similar repositories for xmodaler
Users interested in xmodaler are comparing it to the libraries listed below.
- Multi-Modal learning toolkit based on PaddlePaddle and PyTorch, supporting multiple applications such as multi-modal classification, cros… ☆477 · Updated 2 years ago
- ☆101 · Updated 3 years ago
- The official source code for the paper Consensus-Aware Visual-Semantic Embedding for Image-Text Matching (ECCV 2020) ☆167 · Updated 3 years ago
- [CVPR 2021 Best Student Paper Honorable Mention, Oral] Official PyTorch code for ClipBERT, an efficient framework for end-to-end learning… ☆721 · Updated 2 years ago
- A general video understanding codebase from SenseTime X-Lab ☆445 · Updated 4 years ago
- VideoX: a collection of video cross-modal models ☆1,038 · Updated last year
- An official implementation for "UniVL: A Unified Video and Language Pre-Training Model for Multimodal Understanding and Generation" ☆359 · Updated last year
- This repository focuses on Image Captioning & Video Captioning & Seq-to-Seq Learning & NLP ☆412 · Updated 2 years ago
- A curated list of deep learning resources for video-text retrieval. ☆627 · Updated last year
- An official implementation for "CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval" ☆976 · Updated last year
- [CVPR'2023 Highlight & TPAMI] Cap4Video: What Can Auxiliary Captions Do for Text-Video Retrieval? ☆241 · Updated 8 months ago
- X-VLM: Multi-Grained Vision Language Pre-Training (ICML 2022) ☆482 · Updated 2 years ago
- [CVPR 2021] The source code for our paper "Removing the Background by Adding the Background: Towards Background Robust Self-supervised Vid… ☆134 · Updated 4 years ago
- The Paper List of Large Multi-Modality Model (Perception, Generation, Unification), Parameter-Efficient Finetuning, Vision-Language Pretr… ☆426 · Updated 7 months ago
- [AAAI'2021] MVFNet: Multi-View Fusion Network for Efficient Video Recognition ☆134 · Updated 3 years ago
- [ECCV 2022] & [IJCV 2024] Official implementation of the paper: Audio-Visual Segmentation (with Semantics) ☆401 · Updated 8 months ago
- A curated list of Visual Question Answering (VQA) (Image/Video Question Answering), Visual Question Generation, Visual Dialog, Visual Common… ☆665 · Updated 2 years ago
- Oscar and VinVL ☆1,051 · Updated last year
- Implementation of 'X-Linear Attention Networks for Image Captioning' [CVPR 2020] ☆274 · Updated 4 years ago
- METER: A Multimodal End-to-end TransformER Framework ☆373 · Updated 2 years ago
- Large-Scale Visual Representation Model ☆699 · Updated 2 months ago
- Video embeddings for retrieval with natural language queries ☆341 · Updated 2 years ago
- Research code for CVPR 2022 paper "SwinBERT: End-to-End Transformers with Sparse Attention for Video Captioning" ☆240 · Updated 3 years ago
- A PyTorch reimplementation of bottom-up-attention models ☆302 · Updated 3 years ago
- [CVPR'23] Universal Instance Perception as Object Discovery and Retrieval ☆1,274 · Updated 2 years ago
- [NeurIPS 2022 Spotlight] VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training ☆1,555 · Updated last year
- Code for ICLR 2020 paper "VL-BERT: Pre-training of Generic Visual-Linguistic Representations". ☆744 · Updated 2 years ago
- A comprehensive list of Awesome Contrastive Learning Papers & Codes. Research includes, but is not limited to: CV, NLP, Audio, Video, Multim… ☆411 · Updated 3 years ago
- A lightweight, scalable, and general framework for visual question answering research ☆325 · Updated 3 years ago
- A curated (most recent) list of resources for Learning with Noisy Labels ☆702 · Updated 9 months ago