phellonchen / awesome-Vision-and-Language-Pre-training
Recent Advances in Vision and Language Pre-training (VLP)
☆292 · Updated last year
Alternatives and similar repositories for awesome-Vision-and-Language-Pre-training:
Users interested in awesome-Vision-and-Language-Pre-training commonly compare it to the repositories listed below. Two short, illustrative sketches of techniques that recur across these projects (image-text contrastive pre-training and adapter-based parameter-efficient tuning) follow the list.
- A Survey on multimodal learning research. ☆321 · Updated last year
- METER: A Multimodal End-to-end TransformER Framework ☆366 · Updated 2 years ago
- Code for TCL: Vision-Language Pre-Training with Triple Contrastive Learning, CVPR 2022 ☆260 · Updated 5 months ago
- X-VLM: Multi-Grained Vision Language Pre-Training (ICML 2022) ☆468 · Updated 2 years ago
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆647 · Updated 2 years ago
- [ICLR 2022] Code for "How Much Can CLIP Benefit Vision-and-Language Tasks?" https://arxiv.org/abs/2107.06383 ☆410 · Updated 2 years ago
- PyTorch code for "VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks" (CVPR 2022) ☆204 · Updated 2 years ago
- All-In-One VLM: Image + Video + Transfer to Other Languages / Domains (TPAMI 2023) ☆153 · Updated 6 months ago
- [NeurIPS 2023] Text data, code, and pre-trained models for the paper "Improving CLIP Training with Language Rewrites" ☆267 · Updated last year
- MixGen: A New Multi-Modal Data Augmentation ☆121 · Updated 2 years ago
- [MIR-2023-Survey] A continuously updated paper list for multi-modal pre-trained big models ☆286 · Updated 2 weeks ago
- Align and Prompt: Video-and-Language Pre-training with Entity Prompts ☆186 · Updated 2 years ago
- PyTorch code for "Unifying Vision-and-Language Tasks via Text Generation" (ICML 2021) ☆368 · Updated last year
- MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning ☆135 · Updated last year
- [NeurIPS 2023] Self-Chained Image-Language Model for Video Localization and Question Answering ☆186 · Updated last year
- PyTorch code for "LST: Ladder Side-Tuning for Parameter and Memory Efficient Transfer Learning" ☆234 · Updated 2 years ago
- Project page for VinVL ☆351 · Updated last year
- [CVPR 2023] All in One: Exploring Unified Video-Language Pre-training ☆280 · Updated last year
- A collection of parameter-efficient transfer learning papers focusing on computer vision and multimodal domains. ☆397 · Updated 5 months ago
- [CVPR'21 Oral] Seeing Out of tHe bOx: End-to-End Pre-training for Vision-Language Representation Learning ☆207 · Updated 2 years ago
- The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆198 · Updated 11 months ago
- Coarse-to-Fine Vision-Language Pre-training with Fusion in the Backbone ☆128 · Updated last year
- [NeurIPS 2023] Code and Model for VAST: A Vision-Audio-Subtitle-Text Omni-Modality Foundation Model and Dataset ☆269 · Updated 11 months ago
- [ICLR'24] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning ☆271 · Updated 11 months ago
- [TPAMI 2024] Codes and Models for VALOR: Vision-Audio-Language Omni-Perception Pretraining Model and Dataset ☆277 · Updated 2 months ago
- Update 2020 ☆75 · Updated 2 years ago
- Research Trends in LLM-guided Multimodal Learning. ☆357 · Updated last year
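
Many of the projects above (DeCLIP, TCL, X-VLM, the CLIP language-rewrites work) build on a symmetric image-text contrastive objective. The sketch below only illustrates that general recipe; the function name, embedding dimensions, and temperature are assumptions, and none of it is taken from any listed repository.

```python
import torch
import torch.nn.functional as F

def clip_style_contrastive_loss(image_emb: torch.Tensor,
                                text_emb: torch.Tensor,
                                temperature: float = 0.07) -> torch.Tensor:
    """image_emb, text_emb: (batch, dim) outputs of the image and text encoders."""
    # L2-normalize so the dot product is a cosine similarity.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # Pairwise similarity matrix, scaled by the temperature.
    logits = image_emb @ text_emb.t() / temperature

    # Matched image-text pairs sit on the diagonal.
    targets = torch.arange(logits.size(0), device=logits.device)

    # Symmetric cross-entropy over image-to-text and text-to-image directions.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_i2t + loss_t2i)

# Example with random embeddings:
# img, txt = torch.randn(8, 256), torch.randn(8, 256)
# print(clip_style_contrastive_loss(img, txt).item())
```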
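
Several other entries (VL-Adapter, LST, the parameter-efficient transfer learning collection) keep the pre-trained backbone frozen and train only small inserted modules. Below is a minimal bottleneck-adapter sketch of that idea; the class name and dimensions are illustrative assumptions, not code from any of those repositories.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Small residual MLP inserted into a frozen transformer layer."""
    def __init__(self, hidden_dim: int = 768, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)  # project down
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck_dim, hidden_dim)    # project back up

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection preserves the frozen backbone's features;
        # only the adapter's few parameters are updated during fine-tuning.
        return x + self.up(self.act(self.down(x)))

# Usage: freeze the backbone and train only the adapters, e.g.
# for p in backbone.parameters():
#     p.requires_grad_(False)
```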