wangxiao5791509 / MultiModal_BigModels_Survey
[MIR-2023-Survey] A continuously updated paper list for multi-modal pre-trained big models
☆288 · Updated last month
Alternatives and similar repositories for MultiModal_BigModels_Survey
Users interested in MultiModal_BigModels_Survey are comparing it to the repositories listed below.
- A survey on multimodal learning research. ☆330 · Updated 2 years ago
- Awesome_Multimodel is a curated GitHub repository that provides a comprehensive collection of resources for Multimodal Large Language Mod… ☆341 · Updated 5 months ago
- Implementation of the paper "Towards a Unified View of Parameter-Efficient Transfer Learning" (ICLR 2022) ☆533 · Updated 3 years ago
- Recent Advances in Vision and Language Pre-training (VLP) ☆294 · Updated 2 years ago
- Code for TCL: Vision-Language Pre-Training with Triple Contrastive Learning (CVPR 2022) ☆265 · Updated 11 months ago
- Update 2020 ☆76 · Updated 3 years ago
- MMICL, a state-of-the-art VLM with in-context learning ability, from PKU ☆354 · Updated last year
- All-In-One VLM: Image + Video + Transfer to Other Languages / Domains (TPAMI 2023) ☆165 · Updated last year
- A collection of parameter-efficient transfer learning papers focusing on computer vision and multimodal domains. ☆406 · Updated 11 months ago
- mPLUG: Effective and Efficient Vision-Language Learning by Cross-modal Skip-connections (EMNLP 2022) ☆96 · Updated 2 years ago
- Paper list for in-context learning 🌷 ☆184 · Updated last year
- [AAAI 2024] Structure-CLIP: Towards Scene Graph Knowledge to Enhance Multi-modal Structured Representations ☆147 · Updated last year
- The first released survey paper on hallucinations of large vision-language models (LVLMs). To keep track of this field and contin… ☆77 · Updated last year
- [NeurIPS 2023 Datasets and Benchmarks Track] LAMM: Multi-Modal Large Language Models and Applications as AI Agents ☆317 · Updated last year
- [ICLR 2024] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning ☆286 · Updated last year
- Implementation of the CVPR 2023 paper "Prompting Large Language Models with Answer Heuristics for Knowledge-based Visual Question Answering" ☆277 · Updated 3 months ago
- The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆222 · Updated 3 weeks ago
- The official repository for Retrieval Augmented Visual Question Answering ☆237 · Updated 8 months ago
- [CVPR 2024] LION: Empowering Multimodal Large Language Model with Dual-Level Visual Knowledge ☆151 · Updated last week
- [CVPR 2024 Highlight] OPERA: Alleviating Hallucination in Multi-Modal Large Language Models via Over-Trust Penalty and Retrospection-Allo… ☆366 · Updated last year
- X-VLM: Multi-Grained Vision Language Pre-Training (ICML 2022) ☆484 · Updated 2 years ago
- Research trends in LLM-guided multimodal learning. ☆355 · Updated last year
- Efficient Multimodal Large Language Models: A Survey ☆371 · Updated 4 months ago
- [NeurIPS 2023] Text data, code, and pre-trained models for the paper "Improving CLIP Training with Language Rewrites" ☆287 · Updated last year
- PyTorch code for "LST: Ladder Side-Tuning for Parameter and Memory Efficient Transfer Learning" ☆238 · Updated 2 years ago
- mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video (ICML 2023) ☆229 · Updated 2 years ago
- ☆350 · Updated last year
- mPLUG-HalOwl: Multimodal Hallucination Evaluation and Mitigating ☆96 · Updated last year
- [ICLR 2024 (Spotlight)] "Frozen Transformers in Language Models are Effective Visual Encoder Layers" ☆243 · Updated last year
- [CVPR 2024] A benchmark for evaluating Multimodal LLMs using multiple-choice questions. ☆348 · Updated 8 months ago