SuperBruceJia / Awesome-Large-Vision-Language-Model
Awesome Large Vision-Language Model: A Curated List of Large Vision-Language Models
☆27 · Updated 9 months ago
Alternatives and similar repositories for Awesome-Large-Vision-Language-Model
Users interested in Awesome-Large-Vision-Language-Model are comparing it to the libraries listed below.
- Visual question answering prompting recipes for large vision-language models ☆26 · Updated 10 months ago
- LibMoE: A Library for Comprehensive Benchmarking of Mixture of Experts in Large Language Models ☆40 · Updated last month
- Code for "Efficient Test-Time Scaling via Self-Calibration" ☆14 · Updated 4 months ago
- Official repo of M$^2$PT: Multimodal Prompt Tuning for Zero-shot Instruction Learning ☆24 · Updated 3 months ago
- Parameter-Efficient Fine-Tuning of State Space Models (ICML 2025) ☆17 · Updated last month
- The official PyTorch implementation of the paper "MLAE: Masked LoRA Experts for Visual Parameter-Efficient Fine-Tuning" ☆28 · Updated 7 months ago
- [NeurIPS 2024] MoME: Mixture of Multimodal Experts for Generalist Multimodal Large Language Models ☆68 · Updated 2 months ago
- [CVPR 2025] An implementation of the paper "Pre-Instruction Data Selection for Visual Instruction Tuning" ☆12 · Updated last month
- Source code for VB-LoRA: Extreme Parameter-Efficient Fine-Tuning with Vector Banks (NeurIPS 2024) ☆39 · Updated 9 months ago
- ☆10 · Updated 6 months ago
- Awesome Low-Rank Adaptation ☆39 · Updated last month
- [CVPR 2025] Official PyTorch code for "MMRL: Multi-Modal Representation Learning for Vision-Language Models" and its extension "MMRL++: P… ☆57 · Updated 3 weeks ago
- Official implementation of DiffCLIP: Differential Attention Meets CLIP ☆36 · Updated 4 months ago
- DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding ☆65 · Updated last month
- [ACL 2023] PuMer: Pruning and Merging Tokens for Efficient Vision Language Models ☆32 · Updated 9 months ago
- ☆17 · Updated 9 months ago
- [CVPR 2025] Synthetic Data is an Elegant GIFT for Continual Vision-Language Models ☆16 · Updated 2 weeks ago
- Parameter-Efficient Fine-Tuning for Foundation Models ☆75 · Updated 3 months ago
- Code and benchmark for the paper "A Practitioner's Guide to Continual Multimodal Pretraining" [NeurIPS'24] ☆57 · Updated 7 months ago
- Recent Advances on MLLMs' Reasoning Ability ☆24 · Updated 3 months ago
- Code for our paper "All in an Aggregated Image for In-Image Learning" ☆30 · Updated last year
- Code for NOLA, an implementation of "NOLA: Compressing LoRA using Linear Combination of Random Basis" ☆56 · Updated 10 months ago
- 🔥 MixPro: Data Augmentation with MaskMix and Progressive Attention Labeling for Vision Transformer [Official, ICLR 2023] ☆21 · Updated last year
- Evaluating Deep Multimodal Reasoning in Vision-Centric Agentic Tasks ☆14 · Updated last month
- ☆9 · Updated 2 years ago
- ☆24 · Updated last week
- Collection of tools and papers related to Adapters / Parameter-Efficient Transfer Learning / Fine-Tuning ☆194 · Updated last year
- Source code for the paper "A Stronger Mixture of Low-Rank Experts for Fine-Tuning Foundation Models" (ICML 2025) ☆27 · Updated 3 months ago
- Build a daily academic subscription pipeline! Get daily arXiv papers and corresponding ChatGPT summaries with pre-defined keywords. It is… ☆39 · Updated 2 years ago
- AdaMoLE: Adaptive Mixture of LoRA Experts ☆33 · Updated 9 months ago