SuperBruceJia / Awesome-Large-Vision-Language-Model
Awesome Large Vision-Language Model: A Curated List of Large Vision-Language Models
☆39 · Updated 3 months ago
Alternatives and similar repositories for Awesome-Large-Vision-Language-Model
Users interested in Awesome-Large-Vision-Language-Model are comparing it to the repositories listed below.
- Awesome Mixture of Experts (MoE): A Curated List of Mixture of Experts (MoE) and Mixture of Multimodal Experts (MoME) ☆46 · Updated last month
- Collection of Tools and Papers related to Adapters / Parameter-Efficient Transfer Learning / Fine-Tuning ☆200 · Updated last year
- Reading list for Multimodal Large Language Models ☆69 · Updated 2 years ago
- [NeurIPS 2024] A Novel Rank-Based Metric for Evaluating Large Language Models ☆54 · Updated 5 months ago
- This project aims to collect and collate various datasets for multimodal large model training, including but not limited to pre-training … ☆59 · Updated 6 months ago
- Residual Prompt Tuning: a method for faster and better prompt tuning. ☆56 · Updated 2 years ago
- ☆70 · Updated 5 months ago
- ☆131 · Updated 8 months ago
- Official Implementation for EMNLP 2024 (main) "AgentReview: Exploring Academic Peer Review with LLM Agent." ☆92 · Updated last year
- Parameter-Efficient Fine-Tuning for Foundation Models ☆99 · Updated 7 months ago
- MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria ☆72 · Updated last year
- This repository provides a comprehensive collection of research papers focused on multimodal representation learning, all of which have b… ☆81 · Updated 5 months ago
- Automatically update arXiv papers about LLM Reasoning, LLM Evaluation, LLM & MLLM and Video Understanding using GitHub Actions. ☆127 · Updated this week
- ☆95 · Updated last year
- [ICLR 2024] This is the repository for the paper titled "DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning" ☆98 · Updated last year
- ☆33 · Updated 10 months ago
- [NeurIPS 2024] CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal LLMs ☆129 · Updated 6 months ago
- Implementation of the paper: "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆109 · Updated last week
- [NeurIPS 2023] Make Your Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning ☆33 · Updated 2 years ago
- Official code for our paper, "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆135 · Updated 7 months ago
- A curated list of Model Merging methods. ☆92 · Updated last year
- [NeurIPS 2023] Parameter-efficient Tuning of Large-scale Multimodal Foundation Model ☆88 · Updated last year
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models ☆84 · Updated last year
- Enable Next-sentence Prediction for Large Language Models with Faster Speed, Higher Accuracy and Longer Context ☆39 · Updated last year
- Official Repository of "AtomThink: Multimodal Slow Thinking with Atomic Step Reasoning" ☆57 · Updated 3 months ago
- The official implementation of the paper "What Matters in Transformers? Not All Attention is Needed". ☆180 · Updated this week
- Survey of Small Language Models from Penn State, ... ☆214 · Updated 2 weeks ago
- Must-read Papers on Large Language Model (LLM) Continual Learning ☆148 · Updated 2 years ago
- PyTorch implementation of LIMoE ☆52 · Updated last year
- [NeurIPS 2024] MoME: Mixture of Multimodal Experts for Generalist Multimodal Large Language Models ☆74 · Updated 6 months ago