jie040109 / MLAE
☆21 · Updated 5 months ago
Related projects
Alternatives and complementary repositories for MLAE
- CLIP-MoE: Mixture of Experts for CLIP ☆17 · Updated last month
- This repository contains the code for SFT, RLHF, and DPO, designed for vision-based LLMs, including the LLaVA models and the LLaMA-3.2-vi… ☆81 · Updated 3 weeks ago
- Code for ACL 2024 paper "MELoRA: Mini-Ensemble Low-Rank Adapter for Parameter-Efficient Fine-Tuning" ☆14 · Updated 5 months ago
- [ECCV 2024] Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs ☆64 · Updated last week
- ☆74 · Updated 4 months ago
- ☆53 · Updated 3 months ago
- [ICML 2024 Oral] Official code repository for MLLM-as-a-Judge. ☆54 · Updated 3 months ago
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning ☆71 · Updated this week
- [ECCV 2024] API: Attention Prompting on Image for Large Vision-Language Models ☆45 · Updated last month
- Implementation of "VL-Mamba: Exploring State Space Models for Multimodal Learning" ☆78 · Updated 7 months ago
- ☆21 · Updated 3 months ago
- [Preprint] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆49 · Updated 2 months ago
- [NeurIPS 2023] Parameter-efficient Tuning of Large-scale Multimodal Foundation Model ☆83 · Updated 11 months ago
- ☆27 · Updated 9 months ago
- ☆23 · Updated 6 months ago
- [EMNLP'23] The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆72 · Updated 7 months ago
- This is the official repo for Debiasing Large Visual Language Models, including a Post-Hoc debias method and Visual Debias Decoding strat… ☆71 · Updated 7 months ago
- LoRA-XS: Low-Rank Adaptation with Extremely Small Number of Parameters ☆20 · Updated last week
- HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data (Accepted by CVPR 2024) ☆41 · Updated 3 months ago
- [NeurIPS 2024] Repo for the paper "ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models" ☆89 · Updated last month
- The official code of the paper "PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction" ☆42 · Updated last week
- MoCLE (First MLLM with MoE for instruction customization and generalization!) (https://arxiv.org/abs/2312.12379) ☆29 · Updated 7 months ago
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆30 · Updated 3 weeks ago
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆27 · Updated 4 months ago
- [ICML 2024] Memory-Space Visual Prompting for Efficient Vision-Language Fine-Tuning ☆43 · Updated 6 months ago
- Awesome-Low-Rank-Adaptation ☆34 · Updated last month
- ☆47 · Updated 3 months ago
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models ☆42 · Updated 5 months ago
- A benchmark for evaluating the capabilities of large vision-language models (LVLMs) ☆33 · Updated 11 months ago