sterzhang / PVIT
Official repository of Personalized Visual Instruction Tuning
☆32 · Updated 5 months ago
Alternatives and similar repositories for PVIT
Users interested in PVIT are comparing it to the repositories listed below.
- (ICLR 2025 Spotlight) Official code repository for Interleaved Scene Graph. ☆27 · Updated this week
- Implementation and dataset for the paper "Can MLLMs Perform Text-to-Image In-Context Learning?" ☆40 · Updated 2 months ago
- Official implementation of MIA-DPO. ☆63 · Updated 6 months ago
- Code for "CAFe: Unifying Representation and Generation with Contrastive-Autoregressive Finetuning". ☆21 · Updated 4 months ago
- [ICLR 2025] SAFREE: Training-Free and Adaptive Guard for Safe Text-to-Image and Video Generation. ☆43 · Updated 6 months ago
- [ICLR 2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models. ☆85 · Updated 10 months ago
- [ECCV 2024] Learning Video Context as Interleaved Multimodal Sequences. ☆40 · Updated 4 months ago
- ☆11 · Updated 10 months ago
- The code repository of UniRL. ☆36 · Updated 2 months ago
- Visual Programming for Text-to-Image Generation and Evaluation (NeurIPS 2023). ☆56 · Updated 2 years ago
- ☆67 · Updated last month
- VideoHallucer: the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs). ☆36 · Updated 4 months ago
- VPEval codebase from Visual Programming for Text-to-Image Generation and Evaluation (NeurIPS 2023). ☆45 · Updated last year
- [NeurIPS 2024] Official PyTorch implementation of LoTLIP: Improving Language-Image Pre-training for Long Text Understanding. ☆43 · Updated 6 months ago
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models. ☆77 · Updated last year
- [ICCV 2025] Official code for "AIM: Adaptive Inference of Multi-Modal LLMs via Token Merging and Pruning". ☆34 · Updated last month
- Code for "AVG-LLaVA: A Multimodal Large Model with Adaptive Visual Granularity". ☆30 · Updated 9 months ago
- DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding. ☆65 · Updated last month
- [ECCV 2024] API: Attention Prompting on Image for Large Vision-Language Models. ☆98 · Updated 9 months ago
- Code for the ICLR 2025 paper "Towards Semantic Equivalence of Tokenization in Multimodal LLM". ☆70 · Updated 3 months ago
- [CVPR 2024] Official implementation of the paper "Synthesize, Diagnose, and Optimize: Towards Fine-Grained Vision-Language Understanding". ☆45 · Updated last month
- Official code for the paper "GRIT: Teaching MLLMs to Think with Images". ☆115 · Updated this week
- PyTorch code for "Contrastive Region Guidance: Improving Grounding in Vision-Language Models without Training". ☆35 · Updated last year
- Official repository for LLaVA-Reward (ICCV 2025): Multimodal LLMs as Customized Reward Models for Text-to-Image Generation. ☆14 · Updated last week
- Reasoning to Attend: Try to Understand How <SEG> Token Works (CVPR 2025), by Rui Qian, Xin Yin, and Dejing Dou. ☆38 · Updated 3 months ago
- ☆35 · Updated 6 months ago
- Official implementation of "Editing Massive Concepts in Text-to-Image Diffusion Models". ☆19 · Updated last year
- Official repo for the paper "[CLS] Token Tells Everything Needed for Training-free Efficient MLLMs". ☆22 · Updated 3 months ago
- [NeurIPS 2024] Official PyTorch implementation of "Improving Compositional Reasoning of CLIP via Synthetic Vision-Language Negatives". ☆41 · Updated 8 months ago
- Video-Holmes: Can MLLM Think Like Holmes for Complex Video Reasoning? ☆63 · Updated 3 weeks ago