AILab-CVC / VL-GPT
VL-GPT: A Generative Pre-trained Transformer for Vision and Language Understanding and Generation
☆86 · Updated 10 months ago
Alternatives and similar repositories for VL-GPT
Users interested in VL-GPT are comparing it to the libraries listed below.
- DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception ☆146 · Updated 7 months ago
- [NeurIPS 2024] Vision Model Pre-training on Interleaved Image-Text Data via Latent Compression Learning ☆69 · Updated 5 months ago
- Official code for "What Makes for Good Visual Tokenizers for Large Language Models?" ☆58 · Updated 2 years ago
- Official repo for StableLLAVA ☆95 · Updated last year
- ☆133 · Updated last year
- [ICLR 2025] Reconstructive Visual Instruction Tuning ☆97 · Updated 3 months ago
- ☆118 · Updated last year
- ☆58 · Updated last year
- ☆30 · Updated 11 months ago
- [ACL'24 Oral] Tuning Large Multimodal Models for Videos using Reinforcement Learning from AI Feedback ☆67 · Updated 10 months ago
- ☆91 · Updated last year
- [NeurIPS 2024] Official implementation of the paper "DeepStack: Deeply Stacking Visual Tokens is Surprisingly Simple and Effect…" ☆37 · Updated last year
- Code for the paper "Towards Open-Ended Visual Recognition with Large Language Model" ☆98 · Updated 11 months ago
- [ICLR 2025] Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want ☆82 · Updated last month
- [ECCV 2024] Official code implementation of Merlin: Empowering Multimodal LLMs with Foresight Minds ☆94 · Updated last year
- [TMLR] Public code repo for the paper "A Single Transformer for Scalable Vision-Language Modeling" ☆143 · Updated 7 months ago
- [CVPR 2024] CapsFusion: Rethinking Image-Text Data at Scale ☆211 · Updated last year
- ☆98 · Updated last year
- ☆66 · Updated 11 months ago
- Official page for "What Makes for Good Visual Instructions? Synthesizing Complex Visual Reasoning Instructions for Visual Ins…" ☆19 · Updated last year
- ☆86 · Updated 2 weeks ago
- [ICLR 2025] IDA-VLM: Towards Movie Understanding via ID-Aware Large Vision-Language Model ☆31 · Updated 7 months ago
- [NeurIPS 2024] Efficient Large Multi-modal Models via Visual Context Compression ☆60 · Updated 4 months ago
- Pink: Unveiling the Power of Referential Comprehension for Multi-modal LLMs ☆91 · Updated 5 months ago
- SVIT: Scaling up Visual Instruction Tuning ☆163 · Updated last year
- A Comprehensive Benchmark and Toolkit for Evaluating Video-based Large Language Models ☆128 · Updated last year
- [ICLR 2025] AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark ☆113 · Updated last month
- [arXiv:2502.05178] QLIP: Text-Aligned Visual Tokenization Unifies Auto-Regressive Multimodal Understanding and Generation ☆76 · Updated 4 months ago
- Repository for the paper "Position-Enhanced Visual Instruction Tuning for Multimodal Large Language Models" ☆37 · Updated last year
- ☆44 · Updated last year