AILab-CVC / VL-GPT
VL-GPT: A Generative Pre-trained Transformer for Vision and Language Understanding and Generation
☆86 · Updated 11 months ago
Alternatives and similar repositories for VL-GPT
Users interested in VL-GPT are comparing it to the libraries listed below.
- DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception ☆151 · Updated 8 months ago
- [NeurIPS 2024] Vision Model Pre-training on Interleaved Image-Text Data via Latent Compression Learning ☆70 · Updated 6 months ago
- [NeurIPS-24] Official implementation of the paper "DeepStack: Deeply Stacking Visual Tokens is Surprisingly Simple and Effective for LMMs" ☆39 · Updated last year
- [ICLR'25] Reconstructive Visual Instruction Tuning ☆103 · Updated 4 months ago
- Official code for "What Makes for Good Visual Tokenizers for Large Language Models?" ☆58 · Updated 2 years ago
- A Comprehensive Benchmark and Toolkit for Evaluating Video-based Large Language Models! ☆131 · Updated last year
- Official repo for StableLLAVA ☆95 · Updated last year
- [TMLR] Public code repo for the paper "A Single Transformer for Scalable Vision-Language Modeling" ☆146 · Updated 9 months ago
- [CVPR 2024] CapsFusion: Rethinking Image-Text Data at Scale ☆212 · Updated last year
- [CVPR 2024] Prompt Highlighter: Interactive Control for Multi-Modal LLMs ☆151 · Updated last year
- [ICLR 2025] Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want ☆85 · Updated 2 months ago
- [NeurIPS 2024] Efficient Large Multi-modal Models via Visual Context Compression ☆62 · Updated 6 months ago
- [ECCV 2024] Official code implementation of Merlin: Empowering Multimodal LLMs with Foresight Minds ☆94 · Updated last year
- https://huggingface.co/datasets/multimodal-reasoning-lab/Zebra-CoT ☆67 · Updated 2 weeks ago
- Code for the paper "Towards Open-Ended Visual Recognition with Large Language Model" ☆98 · Updated last year
- [ICLR 2025] IDA-VLM: Towards Movie Understanding via ID-Aware Large Vision-Language Model ☆33 · Updated 8 months ago
- Implementation for "The Scalability of Simplicity: Empirical Analysis of Vision-Language Learning with a Single Transformer" ☆61 · Updated last month
- Official implementation of the Law of Vision Representation in MLLMs ☆163 · Updated 9 months ago
- [ICLR 2025] AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark ☆122 · Updated 2 months ago
- SVIT: Scaling up Visual Instruction Tuning ☆164 · Updated last year
- ACL'24 (Oral) Tuning Large Multimodal Models for Videos using Reinforcement Learning from AI Feedback ☆72 · Updated 11 months ago