TencentARC / GVT
Official code for "What Makes for Good Visual Tokenizers for Large Language Models?".
☆58 · Updated 2 years ago
Alternatives and similar repositories for GVT
Users interested in GVT are comparing it to the repositories listed below.
- ☆32 · Updated last year
- ☆133 · Updated 2 years ago
- ☆92 · Updated 2 years ago
- Repository of paper: Position-Enhanced Visual Instruction Tuning for Multimodal Large Language Models ☆37 · Updated 2 years ago
- [NeurIPS-24] This is the official implementation of the paper "DeepStack: Deeply Stacking Visual Tokens is Surprisingly Simple and Effect… ☆77 · Updated last year
- VideoNIAH: A Flexible Synthetic Method for Benchmarking Video MLLMs ☆54 · Updated 10 months ago
- [SCIS 2024] The official implementation of the paper "MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Di… ☆62 · Updated last year
- The official GitHub page for "What Makes for Good Visual Instructions? Synthesizing Complex Visual Reasoning Instructions for Visual Ins… ☆19 · Updated 2 years ago
- Official repo for StableLLAVA ☆95 · Updated 2 years ago
- ☆100 · Updated last year
- ☆155 · Updated last year
- SVIT: Scaling up Visual Instruction Tuning ☆166 · Updated last year
- Code for "Strengthening Multimodal Large Language Model with Bootstrapped Preference Optimization" ☆60 · Updated last year
- Accelerating Vision-Language Pretraining with Free Language Modeling (CVPR 2023) ☆32 · Updated 2 years ago
- ☆80 · Updated last year
- FreeVA: Offline MLLM as Training-Free Video Assistant ☆68 · Updated last year
- [ACL 2024 Findings] "TempCompass: Do Video LLMs Really Understand Videos?", Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, … ☆127 · Updated 9 months ago
- ☆73 · Updated last year
- Official code of *Towards Event-oriented Long Video Understanding* ☆12 · Updated last year
- ☆101 · Updated 2 years ago
- [ECCV 2024] Learning Video Context as Interleaved Multimodal Sequences ☆41 · Updated 10 months ago
- [ICCV2023] Official code for "VL-PET: Vision-and-Language Parameter-Efficient Tuning via Granularity Control" ☆53 · Updated 2 years ago
- A collection of visual instruction tuning datasets. ☆76 · Updated last year
- The codebase for our EMNLP24 paper: Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Mo… ☆85 · Updated 11 months ago
- VL-GPT: A Generative Pre-trained Transformer for Vision and Language Understanding and Generation ☆86 · Updated last year
- Official repository for CoMM Dataset ☆49 · Updated last year
- 🦩 Official repository of paper "Visual Instruction Tuning with Polite Flamingo" (AAAI-24 Oral) ☆65 · Updated 2 years ago
- VideoHallucer: the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs) ☆42 · Updated last month
- [NeurIPS'24] Official PyTorch Implementation of Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment ☆59 · Updated last year
- [NeurIPS2023] Official implementation of the paper "Large Language Models are Visual Reasoning Coordinators" ☆105 · Updated 2 years ago