TIGER-AI-Lab / VISTA
This repo contains code for "VISTA: Enhancing Long-Duration and High-Resolution Video Understanding by VIdeo SpatioTemporal Augmentation"
☆10 · Updated this week
Alternatives and similar repositories for VISTA:
Users interested in VISTA are comparing it to the repositories listed below.
- The released data for the paper "Measuring and Improving Chain-of-Thought Reasoning in Vision-Language Models". ☆32 · Updated last year
- A huge dataset for Document Visual Question Answering ☆14 · Updated 5 months ago
- Official code of *Towards Event-oriented Long Video Understanding* ☆12 · Updated 5 months ago
- ☆26 · Updated 5 months ago
- Code for the paper "Unified Text-to-Image Generation and Retrieval" ☆13 · Updated 6 months ago
- ☆19 · Updated last year
- Sparkles: Unlocking Chats Across Multiple Images for Multimodal Instruction-Following Models ☆43 · Updated 6 months ago
- ☆15 · Updated 5 months ago
- ☆47 · Updated last year
- [NeurIPS-24] This is the official implementation of the paper "DeepStack: Deeply Stacking Visual Tokens is Surprisingly Simple and Effect… ☆35 · Updated 6 months ago
- This is the official repo for ByteVideoLLM/Dynamic-VLM ☆18 · Updated 3 weeks ago
- [2024-ACL]: TextBind: Multi-turn Interleaved Multimodal Instruction-following in the Wild ☆47 · Updated last year
- ☆19 · Updated 2 months ago
- ☆29 · Updated last week
- This repository contains the code and data for the paper "VisOnlyQA: Large Vision Language Models Still Struggle with Visual Perception o… ☆16 · Updated last month
- [ICCV23] Official implementation of eP-ALM: Efficient Perceptual Augmentation of Language Models. ☆27 · Updated last year
- [EMNLP 2024] Official code for "Beyond Embeddings: The Promise of Visual Table in Multi-Modal Models" ☆14 · Updated 2 months ago
- MAmmoTH-VL: Eliciting Multimodal Reasoning with Instruction Tuning at Scale ☆25 · Updated last month
- ☆36 · Updated 2 months ago
- Mixture of Attention Heads ☆41 · Updated 2 years ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆31 · Updated this week
- ☆47 · Updated this week
- Official code for "What Makes for Good Visual Tokenizers for Large Language Models?". ☆56 · Updated last year
- The codebase for our EMNLP24 paper: Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Mo… ☆67 · Updated last month
- Official repo of the paper "MMWorld: Towards Multi-discipline Multi-faceted World Model Evaluation in Videos" ☆23 · Updated 3 months ago
- We introduce a new approach, Token Reduction using CLIP Metric (TRIM), aimed at improving the efficiency of MLLMs without sacrificing their… ☆11 · Updated last month
- Official implementation of our paper "Finetuned Multimodal Language Models are High-Quality Image-Text Data Filters". ☆43 · Updated last week
- Official code for "pi-Tuning: Transferring Multimodal Foundation Models with Optimal Multi-task Interpolation", ICML 2023. ☆32 · Updated last year
- ☆24 · Updated last year
- [ACL 2024] Multi-modal preference alignment remedies regression of visual instruction tuning on language model ☆32 · Updated 2 months ago