The official repository of "Video assistant towards large language model makes everything easy"
☆232 · Dec 24, 2024 · Updated last year
Alternatives and similar repositories for Valley
Users interested in Valley are comparing it to the libraries listed below.
- [ACL 2024 🔥] Video-ChatGPT is a video conversation model capable of generating meaningful conversation about videos. It combines the cap… ☆1,492 · Aug 5, 2025 · Updated 6 months ago
- [CVPR 2024] TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding ☆409 · May 8, 2025 · Updated 9 months ago
- LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models (ECCV 2024) ☆859 · Jul 29, 2024 · Updated last year
- [EMNLP 2023 Demo] Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding ☆3,124 · Jun 4, 2024 · Updated last year
- A lightweight, flexible Video-MLLM developed by the Tencent QQ Multimedia Research Team. ☆74 · Oct 14, 2024 · Updated last year
- [CVPR 2024 Highlight] [VideoChatGPT] ChatGPT with video understanding! And many more supported LMs such as miniGPT4, StableLM, and MOSS. ☆3,335 · Jan 18, 2025 · Updated last year
- SVIT: Scaling up Visual Instruction Tuning ☆166 · Jun 20, 2024 · Updated last year
- A Versatile Video-LLM for Long and Short Video Understanding with Superior Temporal Localization Ability ☆106 · Nov 28, 2024 · Updated last year
- [CVPR 2024 Highlight] Official PyTorch implementation of the paper "VTimeLLM: Empower LLM to Grasp Video Moments". ☆294 · Jun 13, 2024 · Updated last year
- Official repository for the paper PLLaVA ☆676 · Jul 28, 2024 · Updated last year
- [EMNLP 2024 🔥] Video-LLaVA: Learning United Visual Representation by Alignment Before Projection ☆3,450 · Dec 3, 2024 · Updated last year
- Long Context Transfer from Language to Vision ☆402 · Mar 18, 2025 · Updated 11 months ago
- FreeVA: Offline MLLM as Training-Free Video Assistant ☆69 · Jun 9, 2024 · Updated last year
- A Comprehensive Benchmark and Toolkit for Evaluating Video-based Large Language Models! ☆137 · Dec 31, 2023 · Updated 2 years ago
- A new multi-shot video understanding benchmark, Shot2Story, with comprehensive video summaries and detailed shot-level captions. ☆168 · Jan 30, 2025 · Updated last year
- [CVPR 2024] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding ☆686 · Jan 29, 2025 · Updated last year
- [ECCV 2024 🔥] Official implementation of the paper "ST-LLM: Large Language Models Are Effective Temporal Learners" ☆151 · Sep 10, 2024 · Updated last year
- mPLUG-Owl: The Powerful Multi-modal Large Language Model Family ☆2,539 · Apr 2, 2025 · Updated 11 months ago
- [CVPR 2024 Highlight 🔥] Chat-UniVi: Unified Visual Representation Empowers Large Language Models with Image and Video Understanding ☆944 · Oct 16, 2024 · Updated last year
- LaVIT: Empower the Large Language Model to Understand and Generate Visual Content ☆603 · Oct 6, 2024 · Updated last year
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model ☆281 · Jun 25, 2024 · Updated last year
- ✨✨ [CVPR 2025] Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis ☆731 · Dec 8, 2025 · Updated 2 months ago
- FunQA benchmarks funny, creative, and magic videos for challenging tasks including timestamp localization, video description, reasoning, … ☆104 · Dec 25, 2025 · Updated 2 months ago
- [ECCV 2024] Video Foundation Models & Data for Multimodal Understanding ☆2,201 · Dec 15, 2025 · Updated 2 months ago
- [ICLR 2024 🔥] Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment ☆870 · Mar 25, 2024 · Updated last year
- [IJCV] EgoPlan-Bench: Benchmarking Multimodal Large Language Models for Human-Level Planning ☆81 · Dec 6, 2024 · Updated last year
- VideoNIAH: A Flexible Synthetic Method for Benchmarking Video MLLMs ☆54 · Mar 9, 2025 · Updated 11 months ago
- Official implementation of Kangaroo: A Powerful Video-Language Model Supporting Long-context Video Input ☆67 · Aug 30, 2024 · Updated last year
- [CVPR 2024] A benchmark for evaluating Multimodal LLMs using multiple-choice questions. ☆360 · Jan 14, 2025 · Updated last year
- [AAAI 2025] VTG-LLM: Integrating Timestamp Knowledge into Video LLMs for Enhanced Video Temporal Grounding ☆126 · Dec 10, 2024 · Updated last year
- PG-Video-LLaVA: Pixel Grounding in Large Multimodal Video Models ☆262 · Aug 5, 2025 · Updated 6 months ago
- 🔥🔥🔥 [IEEE TCSVT] Latest Papers, Codes and Datasets on Vid-LLMs. ☆3,087 · Dec 20, 2025 · Updated 2 months ago
- [ACL 2024] PCA-Bench: Evaluating Multimodal Large Language Models in Perception-Cognition-Action Chain ☆106 · Mar 14, 2024 · Updated last year
- Official implementation of ICCV 2025 "Flash-VStream: Efficient Real-Time Understanding for Long Video Streams" ☆271 · Oct 15, 2025 · Updated 4 months ago