cg1177 / VideoLLM
VideoLLM: Modeling Video Sequence with Large Language Models
☆154 · Updated last year
Related projects:
- [CVPR 2024] MA-LMM: Memory-Augmented Large Multimodal Model for Long-Term Video Understanding ☆211 · Updated 2 months ago
- [ECCV 2024] Official code implementation of Merlin: Empowering Multimodal LLMs with Foresight Minds ☆80 · Updated 2 months ago
- [NeurIPS 2023 D&B] VidChapters-7M: Video Chapters at Scale ☆171 · Updated 10 months ago
- PG-Video-LLaVA: Pixel Grounding in Large Multimodal Video Models ☆235 · Updated 8 months ago
- [NeurIPS 2023] Official implementation of the paper "Large Language Models are Visual Reasoning Coordinators" ☆99 · Updated 10 months ago
- Official implementation for "A Simple LLM Framework for Long-Range Video Question-Answering" ☆81 · Updated 6 months ago
- Official implementation of the paper "Interfacing Foundation Models' Embeddings" ☆107 · Updated last month
- PyTorch code for the paper "From CLIP to DINO: Visual Encoders Shout in Multi-modal Large Language Models" ☆181 · Updated 8 months ago
- [ICCV 2023] RLIPv2: Fast Scaling of Relational Language-Image Pre-training ☆112 · Updated 3 months ago
- [COLM 2024] List Items One by One: A New Data Source and Learning Paradigm for Multimodal LLMs ☆112 · Updated 3 weeks ago
- [ICML 2024] MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities ☆252 · Updated 3 weeks ago
- Implementation of PALI3 from the paper "PaLI-3 Vision Language Models: Smaller, Faster, Stronger" ☆138 · Updated last week
- Official repository of the paper "VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding" ☆188 · Updated last month
- Multi-model video-to-text by combining embeddings from Flan-T5 + CLIP + Whisper + SceneGraph. The 'backbone LLM' is pre-trained from scra… ☆49 · Updated last year
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ☆275 · Updated 2 months ago
- LLaVA-HR: High-Resolution Large Language-Vision Assistant ☆202 · Updated last month
- [CVPR 2024 Highlight] Official PyTorch implementation of the paper "VTimeLLM: Empower LLM to Grasp Video Moments" ☆205 · Updated 3 months ago
- ✨✨ Beyond LLaVA-HD: Diving into High-Resolution Large Multimodal Models ☆128 · Updated last month
- Contextual Object Detection with Multimodal Large Language Models ☆182 · Updated last year
- Official repo for StableLLAVA ☆90 · Updated 8 months ago
- Code for the paper "VideoTree: Adaptive Tree-based Video Representation for LLM Reasoning on Long Videos" ☆68 · Updated last month
- [CVPR 2024 Champions] Solutions for the EgoVis Challenges at CVPR 2024 ☆100 · Updated 2 months ago
- [CVPR 2024] Official PyTorch implementation of the paper "One For All: Video Conversation is Feasible Without Video Instruction Tuning" ☆24 · Updated 7 months ago
- [ECCV 2024] Official code of "VideoAgent: A Memory-augmented Multimodal Agent for Video Understanding" ☆101 · Updated last week
- [CVPR 2024] Prompt Highlighter: Interactive Control for Multi-Modal LLMs ☆121 · Updated last month