showlab / videollm-online
VideoLLM-online: Online Video Large Language Model for Streaming Video (CVPR 2024)
☆450 · Updated this week
Alternatives and similar repositories for videollm-online:
Users interested in videollm-online are comparing it to the repositories listed below.
- [NeurIPS 2024] Official code for HourVideo: 1-Hour Video Language Understanding ☆146 · Updated last month
- [NeurIPS 2024] Official implementation of ShareGPT4Video: Improving Video Understanding and Generation with Better Captions ☆1,053 · Updated 6 months ago
- Official implementation of "Lyra: An Efficient and Speech-Centric Framework for Omni-Cognition" ☆284 · Updated 3 months ago
- [ECCV 2024] Grounded Multimodal Large Language Model with Localized Visual Tokenization ☆563 · Updated 10 months ago
- SAM2Long: Enhancing SAM 2 for Long Video Segmentation with a Training-Free Memory Tree ☆464 · Updated 4 months ago
- Real-time and accurate open-vocabulary end-to-end object detection ☆1,313 · Updated 4 months ago
- [ICLR 2025] MLLM for On-Demand Spatial-Temporal Understanding at Arbitrary Resolution ☆302 · Updated 2 months ago
- An open-source implementation for training LLaVA-NeXT ☆392 · Updated 6 months ago
- Ola: Pushing the Frontiers of Omni-Modal Language Model ☆334 · Updated last month
- Mulberry, an o1-like Reasoning and Reflection MLLM Implemented via Collective MCTS ☆1,174 · Updated last month
- Eagle Family: Exploring Model Designs, Data Recipes and Training Strategies for Frontier-Class Multimodal LLMs ☆744 · Updated this week
- [NeurIPS 2024] Hawk: Learning to Understand Open-World Video Anomalies ☆194 · Updated 2 weeks ago
- [ICLR 2024] Official codebase for "InstructCV: Instruction-Tuned Text-to-Image Diffusion Models as Vision Generalists" ☆462 · Updated last year
- Official implementation of VideoRoPE: What Makes for Good Video Rotary Position Embedding? ☆127 · Updated 3 weeks ago
- R1-VL: Learning to Reason with Multimodal Large Language Models via Step-wise Group Relative Policy Optimization ☆260 · Updated last week
- OMG-LLaVA and OMG-Seg codebase [CVPR 2024 and NeurIPS 2024] ☆1,278 · Updated 4 months ago
- [CVPR 2024] Aligning and Prompting Everything All at Once for Universal Visual Perception ☆563 · Updated 11 months ago
- lmms-eval: accelerating the development of large multimodal models (LMMs) with a one-click evaluation module ☆2,390 · Updated this week
- [AAAI 2024] BLIVA: A Simple Multimodal LLM for Better Handling of Text-rich Visual Questions ☆257 · Updated last year
- [NeurIPS 2024] Matryoshka Query Transformer for Large Vision-Language Models ☆104 · Updated 9 months ago
- Long-VITA: Scaling Large Multi-modal Models to 1 Million Tokens with Leading Short-Context Accuracy ☆275 · Updated last month
- Sa2VA: Marrying SAM2 with LLaVA for Dense Grounded Understanding of Images and Videos ☆1,060 · Updated last week
- Video-R1: Reinforcing Video Reasoning in MLLMs (the first paper to explore R1 for video) ☆469 · Updated last week
- [CVPR 2025] The First Investigation of CoT Reasoning in Image Generation ☆635 · Updated 3 weeks ago
- ☆331 · Updated last year
- [CVPR 2025] TSP3D: Text-guided Sparse Voxel Pruning for Efficient 3D Visual Grounding ☆181 · Updated last month
- ☆294 · Updated last week
- Code for "Uni-MoE: Scaling Unified Multimodal Models with Mixture of Experts" ☆718 · Updated 2 weeks ago
- [CVPR 2025] Code for "VideoRefer Suite: Advancing Spatial-Temporal Object Understanding with Video LLM" ☆190 · Updated 3 weeks ago
- Official code for VideoAgent: A Memory-augmented Multimodal Agent for Video Understanding (ECCV 2024) ☆195 · Updated 4 months ago