HumanMLLM / HumanOmni
⭐158, updated last month
Alternatives and similar repositories for HumanOmni:
Users interested in HumanOmni are comparing it to the libraries listed below:
- 🔥🔥 First-ever hour scale video understanding models (⭐309, updated 2 weeks ago)
- This is the official implementation of "Flash-VStream: Memory-Based Real-Time Understanding for Long Video Streams" (⭐179, updated 4 months ago)
- 💡 VideoMind: A Chain-of-LoRA Agent for Long Video Reasoning (⭐182, updated last week)
- Repository for the MM'23 accepted paper "Curriculum-Listener: Consistency- and Complementarity-Aware Audio-Enhanced Temporal Sentence Grounding…" (⭐49, updated last year)
- ⭐186, updated 9 months ago
- ⭐74, updated last month
- Long Context Transfer from Language to Vision (⭐374, updated last month)
- Official implementation of paper AdaReTaKe: Adaptive Redundancy Reduction to Perceive Longer for Video-language Understanding (⭐51, updated 2 weeks ago)
- ⭐18, updated 3 months ago
- VideoChat-R1: Enhancing Spatio-Temporal Perception via Reinforcement Fine-Tuning (⭐115, updated 2 weeks ago)
- LinVT: Empower Your Image-level Large Language Model to Understand Videos (⭐74, updated 4 months ago)
- [ECCV 2024] ShareGPT4V: Improving Large Multi-modal Models with Better Captions (⭐218, updated 10 months ago)
- The Next Step Forward in Multimodal LLM Alignment (⭐149, updated last week)
- Official Repository of VideoLLaMB: Long Video Understanding with Recurrent Memory Bridges (⭐67, updated 2 months ago)
- This is the official implementation of our paper "Video-RAG: Visually-aligned Retrieval-Augmented Long Video Comprehension" (⭐180, updated 2 months ago)
- ⭐150, updated 3 months ago
- [CVPR 2025] Dispider: Enabling Video LLMs with Active Real-Time Interaction via Disentangled Perception, Decision, and Reaction (⭐104, updated last month)
- VideoChat-Flash: Hierarchical Compression for Long-Context Video Modeling (⭐402, updated this week)
- ⭐176, updated 10 months ago
- Explore the Limits of Omni-modal Pretraining at Scale (⭐97, updated 8 months ago)
- A collection of omni-mllm (⭐26, updated last week)
- Research code for ACL 2024 paper: "Synchronized Video Storytelling: Generating Video Narrations with Structured Storyline" (⭐31, updated 4 months ago)
- ✨ First Open-Source R1-like Video-LLM [2025/02/18] (⭐331, updated 2 months ago)
- [CVPR'2024 Highlight] Official PyTorch implementation of the paper "VTimeLLM: Empower LLM to Grasp Video Moments" (⭐272, updated 10 months ago)
- Migician: Revealing the Magic of Free-Form Multi-Image Grounding in Multimodal Large Language Models (⭐52, updated last month)
- Explainable Multimodal Emotion Reasoning (EMER), Open-vocabulary MER (OV-MER), and AffectGPT (⭐157, updated 2 weeks ago)
- Video-R1: Reinforcing Video Reasoning in MLLMs [🔥 the first paper to explore R1 for video] (⭐489, updated last week)
- ⭐173, updated 3 months ago
- SlowFast-LLaVA: A Strong Training-Free Baseline for Video Large Language Models (⭐217, updated 7 months ago)
- ⭐369, updated 2 months ago