Hon-Wong / Elysium
[ECCV 2024] Elysium: Exploring Object-level Perception in Videos via MLLM
☆70 · Updated 5 months ago
Alternatives and similar repositories for Elysium:
Users interested in Elysium are comparing it to the repositories listed below.
- Grounded-VideoLLM: Sharpening Fine-grained Temporal Grounding in Video Large Language Models ☆91 · Updated this week
- [CVPR 2025] LLaVA-ST: A Multimodal Large Language Model for Fine-Grained Spatial-Temporal Understanding ☆33 · Updated 3 weeks ago
- A Versatile Video-LLM for Long and Short Video Understanding with Superior Temporal Localization Ability ☆90 · Updated 3 months ago
- [CVPR 2025] Adaptive Keyframe Sampling for Long Video Understanding ☆39 · Updated this week
- [ECCV 2024] ControlCap: Controllable Region-level Captioning ☆73 · Updated 5 months ago
- [CVPR 2025] VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection ☆63 · Updated 3 weeks ago
- [NeurIPS 2024] One Token to Seg Them All: Language Instructed Reasoning Segmentation in Videos ☆109 · Updated 2 months ago
- [ECCV 2024] Official code implementation of Merlin: Empowering Multimodal LLMs with Foresight Minds ☆90 · Updated 8 months ago
- FreeVA: Offline MLLM as Training-Free Video Assistant ☆57 · Updated 9 months ago
- [CVPR 2025] Number it: Temporal Grounding Videos like Flipping Manga ☆67 · Updated this week
- [CVPR 2025] Code Release of F-LMM: Grounding Frozen Large Multimodal Models ☆76 · Updated 7 months ago
- [CVPR 2024] Context-Guided Spatio-Temporal Video Grounding ☆51 · Updated 8 months ago
- The official repository for the paper "PruneVid: Visual Token Pruning for Efficient Video Large Language Models" ☆34 · Updated last month
- Code for the paper "VideoTree: Adaptive Tree-based Video Representation for LLM Reasoning on Long Videos" ☆100 · Updated 3 weeks ago
- [ECCV 2024] PartGLEE: A Foundation Model for Recognizing and Parsing Any Objects ☆41 · Updated 6 months ago
- 👾 E.T. Bench: Towards Open-Ended Event-Level Video-Language Understanding (NeurIPS 2024) ☆56 · Updated 2 months ago
- Pink: Unveiling the Power of Referential Comprehension for Multi-modal LLMs ☆90 · Updated 2 months ago
- Official PyTorch code of GroundVQA (CVPR 2024) ☆56 · Updated 6 months ago
- [ICLR 2025] Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want ☆67 · Updated last month
- [ICLR 2025] Reconstructive Visual Instruction Tuning ☆73 · Updated 3 weeks ago
- The official repo for ByteVideoLLM/Dynamic-VLM ☆20 · Updated 3 months ago
- [NeurIPS 2024] Dense Connector for MLLMs ☆157 · Updated 5 months ago
- Evaluation code for Ref-L4, a new REC benchmark in the LMM era ☆29 · Updated 2 months ago
- [ACL 2024 Findings] "TempCompass: Do Video LLMs Really Understand Videos?", Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, … ☆107 · Updated last month
- [ACL 2024 Oral] Tuning Large Multimodal Models for Videos using Reinforcement Learning from AI Feedback ☆63 · Updated 6 months ago