Hon-Wong / Elysium
[ECCV 2024] Elysium: Exploring Object-level Perception in Videos via MLLM
☆79 · Updated 9 months ago
Alternatives and similar repositories for Elysium
Users interested in Elysium are comparing it to the repositories listed below.
- A Versatile Video-LLM for Long and Short Video Understanding with Superior Temporal Localization Ability ☆95 · Updated 8 months ago
- [ECCV 2024] ControlCap: Controllable Region-level Captioning ☆78 · Updated 9 months ago
- [CVPR 2025 🔥] A Large Multimodal Model for Pixel-Level Visual Grounding in Videos ☆75 · Updated 3 months ago
- [CVPR 2025 Oral] VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection ☆103 · Updated last week
- FreeVA: Offline MLLM as Training-Free Video Assistant ☆60 · Updated last year
- Grounded-VideoLLM: Sharpening Fine-grained Temporal Grounding in Video Large Language Models ☆118 · Updated 4 months ago
- Official repo for CAT-V - Caption Anything in Video: Object-centric Dense Video Captioning with Spatiotemporal Multimodal Prompting ☆46 · Updated 3 weeks ago
- [CVPR 2025] Adaptive Keyframe Sampling for Long Video Understanding ☆85 · Updated 3 months ago
- 👾 E.T. Bench: Towards Open-Ended Event-Level Video-Language Understanding (NeurIPS 2024) ☆60 · Updated 6 months ago
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context ☆165 · Updated 10 months ago
- The official repository for the ACL 2025 paper "PruneVid: Visual Token Pruning for Efficient Video Large Language Models" ☆50 · Updated 2 months ago
- Official repository of "CoMP: Continual Multimodal Pre-training for Vision Foundation Models" ☆30 · Updated 4 months ago
- ☆97 · Updated last year
- [ICLR 2025] Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want ☆84 · Updated last month
- Evaluation code for Ref-L4, a new REC benchmark in the LMM era ☆39 · Updated 7 months ago
- [ECCV 2024] Official code implementation of Merlin: Empowering Multimodal LLMs with Foresight Minds ☆94 · Updated last year
- [NeurIPS 2024] One Token to Seg Them All: Language Instructed Reasoning Segmentation in Videos ☆123 · Updated 7 months ago
- [CVPR 2025] Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training ☆67 · Updated 2 weeks ago
- [CVPR 2025] Code Release of F-LMM: Grounding Frozen Large Multimodal Models ☆100 · Updated 2 months ago
- Reinforcement Learning Tuning for VideoLLMs: Reward Design and Data Efficiency ☆47 · Updated last month
- Official PyTorch code of GroundVQA (CVPR'24) ☆61 · Updated 10 months ago
- [CVPR 2025] LLaVA-ST: A Multimodal Large Language Model for Fine-Grained Spatial-Temporal Understanding ☆55 · Updated 3 weeks ago
- [CVPR 2025] DynRefer: Delving into Region-level Multimodal Tasks via Dynamic Resolution ☆51 · Updated 5 months ago
- [ICLR 2025] TRACE: Temporal Grounding Video LLM via Causal Event Modeling ☆110 · Updated 2 weeks ago
- ☆138 · Updated 10 months ago
- Code for the CVPR 2025 paper "VideoTree: Adaptive Tree-based Video Representation for LLM Reasoning on Long Videos" ☆131 · Updated last month
- Official code for the paper "GRIT: Teaching MLLMs to Think with Images" ☆114 · Updated this week
- Pink: Unveiling the Power of Referential Comprehension for Multi-modal LLMs ☆91 · Updated 6 months ago
- ☆105 · Updated 3 months ago
- [ECCV 2024] VISA: Reasoning Video Object Segmentation via Large Language Model ☆182 · Updated 11 months ago