JiuTian-VL / LION-FS
[CVPR 2025] LION-FS: Fast & Slow Video-Language Thinker as Online Video Assistant
☆21 · Updated 5 months ago
Alternatives and similar repositories for LION-FS
Users interested in LION-FS are comparing it to the repositories listed below.
- Official repo for "Streaming Video Understanding and Multi-round Interaction with Memory-enhanced Knowledge" (ICLR 2025) ☆86 · Updated 8 months ago
- [CVPR'24 Highlight] The official code and data for the paper "EgoThink: Evaluating First-Person Perspective Thinking Capability of Vision-Lan… ☆63 · Updated 8 months ago
- Official repository of DoraemonGPT: Toward Understanding Dynamic Scenes with Large Language Models ☆88 · Updated last year
- Official implementation of ISR-DPO: Aligning Large Multimodal Models for Videos by Iterative Self-Retrospective DPO (AAAI'25) ☆23 · Updated this week
- Official code for MotionBench (CVPR 2025) ☆59 · Updated 8 months ago
- [CVPR 2024] Data and benchmark code for the EgoExoLearn dataset ☆74 · Updated 3 months ago
- ☆39 · Updated 2 months ago
- TStar is a unified temporal search framework for long-form video question answering ☆71 · Updated 2 months ago
- [CVPR 2025] Official PyTorch implementation of GLUS: Global-Local Reasoning Unified into A Single Large Language Model for Video Segmenta… ☆62 · Updated 5 months ago
- [CVPR 2025] OVO-Bench: How Far is Your Video-LLMs from Real-World Online Video Understanding? ☆103 · Updated 4 months ago
- ☆100 · Updated 3 weeks ago
- Ego-R1: Chain-of-Tool-Thought for Ultra-Long Egocentric Video Reasoning ☆130 · Updated 3 months ago
- [CVPR'25] 🌟🌟 EgoTextVQA: Towards Egocentric Scene-Text Aware Video Question Answering ☆41 · Updated 5 months ago
- SpaceR: The first MLLM empowered by SG-RLVR for video spatial reasoning ☆98 · Updated 4 months ago
- ACL'24 (Oral) Tuning Large Multimodal Models for Videos using Reinforcement Learning from AI Feedback ☆76 · Updated last year
- Egocentric Video Understanding Dataset (EVUD) ☆32 · Updated last year
- [ECCV 2024, Oral, Best Paper Finalist] This is the official implementation of the paper "LEGO: Learning EGOcentric Action Frame Generation… ☆39 · Updated 9 months ago
- ☆60 · Updated 3 weeks ago
- [ECCV 2024] OpenPSG: Open-set Panoptic Scene Graph Generation via Large Multimodal Models ☆49 · Updated 10 months ago
- Can 3D Vision-Language Models Truly Understand Natural Language? ☆20 · Updated last year
- ☆26 · Updated 7 months ago
- [NeurIPS 2025] EgoVid-5M: A Large-Scale Video-Action Dataset for Egocentric Video Generation ☆122 · Updated 4 months ago
- Official PyTorch code of ReKV (ICLR'25) ☆69 · Updated 3 weeks ago
- [ICCV'25] Ross3D: Reconstructive Visual Instruction Tuning with 3D-Awareness ☆62 · Updated 4 months ago
- [ICLR 2025] Official implementation and benchmark evaluation repository of <PhysBench: Benchmarking and Enhancing Vision-Language Models … ☆78 · Updated 5 months ago
- [CVPR 2025 Oral] VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection ☆128 · Updated 4 months ago
- STI-Bench: Are MLLMs Ready for Precise Spatial-Temporal World Understanding? ☆33 · Updated 4 months ago
- Video-Panda: Parameter-efficient Alignment for Encoder-free Video-Language Models [CVPR 2025] ☆75 · Updated 5 months ago
- TemporalBench: Benchmarking Fine-grained Temporal Understanding for Multimodal Video Models ☆37 · Updated last year
- OmniSpatial: Towards Comprehensive Spatial Reasoning Benchmark for Vision Language Models ☆72 · Updated 2 months ago