yuanc3 / DATE
Two lines of code that add absolute time awareness to Qwen2.5-VL's MRoPE
☆27 · Updated 4 months ago
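The description above points to a small change that makes Qwen2.5-VL's MRoPE temporal axis track absolute time rather than frame order. As a minimal, hypothetical sketch of that idea (not DATE's actual two lines; the function name and the `tokens_per_second` constant are illustrative assumptions), temporal position ids can be derived from each frame's timestamp in seconds:

```python
# Hypothetical sketch, not taken from the DATE repository: derive MRoPE temporal
# position ids from absolute frame timestamps, so the same wall-clock moment maps
# to the same temporal position regardless of the frame sampling rate.
def temporal_position_ids(frame_timestamps_sec, tokens_per_second=2):
    """Map absolute frame timestamps (seconds) to integer temporal position ids."""
    return [round(t * tokens_per_second) for t in frame_timestamps_sec]

# Frames sampled unevenly at 0.0 s, 0.5 s, 1.0 s, and 4.0 s keep their temporal gaps:
print(temporal_position_ids([0.0, 0.5, 1.0, 4.0]))  # [0, 1, 2, 8]
```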
Alternatives and similar repositories for DATE
Users interested in DATE are comparing it to the repositories listed below
- ☆132 · Updated 10 months ago
- [CVPR 2025] Adaptive Keyframe Sampling for Long Video Understanding ☆175 · Updated last month
- TinyLLaVA-Video-R1: Towards Smaller LMMs for Video Reasoning ☆114 · Updated last month
- 🔥 Awesome Multimodal Large Language Models Paper List ☆154 · Updated 10 months ago
- [CVPR 2025] RAP: Retrieval-Augmented Personalization ☆79 · Updated 2 months ago
- Official implementation of paper ReTaKe: Reducing Temporal and Knowledge Redundancy for Long Video Understanding ☆39 · Updated 10 months ago
- [NeurIPS 2025 Spotlight] Think or Not Think: A Study of Explicit Thinking in Rule-Based Visual Reinforcement Fine-Tuning ☆78 · Updated 4 months ago
- [CVPR 2025 Oral] VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection ☆134 · Updated 6 months ago
- Reinforcement Learning Tuning for VideoLLMs: Reward Design and Data Efficiency ☆60 · Updated 8 months ago
- The Next Step Forward in Multimodal LLM Alignment ☆197 · Updated 9 months ago
- [ICCV 2025] Official implementation of LLaVA-KD: A Framework of Distilling Multimodal Large Language Models ☆124 · Updated 3 months ago
- [CVPRW 2025] UniToken is an auto-regressive generation model that combines discrete and continuous representations to process visual inpu… ☆105 · Updated 9 months ago
- [ECCV 2024] ShareGPT4V: Improving Large Multi-modal Models with Better Captions ☆248 · Updated last year
- ☆107 · Updated 5 months ago
- [CVPR 2025] LLaVA-ST: A Multimodal Large Language Model for Fine-Grained Spatial-Temporal Understanding ☆81 · Updated 7 months ago
- Latest open-source "Thinking with images" (O3/O4-mini) papers, covering training-free, SFT-based, and RL-enhanced methods for "fine-grain… ☆110 · Updated 5 months ago
- A Simple Framework of Small-scale LMMs for Video Understanding ☆108 · Updated 7 months ago
- [CVPR2025] Number it: Temporal Grounding Videos like Flipping Manga ☆144 · Updated 3 weeks ago
- [NIPS 2025 DB Oral] Official Repository of paper: Envisioning Beyond the Pixels: Benchmarking Reasoning-Informed Visual Editing ☆140 · Updated last week
- [CVPR 2025] Online Video Understanding: OVBench and VideoChat-Online ☆88 · Updated 4 months ago
- 🔥🔥🔥 Latest Papers, Codes and Datasets on Video-LMM Post-Training ☆241 · Updated 2 months ago
- [ICCV25 Highlight] The official implementation of the paper "LEGION: Learning to Ground and Explain for Synthetic Image Detection" ☆74 · Updated 3 months ago
- ☆84 · Updated 9 months ago
- [ICLR 2025] TRACE: Temporal Grounding Video LLM via Causal Event Modeling ☆143 · Updated 5 months ago
- Official code for NeurIPS 2025 paper "GRIT: Teaching MLLMs to Think with Images" ☆173 · Updated 3 weeks ago
- The official code of "Thinking With Videos: Multimodal Tool-Augmented Reinforcement Learning for Long Video Reasoning" ☆80 · Updated 3 months ago
- ✨✨ [ICLR 2026] MME-Unify: A Comprehensive Benchmark for Unified Multimodal Understanding and Generation Models ☆43 · Updated 10 months ago
- TokLIP: Marry Visual Tokens to CLIP for Multimodal Comprehension and Generation ☆236 · Updated 5 months ago
- [NIPS2025] VideoChat-R1 & R1.5: Enhancing Spatio-Temporal Perception and Reasoning via Reinforcement Fine-Tuning ☆256 · Updated 3 months ago
- What Is a Good Caption? A Comprehensive Visual Caption Benchmark for Evaluating Both Correctness and Thoroughness ☆26 · Updated 8 months ago