zhengrongz / AoTD
[CVPR 2025] Official PyTorch code of "Enhancing Video-LLM Reasoning via Agent-of-Thoughts Distillation".
☆33 · Updated last month
Alternatives and similar repositories for AoTD
Users interested in AoTD are comparing it to the repositories listed below.
- [AAAI 2025] Grounded Multi-Hop VideoQA in Long-Form Egocentric Videos ☆24 · Updated 3 weeks ago
- Official PyTorch code of ReKV (ICLR'25) ☆28 · Updated 3 months ago
- Official PyTorch code of GroundVQA (CVPR'24) ☆61 · Updated 9 months ago
- ☆24 · Updated 2 months ago
- Reinforcement Learning Tuning for VideoLLMs: Reward Design and Data Efficiency ☆42 · Updated 2 weeks ago
- Can I Trust Your Answer? Visually Grounded Video Question Answering (CVPR'24, Highlight) ☆74 · Updated 11 months ago
- ☆31 · Updated 9 months ago
- [ICLR 2025] TimeSuite: Improving MLLMs for Long Video Understanding via Grounded Tuning ☆33 · Updated 2 months ago
- Official implementation of HawkEye: Training Video-Text LLMs for Grounding Text in Videos ☆43 · Updated last year
- [CVPR 2024] Context-Guided Spatio-Temporal Video Grounding ☆55 · Updated 11 months ago
- TimeChat-Online: 80% Visual Tokens are Naturally Redundant in Streaming Videos ☆51 · Updated this week
- The official repository for the ACL 2025 paper "PruneVid: Visual Token Pruning for Efficient Video Large Language Models" ☆46 · Updated last month
- ☆33 · Updated 5 months ago
- [CVPR 2025] Adaptive Keyframe Sampling for Long Video Understanding ☆73 · Updated 2 months ago
- (NeurIPS 2024 Spotlight) TOPA: Extending Large Language Models for Video Understanding via Text-Only Pre-Alignment ☆31 · Updated 8 months ago
- ☆17 · Updated 3 weeks ago
- [CVPR 2025 Oral] VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection ☆87 · Updated 2 weeks ago
- ☆93 · Updated 5 months ago
- [ECCV 2024] EgoCVR: An Egocentric Benchmark for Fine-Grained Composed Video Retrieval ☆38 · Updated 2 months ago
- PyTorch code for "Contrastive Region Guidance: Improving Grounding in Vision-Language Models without Training" ☆34 · Updated last year
- [LLaVA-Video-R1] ✨ First adaptation of R1 to LLaVA-Video (2025-03-18) ☆29 · Updated last month
- Latest Advances on (RL-based) Multimodal Reasoning and Generation in Multimodal Large Language Models ☆29 · Updated last week
- Official repository of DoraemonGPT: Toward Understanding Dynamic Scenes with Large Language Models ☆85 · Updated 9 months ago
- [NeurIPS 2024] One Token to Seg Them All: Language Instructed Reasoning Segmentation in Videos ☆122 · Updated 5 months ago
- [CVPR 2025 🔥] A Large Multimodal Model for Pixel-Level Visual Grounding in Videos ☆73 · Updated 2 months ago
- Code for the CVPR 2025 paper "VideoTree: Adaptive Tree-based Video Representation for LLM Reasoning on Long Videos" ☆119 · Updated 3 months ago
- [CVPR 2024] Retrieval-Augmented Image Captioning with External Visual-Name Memory for Open-World Comprehension ☆54 · Updated last year
- LongVALE: Vision-Audio-Language-Event Benchmark Towards Time-Aware Omni-Modal Perception of Long Videos (CVPR 2025) ☆34 · Updated 2 weeks ago
- 👾 E.T. Bench: Towards Open-Ended Event-Level Video-Language Understanding (NeurIPS 2024) ☆59 · Updated 5 months ago
- [CVPR 2025] Number It: Temporal Grounding Videos like Flipping Manga ☆92 · Updated 2 months ago