☆193 · Updated Oct 14, 2024
Alternatives and similar repositories for LITA
Users interested in LITA are comparing it to the repositories listed below.
- A Versatile Video-LLM for Long and Short Video Understanding with Superior Temporal Localization Ability (☆106, updated Nov 28, 2024)
- [CVPR 2024 Highlight] Official PyTorch implementation of the paper "VTimeLLM: Empower LLM to Grasp Video Moments" (☆294, updated Jun 13, 2024)
- Official implementation of HawkEye: Training Video-Text LLMs for Grounding Text in Videos (☆46, updated Apr 29, 2024)
- [AAAI 2025] VTG-LLM: Integrating Timestamp Knowledge into Video LLMs for Enhanced Video Temporal Grounding (☆126, updated Dec 10, 2024)
- [EMNLP 2025 Findings] Grounded-VideoLLM: Sharpening Fine-grained Temporal Grounding in Video Large Language Models (☆138, updated Aug 21, 2025)
- [ACL 2024 Findings] "TempCompass: Do Video LLMs Really Understand Videos?", Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, … (☆129, updated Apr 4, 2025)
- ☆80, updated Nov 24, 2024
- Long Context Transfer from Language to Vision (☆402, updated Mar 18, 2025)
- [ICLR 2026] VideoChat-Flash: Hierarchical Compression for Long-Context Video Modeling (☆511, updated Nov 18, 2025)
- Unifying Specialized Visual Encoders for Video Language Models (☆25, updated Nov 22, 2025)
- ☆32, updated Jul 29, 2024
- [CVPR 2024] TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding (☆410, updated May 8, 2025)
- [CVPR 2024] Realigning Confidence with Temporal Saliency Information for Point-level Weakly-Supervised Temporal Action Localization (☆20, updated Jun 11, 2024)
- Official repository for the paper PLLaVA (☆676, updated Jul 28, 2024)
- A suite for modeling video with Mamba (☆290, updated May 14, 2024)
- [ICLR 2025] TRACE: Temporal Grounding Video LLM via Causal Event Modeling (☆145, updated Aug 22, 2025)
- LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models (ECCV 2024) (☆859, updated Jul 29, 2024)
- Data release for Step Differences in Instructional Video (CVPR 2024) (☆14, updated Jun 19, 2024)
- [ECCV 2024] Official implementation of the paper "ST-LLM: Large Language Models Are Effective Temporal Learners" (☆151, updated Sep 10, 2024)
- ☆138, updated Sep 29, 2024
- Code for the CVPR 2025 paper "VideoTree: Adaptive Tree-based Video Representation for LLM Reasoning on Long Videos" (☆154, updated Jun 23, 2025)
- Are Binary Annotations Sufficient? Video Moment Retrieval via Hierarchical Uncertainty-based Active Learning (☆15, updated Dec 12, 2023)
- Egocentric Video Understanding Dataset (EVUD) (☆33, updated Jul 4, 2024)
- ☆41, updated Sep 9, 2025
- [CVPR 2024] MA-LMM: Memory-Augmented Large Multimodal Model for Long-Term Video Understanding (☆347, updated Jul 19, 2024)
- PG-Video-LLaVA: Pixel Grounding in Large Multimodal Video Models (☆262, updated Aug 5, 2025)
- Can I Trust Your Answer? Visually Grounded Video Question Answering (CVPR 2024 Highlight) (☆83, updated Jul 1, 2024)
- Official code for the Goldfish model for long video understanding and MiniGPT4-video for short video understanding (☆641, updated Dec 10, 2024)
- The Curse of Multi-Modalities (CMM): Evaluating Hallucinations of Large Multimodal Models across Language, Visual, and Audio (☆52, updated Jul 11, 2025)
- ☆4,577, updated Sep 14, 2025
- VILA is a family of state-of-the-art vision language models (VLMs) for diverse multimodal AI tasks across the edge, data center, and clou… (☆3,766, updated Nov 28, 2025)
- [ECCV 2024] Video Foundation Models & Data for Multimodal Understanding (☆2,204, updated Dec 15, 2025)
- E.T. Bench: Towards Open-Ended Event-Level Video-Language Understanding (NeurIPS 2024) (☆74, updated Jan 20, 2025)
- [CVPR 2024] Panda-70M: Captioning 70M Videos with Multiple Cross-Modality Teachers (☆674, updated Oct 25, 2024)
- [NeurIPS 2023] Official implementation of the paper "Large Language Models are Visual Reasoning Coordinators" (☆105, updated Nov 9, 2023)
- [CVPR 2024] Context-Guided Spatio-Temporal Video Grounding (☆66, updated Jun 28, 2024)
- VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs (☆1,278, updated Jan 23, 2025)
- VideoMind: A Chain-of-LoRA Agent for Temporal-Grounded Video Reasoning (ICLR 2026) (☆305, updated Feb 8, 2026)
- Official PyTorch code of GroundVQA (CVPR 2024) (☆64, updated Sep 13, 2024)