quangminhdinh / TrafficVLM
[CVPRW 2024] TrafficVLM: A Controllable Visual Language Model for Traffic Video Captioning. Official code for the 3rd place solution of the AI City Challenge 2024 Track 2.
☆42 · Updated 6 months ago
Alternatives and similar repositories for TrafficVLM
Users that are interested in TrafficVLM are comparing it to the libraries listed below
- ☆41 · Updated 2 months ago
- ☆112 · Updated 4 months ago
- [ICCV 2023] Tem-adapter: Adapting Image-Text Pretraining for Video Question Answer · ☆37 · Updated last year
- [CVPR 2024 Highlight] The official repo for the paper "Abductive Ego-View Accident Video Understanding for Safe Driving Perception" · ☆57 · Updated 4 months ago
- [CVPR 2024] Code for HiKER-SGG: Hierarchical Knowledge Enhanced Robust Scene Graph Generation · ☆72 · Updated 10 months ago
- [CVPR 2023] Benchmarking Panoptic Video Scene Graph Generation (PVSG) · ☆96 · Updated last year
- [ECCV 2024] Elysium: Exploring Object-level Perception in Videos via MLLM · ☆81 · Updated 9 months ago
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context · ☆165 · Updated 10 months ago
- TinyLLaVA-Video-R1: Towards Smaller LMMs for Video Reasoning · ☆92 · Updated 3 months ago
- [ECCV 2024🔥] Official implementation of the paper "ST-LLM: Large Language Models Are Effective Temporal Learners" · ☆150 · Updated 11 months ago
- Grounded-VideoLLM: Sharpening Fine-grained Temporal Grounding in Video Large Language Models · ☆119 · Updated 4 months ago
- ☆99 · Updated last year
- A suite for modeling video with Mamba · ☆280 · Updated last year
- 🤖 [ICLR 2025] Multimodal Video Understanding Framework (MVU) · ☆45 · Updated 6 months ago
- ☆86 · Updated last year
- [CVPR 2024] MA-LMM: Memory-Augmented Large Multimodal Model for Long-Term Video Understanding · ☆324 · Updated last year
- ☆51 · Updated last year
- 🚀 [AAAI 2025] Cross-View Referring Multi-Object Tracking · ☆61 · Updated last month
- [CVPR 2024] Official PyTorch implementation of the paper "One For All: Video Conversation is Feasible Without Video Instruction Tuning" · ☆34 · Updated last year
- [AAAI 2025] AL-Ref-SAM 2: Unleashing the Temporal-Spatial Reasoning Capacity of GPT for Training-Free Audio and Language Referenced Video… · ☆85 · Updated 7 months ago
- Improving Mamba performance on the video understanding task · ☆39 · Updated 10 months ago
- Automatically updates arXiv papers about SOT & VLT, multi-modal learning, LLMs, and video understanding using GitHub Actions · ☆35 · Updated this week
- [NeurIPS 2024 Spotlight] MECD: Unlocking Multi-Event Causal Discovery in Video Reasoning · ☆37 · Updated last month
- [CVPR 2025] LLaVA-ST: A Multimodal Large Language Model for Fine-Grained Spatial-Temporal Understanding · ☆58 · Updated last month
- [ECCV 2024] Official code implementation of Merlin: Empowering Multimodal LLMs with Foresight Minds · ☆94 · Updated last year
- A Versatile Video-LLM for Long and Short Video Understanding with Superior Temporal Localization Ability · ☆97 · Updated 8 months ago
- [AAAI 2024] GroundVLP: Harnessing Zero-shot Visual Grounding from Vision-Language Pre-training and Open-Vocabulary Object Detection · ☆70 · Updated last year
- FreeVA: Offline MLLM as Training-Free Video Assistant · ☆63 · Updated last year
- [CVPR 2024] Official implementation of the paper "Retrieval-Augmented Open-Vocabulary Object Detection" · ☆42 · Updated 11 months ago
- Foundation Models for Video Understanding: A Survey · ☆130 · Updated last month