yliu-cs / PiTe
[ECCV'24 Oral] PiTe: Pixel-Temporal Alignment for Large Video-Language Model
☆17 · Updated 8 months ago
Alternatives and similar repositories for PiTe
Users interested in PiTe are comparing it to the repositories listed below.
- TEMPURA enables video-language models to reason about causal event relationships and generate fine-grained, timestamped descriptions of u… ☆23 · Updated 4 months ago
- [ECCV 2024] Learning Video Context as Interleaved Multimodal Sequences ☆40 · Updated 7 months ago
- Official InfiniBench: A Benchmark for Large Multi-Modal Models in Long-Form Movies and TV Shows ☆18 · Updated 2 months ago
- (ICCV 2025) Official repository of the paper "ViSpeak: Visual Instruction Feedback in Streaming Videos" ☆40 · Updated 3 months ago
- ☆33 · Updated 6 months ago
- On Path to Multimodal Generalist: General-Level and General-Bench ☆19 · Updated 3 months ago
- ☆28 · Updated 4 months ago
- High-Resolution Visual Reasoning via Multi-Turn Grounding-Based Reinforcement Learning ☆50 · Updated 3 months ago
- UnifiedMLLM: Enabling Unified Representation for Multi-modal Multi-tasks With Large Language Model ☆22 · Updated last year
- [NeurIPS 2025] The official repository of "Inst-IT: Boosting Multimodal Instance Understanding via Explicit Visual Prompt Instruction Tun… ☆37 · Updated 8 months ago
- Text-Only Data Synthesis for Vision Language Model Training ☆22 · Updated 4 months ago
- This repo contains evaluation code for the paper "AV-Odyssey: Can Your Multimodal LLMs Really Understand Audio-Visual Information?" ☆30 · Updated 10 months ago
- [EMNLP 2024] Preserving Multi-Modal Capabilities of Pre-trained VLMs for Improving Vision-Linguistic Compositionality ☆19 · Updated last year
- WorldSense: Evaluating Real-world Omnimodal Understanding for Multimodal LLMs ☆31 · Updated last month
- [NeurIPS 2024] TransAgent: Transfer Vision-Language Foundation Models with Heterogeneous Agent Collaboration ☆24 · Updated last year
- Video-Holmes: Can MLLM Think Like Holmes for Complex Video Reasoning? ☆75 · Updated 3 months ago
- A unified framework for controllable caption generation across images, videos, and audio. Supports multi-modal inputs and customizable ca… ☆51 · Updated 3 months ago
- Official repository for LLaVA-Reward (ICCV 2025): Multimodal LLMs as Customized Reward Models for Text-to-Image Generation ☆20 · Updated 3 months ago
- The code for "VISTA: Enhancing Long-Duration and High-Resolution Video Understanding by VIdeo SpatioTemporal Augmentation" [CVPR 2025] ☆19 · Updated 8 months ago
- [ICLR 2025] IDA-VLM: Towards Movie Understanding via ID-Aware Large Vision-Language Model ☆36 · Updated 11 months ago
- [ICCV 2025 Oral] Official implementation of Learning Streaming Video Representation via Multitask Training. ☆60 · Updated last month
- [ICLR 2025] CREMA: Generalizable and Efficient Video-Language Reasoning via Multimodal Modular Fusion ☆53 · Updated 3 months ago
- Code release for the paper "Progress-Aware Video Frame Captioning" (CVPR 2025) ☆18 · Updated 3 months ago
- [NeurIPS 2025] HermesFlow: Seamlessly Closing the Gap in Multimodal Understanding and Generation ☆71 · Updated last month
- Agentic Keyframe Search for Video Question Answering ☆11 · Updated 6 months ago
- [ICCV 2025] Dynamic-VLM ☆25 · Updated 10 months ago
- [CVPR 2025] OmniMMI: A Comprehensive Multi-modal Interaction Benchmark in Streaming Video Contexts ☆19 · Updated 6 months ago
- SFT+RL boosts multimodal reasoning ☆37 · Updated 4 months ago
- Official repo for CAT-V - Caption Anything in Video: Object-centric Dense Video Captioning with Spatiotemporal Multimodal Prompting ☆56 · Updated 3 months ago
- [CVPR 2025] DiscoVLA: Discrepancy Reduction in Vision, Language, and Alignment for Parameter-Efficient Video-Text Retrieval ☆20 · Updated 4 months ago