yliu-cs / PiTe
[ECCV'24 Oral] PiTe: Pixel-Temporal Alignment for Large Video-Language Model
☆16 · Updated 5 months ago
Alternatives and similar repositories for PiTe
Users interested in PiTe are comparing it to the repositories listed below.
- ☆20 · Updated last month
- TEMPURA enables video-language models to reason about causal event relationships and generate fine-grained, timestamped descriptions of u… · ☆21 · Updated 2 months ago
- Text-Only Data Synthesis for Vision Language Model Training · ☆21 · Updated 2 months ago
- UnifiedMLLM: Enabling Unified Representation for Multi-modal Multi-tasks With Large Language Model · ☆22 · Updated last year
- Official repository of "Inst-IT: Boosting Multimodal Instance Understanding via Explicit Visual Prompt Instruction Tuning" · ☆35 · Updated 5 months ago
- ☆32 · Updated 4 months ago
- This repo contains evaluation code for the paper "AV-Odyssey: Can Your Multimodal LLMs Really Understand Audio-Visual Information?" · ☆26 · Updated 7 months ago
- On Path to Multimodal Generalist: General-Level and General-Bench · ☆19 · Updated last month
- HermesFlow: Seamlessly Closing the Gap in Multimodal Understanding and Generation · ☆63 · Updated 5 months ago
- The code for "VISTA: Enhancing Long-Duration and High-Resolution Video Understanding by VIdeo SpatioTemporal Augmentation" [CVPR 2025] · ☆19 · Updated 5 months ago
- [ECCV 2024] Learning Video Context as Interleaved Multimodal Sequences · ☆40 · Updated 5 months ago
- WorldSense: Evaluating Real-world Omnimodal Understanding for Multimodal LLMs · ☆27 · Updated 3 months ago
- [ICLR 2025] CREMA: Generalizable and Efficient Video-Language Reasoning via Multimodal Modular Fusion · ☆48 · Updated last month
- [ICCV 2025] Official repository of the paper "ViSpeak: Visual Instruction Feedback in Streaming Videos" · ☆37 · Updated last month
- Code for "VideoRepair: Improving Text-to-Video Generation via Misalignment Evaluation and Localized Refinement"☆49Updated 8 months ago
- [NeurIPS 2024] Stabilize the Latent Space for Image Autoregressive Modeling: A Unified Perspective☆69Updated 9 months ago
- Official repository of InfiniBench: A Benchmark for Large Multi-Modal Models in Long-Form Movies and TV Shows · ☆15 · Updated 2 months ago
- [ICLR 2025] IDA-VLM: Towards Movie Understanding via ID-Aware Large Vision-Language Model · ☆31 · Updated 8 months ago
- Code release for the paper "Progress-Aware Video Frame Captioning" (CVPR 2025) · ☆14 · Updated 3 weeks ago
- [EMNLP 2024] Preserving Multi-Modal Capabilities of Pre-trained VLMs for Improving Vision-Linguistic Compositionality · ☆17 · Updated 10 months ago
- Video-Holmes: Can MLLM Think Like Holmes for Complex Video Reasoning? · ☆63 · Updated 3 weeks ago
- Official repo for CAT-V, "Caption Anything in Video: Object-centric Dense Video Captioning with Spatiotemporal Multimodal Prompting" · ☆48 · Updated last month
- ☆24 · Updated last year
- High-Resolution Visual Reasoning via Multi-Turn Grounding-Based Reinforcement Learning · ☆45 · Updated 2 weeks ago
- [ICCV 2025 Oral] Official implementation of "Learning Streaming Video Representation via Multitask Training" · ☆31 · Updated 2 weeks ago
- Official code for MotionBench (CVPR 2025) · ☆54 · Updated 5 months ago
- Unifying Specialized Visual Encoders for Video Language Models · ☆21 · Updated 3 weeks ago
- [arXiv: 2502.05178] QLIP: Text-Aligned Visual Tokenization Unifies Auto-Regressive Multimodal Understanding and Generation · ☆80 · Updated 5 months ago
- FQGAN: Factorized Visual Tokenization and Generation · ☆52 · Updated 4 months ago
- Codebase for the paper "Elucidating the design space of language models for image generation" · ☆45 · Updated 8 months ago