yliu-cs / PiTe
[ECCV'24 Oral] PiTe: Pixel-Temporal Alignment for Large Video-Language Model
☆16 · Updated 5 months ago
Alternatives and similar repositories for PiTe
Users interested in PiTe are comparing it to the libraries listed below.
- Official InfiniBench: A Benchmark for Large Multi-Modal Models in Long-Form Movies and TV Shows ☆15 · Updated last month
- [ICLR 2025] CREMA: Generalizable and Efficient Video-Language Reasoning via Multimodal Modular Fusion ☆48 · Updated 2 weeks ago
- WorldSense: Evaluating Real-world Omnimodal Understanding for Multimodal LLMs ☆26 · Updated 2 months ago
- ☆19 · Updated last month
- LMM solved catastrophic forgetting, AAAI2025 ☆44 · Updated 3 months ago
- ☆9 · Updated last year
- TEMPURA enables video-language models to reason about causal event relationships and generate fine-grained, timestamped descriptions of u… ☆19 · Updated last month
- VPEval Codebase from Visual Programming for Text-to-Image Generation and Evaluation (NeurIPS 2023) ☆45 · Updated last year
- ☆32 · Updated 3 months ago
- ☆24 · Updated last year
- [ECCV 2024] Learning Video Context as Interleaved Multimodal Sequences ☆39 · Updated 4 months ago
- This repo contains evaluation code for the paper "AV-Odyssey: Can Your Multimodal LLMs Really Understand Audio-Visual Information?" ☆26 · Updated 6 months ago
- UnifiedMLLM: Enabling Unified Representation for Multi-modal Multi-tasks With Large Language Model ☆22 · Updated 11 months ago
- Video-Holmes: Can MLLM Think Like Holmes for Complex Video Reasoning? ☆60 · Updated this week
- [ICML 2025] VistaDPO: Video Hierarchical Spatial-Temporal Direct Preference Optimization for Large Video Models ☆27 · Updated last month
- Official implementation of Next Block Prediction: Video Generation via Semi-Autoregressive Modeling ☆37 · Updated 5 months ago
- ☆42 · Updated 8 months ago
- [ICCV 2025] Dynamic-VLM ☆23 · Updated 7 months ago
- Fast-Slow Thinking for Large Vision-Language Model Reasoning ☆16 · Updated 2 months ago
- Code for paper: Unified Text-to-Image Generation and Retrieval ☆15 · Updated last year
- On Path to Multimodal Generalist: General-Level and General-Bench ☆17 · Updated this week
- HermesFlow: Seamlessly Closing the Gap in Multimodal Understanding and Generation ☆63 · Updated 4 months ago
- Code release for the paper "Progress-Aware Video Frame Captioning" (CVPR 2025) ☆11 · Updated 2 months ago
- [EMNLP 2024] Preserving Multi-Modal Capabilities of Pre-trained VLMs for Improving Vision-Linguistic Compositionality ☆16 · Updated 9 months ago
- [NeurIPS 2024] Official PyTorch implementation of "Improving Compositional Reasoning of CLIP via Synthetic Vision-Language Negatives" ☆41 · Updated 7 months ago
- ☆31 · Updated last year
- The code for "VISTA: Enhancing Long-Duration and High-Resolution Video Understanding by VIdeo SpatioTemporal Augmentation" (CVPR 2025) ☆18 · Updated 4 months ago
- [EMNLP 2024] Official code for "Beyond Embeddings: The Promise of Visual Table in Multi-Modal Models" ☆20 · Updated 9 months ago
- Code for our paper "All in an Aggregated Image for In-Image Learning" ☆30 · Updated last year
- High-Resolution Visual Reasoning via Multi-Turn Grounding-Based Reinforcement Learning ☆29 · Updated last week