yliu-cs / PiTe
[ECCV'24 Oral] PiTe: Pixel-Temporal Alignment for Large Video-Language Model
☆16 · Updated 2 months ago
Alternatives and similar repositories for PiTe:
Users who are interested in PiTe are comparing it to the repositories listed below.
- The code for "VISTA: Enhancing Long-Duration and High-Resolution Video Understanding by VIdeo SpatioTemporal Augmentation" [CVPR2025]☆15Updated last month
- This repo contains evaluation code for the paper "AV-Odyssey: Can Your Multimodal LLMs Really Understand Audio-Visual Information?"☆23Updated 3 months ago
- WorldSense: Evaluating Real-world Omnimodal Understanding for Multimodal LLMs☆20Updated last month
- [ECCV 2024] Learning Video Context as Interleaved Multimodal Sequences☆38Updated last month
- This is the official repo for ByteVideoLLM/Dynamic-VLM☆20Updated 3 months ago
- ☆9Updated 3 months ago
- ☆14Updated 6 months ago
- Official Repository of Personalized Visual Instruct Tuning☆28Updated last month
- ☆9Updated 10 months ago
- Official implementation of Next Block Prediction: Video Generation via Semi-Autoregressive Modeling☆27Updated 2 months ago
- Video Diffusion State Space Models☆19Updated last year
- [NeurIPS-24] This is the official implementation of the paper "DeepStack: Deeply Stacking Visual Tokens is Surprisingly Simple and Effect…☆35Updated 9 months ago
- [ICLR 2025] CREMA: Generalizable and Efficient Video-Language Reasoning via Multimodal Modular Fusion☆41Updated 2 months ago
- ☆28Updated 4 months ago
- [CVPR2025] Breaking the Low-Rank Dilemma of Linear Attention☆13Updated last month
- [ICLR 2025] AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark☆89Updated 2 months ago
- HermesFlow: Seamlessly Closing the Gap in Multimodal Understanding and Generation☆54Updated last month
- Scaling Multi-modal Instruction Fine-tuning with Tens of Thousands Vision Task Types☆15Updated last month
- Codebase for the paper "Elucidating the design space of language models for image generation" ☆45 · Updated 4 months ago
- Official repository of "CoMP: Continual Multimodal Pre-training for Vision Foundation Models"☆22Updated last week
- Code for the paper "Vamba: Understanding Hour-Long Videos with Hybrid Mamba-Transformers"☆60Updated 3 weeks ago
- Visual Programming for Text-to-Image Generation and Evaluation (NeurIPS 2023)☆56Updated last year
- ☆13Updated 4 months ago
- Code for "VideoRepair: Improving Text-to-Video Generation via Misalignment Evaluation and Localized Refinement"☆45Updated 4 months ago
- [NeurIPS'24] I2EBench: A Comprehensive Benchmark for Instruction-based Image Editing☆19Updated 4 months ago
- Training code for CLIP-FlanT5☆26Updated 8 months ago
- [ECCV 2024] R2-Bench: Benchmarking the Robustness of Referring Perception Models under Perturbations☆10Updated 8 months ago
- The official repo of continuous speculative decoding☆24Updated 2 weeks ago
- [NeurIPS 2024] Official PyTorch implementation of "Improving Compositional Reasoning of CLIP via Synthetic Vision-Language Negatives"☆37Updated 4 months ago
- Official implementation of the paper "ReTaKe: Reducing Temporal and Knowledge Redundancy for Long Video Understanding" ☆29 · Updated 3 weeks ago