geshang777 / pix2cap
[arXiv'25] Official Implementation of "Pix2Cap-COCO: Advancing Visual Comprehension via Pixel-Level Captioning"
☆16 · Updated 3 months ago
Alternatives and similar repositories for pix2cap:
Users interested in pix2cap are comparing it to the libraries listed below.
- VideoChat-R1: Enhancing Spatio-Temporal Perception via Reinforcement Fine-Tuning ☆105 · Updated last week
- ☆30 · Updated last month
- [NeurIPS 2024] One Token to Seg Them All: Language Instructed Reasoning Segmentation in Videos ☆114 · Updated 4 months ago
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment ☆50 · Updated 3 months ago
- Vinci: A Real-time Embodied Smart Assistant based on Egocentric Vision-Language Model ☆57 · Updated 3 months ago
- [ECCV 2024] Learning Video Context as Interleaved Multimodal Sequences ☆37 · Updated last month
- Official repository of "Inst-IT: Boosting Multimodal Instance Understanding via Explicit Visual Prompt Instruction Tuning" ☆29 · Updated 2 months ago
- ☆75 · Updated 3 weeks ago
- VistaDPO: Video Hierarchical Spatial-Temporal Direct Preference Optimization for Large Video Models ☆17 · Updated last week
- PyTorch code for "ADEM-VL: Adaptive and Embedded Fusion for Efficient Vision-Language Tuning" ☆20 · Updated 5 months ago
- Grounded-VideoLLM: Sharpening Fine-grained Temporal Grounding in Video Large Language Models ☆103 · Updated last month
- FreeVA: Offline MLLM as Training-Free Video Assistant ☆59 · Updated 10 months ago
- [EMNLP 2023] TESTA: Temporal-Spatial Token Aggregation for Long-form Video-Language Understanding ☆49 · Updated last year
- [ECCV 2024] Elysium: Exploring Object-level Perception in Videos via MLLM ☆71 · Updated 6 months ago
- TinyLLaVA-Video-R1: Towards Smaller LMMs for Video Reasoning ☆40 · Updated this week
- [CVPR 2025] OVO-Bench: How Far is Your Video-LLMs from Real-World Online Video Understanding? ☆54 · Updated 3 weeks ago
- ☆93 · Updated 8 months ago
- ☆35 · Updated 3 weeks ago
- [CVPR 2025] LLaVA-ST: A Multimodal Large Language Model for Fine-Grained Spatial-Temporal Understanding ☆41 · Updated last month
- Benchmarking Video-LLMs on Video Spatio-Temporal Reasoning ☆20 · Updated 3 weeks ago
- [AAAI 2025] Grounded Multi-Hop VideoQA in Long-Form Egocentric Videos ☆24 · Updated last week
- [arXiv:2502.05178] QLIP: Text-Aligned Visual Tokenization Unifies Auto-Regressive Multimodal Understanding and Generation ☆69 · Updated last month
- [NeurIPS 2024] Stabilize the Latent Space for Image Autoregressive Modeling: A Unified Perspective ☆66 · Updated 5 months ago
- [CVPR 2025 Oral] VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection ☆74 · Updated 2 weeks ago
- 🤖 [ICLR'25] Multimodal Video Understanding Framework (MVU) ☆36 · Updated 2 months ago
- WorldSense: Evaluating Real-world Omnimodal Understanding for Multimodal LLMs ☆22 · Updated this week
- Code for "AVG-LLaVA: A Multimodal Large Model with Adaptive Visual Granularity" ☆28 · Updated 6 months ago
- [ICLR 2025] TimeSuite: Improving MLLMs for Long Video Understanding via Grounded Tuning ☆31 · Updated 2 weeks ago
- Mobile-VideoGPT: Fast and Accurate Video Understanding Language Model ☆85 · Updated 3 weeks ago
- ☆28 · Updated 3 months ago