TIGER-AI-Lab / PixelWorld
The official code of "PixelWorld: Towards Perceiving Everything as Pixels" [TMLR25]
☆15 · Updated last month
Alternatives and similar repositories for PixelWorld
Users interested in PixelWorld are comparing it to the repositories listed below.
- ∞-Video: A Training-Free Approach to Long Video Understanding via Continuous-Time Memory Consolidation ☆18 · Updated 8 months ago
- Code for the paper "Vamba: Understanding Hour-Long Videos with Hybrid Mamba-Transformers" [ICCV 2025] ☆91 · Updated 3 months ago
- ☆62 · Updated 3 months ago
- ☆24 · Updated 7 months ago
- ☆39 · Updated 5 months ago
- Quick Long Video Understanding ☆68 · Updated last week
- [ICLR 2025] CREMA: Generalizable and Efficient Video-Language Reasoning via Multimodal Modular Fusion ☆54 · Updated 4 months ago
- High-Resolution Visual Reasoning via Multi-Turn Grounding-Based Reinforcement Learning ☆51 · Updated 3 months ago
- ☆21 · Updated 5 months ago
- The code for "VISTA: Enhancing Long-Duration and High-Resolution Video Understanding by VIdeo SpatioTemporal Augmentation" [CVPR 2025] ☆20 · Updated 8 months ago
- A benchmark dataset and simple code examples for measuring the perception and reasoning of multi-sensor vision-language models ☆19 · Updated 10 months ago
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment ☆60 · Updated 3 months ago
- On Path to Multimodal Generalist: General-Level and General-Bench ☆19 · Updated 3 months ago
- [CVPR 2025] PVC: Progressive Visual Token Compression for Unified Image and Video Processing in Large Vision-Language Models ☆49 · Updated 4 months ago
- Explore how to get a VQ-VAE model efficiently! ☆62 · Updated 3 months ago
- Official code of the paper "VideoMolmo: Spatio-Temporal Grounding meets Pointing" ☆53 · Updated 4 months ago
- The official repo for LIFT: Language-Image Alignment with Fixed Text Encoders ☆36 · Updated 4 months ago
- The official repository of our paper "Reinforcing Video Reasoning with Focused Thinking" ☆26 · Updated 4 months ago
- Video-Holmes: Can MLLM Think Like Holmes for Complex Video Reasoning? ☆76 · Updated 3 months ago
- [ICLR 2025] Source code for the paper "A Spark of Vision-Language Intelligence: 2-Dimensional Autoregressive Transformer for Efficient Finegr…" ☆77 · Updated 10 months ago
- [EMNLP 2025 Oral] ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration ☆59 · Updated 2 months ago
- Code for Commonsense-T2I Challenge: Can Text-to-Image Generation Models Understand Commonsense? [COLM 2024] ☆25 · Updated last year
- ☆33 · Updated 5 months ago
- Official implementation of the ECCV 2024 paper POA ☆24 · Updated last year
- Official implementation of Bifrost-1: Bridging Multimodal LLMs and Diffusion Models with Patch-level CLIP Latents (NeurIPS 2025) ☆41 · Updated last month
- [Preprint] GMem: A Modular Approach for Ultra-Efficient Generative Models ☆40 · Updated 7 months ago
- [NeurIPS 2024] TransAgent: Transfer Vision-Language Foundation Models with Heterogeneous Agent Collaboration ☆24 · Updated last year
- [ICCV 2025] Dynamic-VLM ☆26 · Updated 10 months ago
- PyTorch code for "ADEM-VL: Adaptive and Embedded Fusion for Efficient Vision-Language Tuning" ☆20 · Updated last year
- LVAS-Agent Code Base ☆21 · Updated 6 months ago