TIGER-AI-Lab / PixelWorld
The official code for "PixelWorld: Towards Perceiving Everything as Pixels" [TMLR 2025]
☆15 · Updated 2 weeks ago
Alternatives and similar repositories for PixelWorld
Users interested in PixelWorld are comparing it to the repositories listed below
- ∞-Video: A Training-Free Approach to Long Video Understanding via Continuous-Time Memory Consolidation ☆16 · Updated 7 months ago
- Code for the paper "Vamba: Understanding Hour-Long Videos with Hybrid Mamba-Transformers" [ICCV 2025] ☆85 · Updated last month
- Quick Long Video Understanding ☆64 · Updated 3 months ago
- High-Resolution Visual Reasoning via Multi-Turn Grounding-Based Reinforcement Learning ☆48 · Updated 2 months ago
- [NeurIPS 2025] Elevating Visual Perception in Multimodal LLMs with Auxiliary Embedding Distillation, arXiv 2024 ☆62 · Updated this week
- ☆39 · Updated 4 months ago
- [ICLR 2025] CREMA: Generalizable and Efficient Video-Language Reasoning via Multimodal Modular Fusion ☆50 · Updated 2 months ago
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment ☆60 · Updated 2 months ago
- ☆32 · Updated 4 months ago
- ☆27 · Updated 2 months ago
- [CVPR 2025] PVC: Progressive Visual Token Compression for Unified Image and Video Processing in Large Vision-Language Models ☆47 · Updated 3 months ago
- [ICCV 2025] Dynamic-VLM ☆25 · Updated 9 months ago
- Video-Holmes: Can MLLM Think Like Holmes for Complex Video Reasoning? ☆74 · Updated 2 months ago
- The code for "VISTA: Enhancing Long-Duration and High-Resolution Video Understanding by VIdeo SpatioTemporal Augmentation" [CVPR 2025] ☆19 · Updated 7 months ago
- Official implementation of the ECCV 2024 paper: POA ☆24 · Updated last year
- ☆56 · Updated 2 months ago
- [CVPR 2025] Breaking the Low-Rank Dilemma of Linear Attention ☆28 · Updated 6 months ago
- On Path to Multimodal Generalist: General-Level and General-Bench ☆19 · Updated 2 months ago
- Code for Commonsense-T2I Challenge: Can Text-to-Image Generation Models Understand Commonsense? [COLM 2024] ☆22 · Updated last year
- [ICLR 2025] Source code for paper "A Spark of Vision-Language Intelligence: 2-Dimensional Autoregressive Transformer for Efficient Finegr…" ☆78 · Updated 9 months ago
- ☆21 · Updated 4 months ago
- A benchmark dataset and simple code examples for measuring the perception and reasoning of multi-sensor Vision Language models ☆19 · Updated 9 months ago
- Official implementation of "Traceable Evidence Enhanced Visual Grounded Reasoning: Evaluation and Methodology" ☆64 · Updated 2 months ago
- Evaluating Deep Multimodal Reasoning in Vision-Centric Agentic Tasks ☆30 · Updated last month
- [EMNLP 2025 Oral] ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration ☆53 · Updated 3 weeks ago
- [NeurIPS 2025] VeriThinker: Learning to Verify Makes Reasoning Model Efficient ☆53 · Updated last week
- [ACL 2025 Findings] Benchmarking Multihop Multimodal Internet Agents ☆46 · Updated 6 months ago
- Official implementation of Muddit [Meissonic II]: Liberating Generation Beyond Text-to-Image with a Unified Discrete Diffusion Model ☆85 · Updated last month
- [NeurIPS 2024] TransAgent: Transfer Vision-Language Foundation Models with Heterogeneous Agent Collaboration ☆24 · Updated 11 months ago
- [NeurIPS 2024] Stabilize the Latent Space for Image Autoregressive Modeling: A Unified Perspective ☆71 · Updated 10 months ago