TIGER-AI-Lab / PixelWorld
The official code of "PixelWorld: Towards Perceiving Everything as Pixels"
☆14 · Updated 5 months ago
Alternatives and similar repositories for PixelWorld
Users interested in PixelWorld are comparing it to the libraries listed below.
- Autoregressive Semantic Visual Reconstruction Helps VLMs Understand Better ☆32 · Updated last month
- ∞-Video: A Training-Free Approach to Long Video Understanding via Continuous-Time Memory Consolidation ☆14 · Updated 5 months ago
- ☆35 · Updated last week
- A benchmark dataset and simple code examples for measuring the perception and reasoning of multi-sensor Vision Language models ☆18 · Updated 6 months ago
- WorldSense: Evaluating Real-world Omnimodal Understanding for Multimodal LLMs ☆26 · Updated 2 months ago
- [ECCV'24 Oral] PiTe: Pixel-Temporal Alignment for Large Video-Language Model ☆16 · Updated 5 months ago
- Code for the paper "Vamba: Understanding Hour-Long Videos with Hybrid Mamba-Transformers" [ICCV 2025] ☆73 · Updated 3 weeks ago
- [ICML 2025] VistaDPO: Video Hierarchical Spatial-Temporal Direct Preference Optimization for Large Video Models ☆27 · Updated last month
- Official PyTorch implementation of "Vision Transformers Don't Need Trained Registers" ☆75 · Updated 3 weeks ago
- [CVPR 2025] Official code repository for SeTa: "Scale Efficient Training for Large Datasets" ☆18 · Updated 4 months ago
- Code for "VISTA: Enhancing Long-Duration and High-Resolution Video Understanding by VIdeo SpatioTemporal Augmentation" [CVPR 2025] ☆18 · Updated 4 months ago
- Official repo of continuous speculative decoding ☆27 · Updated 3 months ago
- ☆42 · Updated 8 months ago
- Official implementation and dataset for the NAACL 2024 paper "ComCLIP: Training-Free Compositional Image and Text Matching" ☆35 · Updated 11 months ago
- Official repository of Personalized Visual Instruct Tuning ☆31 · Updated 4 months ago
- Official implementation of the ECCV 2024 paper: POA ☆24 · Updated 11 months ago
- [ICLR 2025] CREMA: Generalizable and Efficient Video-Language Reasoning via Multimodal Modular Fusion ☆48 · Updated 2 weeks ago
- OLA-VLM: Elevating Visual Perception in Multimodal LLMs with Auxiliary Embedding Distillation (arXiv 2024) ☆60 · Updated 4 months ago
- ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration ☆46 · Updated 6 months ago
- On Path to Multimodal Generalist: General-Level and General-Bench ☆17 · Updated last week
- [CVPR 2025] PVC: Progressive Visual Token Compression for Unified Image and Video Processing in Large Vision-Language Models ☆45 · Updated last month
- ☆22 · Updated 3 months ago
- Official InfiniBench: A Benchmark for Large Multi-Modal Models in Long-Form Movies and TV Shows ☆15 · Updated last month
- Official implementation of "Traceable Evidence Enhanced Visual Grounded Reasoning: Evaluation and Methodology" ☆41 · Updated this week
- Implementation of CounterCurate, a data curation pipeline for physical and semantic counterfactual image-caption pairs ☆18 · Updated last year
- Official implementation of "PyVision: Agentic Vision with Dynamic Tooling" ☆69 · Updated last week
- ☆12 · Updated 6 months ago
- Open-source community implementation of the model from "Language Model Beats Diffusion: Tokenizer Is Key to Visual Generation" ☆15 · Updated 8 months ago
- Quick Long Video Understanding ☆58 · Updated last month
- [EMNLP 2024] Official code for "Beyond Embeddings: The Promise of Visual Table in Multi-Modal Models" ☆20 · Updated 9 months ago