TIGER-AI-Lab / PixelWorld
The official code of "PixelWorld: Towards Perceiving Everything as Pixels"
☆14 · Updated 4 months ago
Alternatives and similar repositories for PixelWorld
Users interested in PixelWorld are comparing it to the repositories listed below
- [CVPR 2025] PVC: Progressive Visual Token Compression for Unified Image and Video Processing in Large Vision-Language Models ☆41 · Updated 3 months ago
- A big_vision-inspired repo that implements a generic Auto-Encoder class capable of representation learning and generative modeling. ☆35 · Updated 11 months ago
- [ECCV'24 Oral] PiTe: Pixel-Temporal Alignment for Large Video-Language Model ☆16 · Updated 3 months ago
- Official implementation of ECCV24 paper: POA ☆24 · Updated 10 months ago
- The code for "VISTA: Enhancing Long-Duration and High-Resolution Video Understanding by VIdeo SpatioTemporal Augmentation" [CVPR 2025] ☆15 · Updated 3 months ago
- ∞-Video: A Training-Free Approach to Long Video Understanding via Continuous-Time Memory Consolidation ☆13 · Updated 3 months ago
- [ICLR 2025] CREMA: Generalizable and Efficient Video-Language Reasoning via Multimodal Modular Fusion ☆45 · Updated 4 months ago
- [ICML 2025] VistaDPO: Video Hierarchical Spatial-Temporal Direct Preference Optimization for Large Video Models ☆27 · Updated last month
- Fast-Slow Thinking for Large Vision-Language Model Reasoning ☆14 · Updated last month
- This repository provides an improved LLamaGen model, fine-tuned on 500,000 high-quality images, each accompanied by over 300 token prompt… ☆30 · Updated 7 months ago
- ☆14 · Updated 7 months ago
- Official implementation of Next Block Prediction: Video Generation via Semi-Autoregressive Modeling ☆31 · Updated 3 months ago
- ☆42 · Updated 6 months ago
- Video Compression Commander: Plug-and-Play Inference Acceleration for Video Large Language Models ☆22 · Updated this week
- [CVPR 2025] Breaking the Low-Rank Dilemma of Linear Attention ☆21 · Updated 2 months ago
- ☆36 · Updated 2 weeks ago
- Scaling Multi-modal Instruction Fine-tuning with Tens of Thousands Vision Task Types ☆18 · Updated last month
- ☆12 · Updated 4 months ago
- ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration ☆37 · Updated 5 months ago
- A benchmark dataset and simple code examples for measuring the perception and reasoning of multi-sensor Vision Language models. ☆18 · Updated 5 months ago
- VideoREPA: Learning Physics for Video Generation through Relational Alignment with Foundation Models ☆16 · Updated last week
- Official Implementation of DiffCLIP: Differential Attention Meets CLIP ☆30 · Updated 2 months ago
- [EMNLP 2024] Preserving Multi-Modal Capabilities of Pre-trained VLMs for Improving Vision-Linguistic Compositionality ☆16 · Updated 8 months ago
- The official repo of continuous speculative decoding ☆26 · Updated 2 months ago
- LEO: A powerful Hybrid Multimodal LLM ☆18 · Updated 4 months ago
- This repo contains evaluation code for the paper "AV-Odyssey: Can Your Multimodal LLMs Really Understand Audio-Visual Information?" ☆25 · Updated 5 months ago
- [EMNLP 2024] Official code for "Beyond Embeddings: The Promise of Visual Table in Multi-Modal Models" ☆18 · Updated 7 months ago
- ☆17 · Updated last month
- Video-Holmes: Can MLLM Think Like Holmes for Complex Video Reasoning? ☆49 · Updated this week
- [CVPR] MergeVQ: A Unified Framework for Visual Generation and Representation with Token Merging and Quantization ☆26 · Updated this week