SxJyJay / UniToken
[CVPRW 2025] UniToken is an auto-regressive generation model that encodes visual inputs with both discrete and continuous representations, allowing visual understanding and image generation to be handled within a single framework.
☆86 · Updated 2 months ago
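As a rough illustration of the dual-representation idea described above, the sketch below concatenates discrete VQ token embeddings with projected continuous features into a single visual prefix for an autoregressive language model. The class name `UnifiedVisualEncoder`, the dimensions, and the fusion scheme are hypothetical placeholders, not UniToken's actual implementation; see the repository for the real architecture.

```python
# Minimal sketch (not UniToken's actual code): fuse discrete VQ tokens and
# continuous encoder features into one visual token sequence for an AR LLM.
import torch
import torch.nn as nn

class UnifiedVisualEncoder(nn.Module):  # hypothetical name
    def __init__(self, vq_vocab_size=16384, cont_dim=1024, llm_dim=4096):
        super().__init__()
        # Embedding table for discrete (VQ) image token indices.
        self.vq_embed = nn.Embedding(vq_vocab_size, llm_dim)
        # Linear projection for continuous patch features (e.g., from a ViT encoder).
        self.cont_proj = nn.Linear(cont_dim, llm_dim)

    def forward(self, vq_ids, cont_feats):
        # vq_ids:     (B, N_vq)        discrete token indices
        # cont_feats: (B, N_cont, D)   continuous patch features
        discrete = self.vq_embed(vq_ids)          # (B, N_vq, llm_dim)
        continuous = self.cont_proj(cont_feats)   # (B, N_cont, llm_dim)
        # Concatenate both views into a single visual prefix that would be fed
        # to the autoregressive LLM alongside text embeddings.
        return torch.cat([discrete, continuous], dim=1)

# Usage with dummy inputs:
enc = UnifiedVisualEncoder()
vis = enc(torch.randint(0, 16384, (1, 256)), torch.randn(1, 576, 1024))
print(vis.shape)  # torch.Size([1, 832, 4096])
```

The intuition behind such a split is that continuous features preserve fine-grained semantics useful for understanding, while discrete codes give generation a tractable token space; how UniToken actually combines and trains the two is detailed in the paper.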
Alternatives and similar repositories for UniToken
Users who are interested in UniToken are comparing it to the repositories listed below.
- WISE: A World Knowledge-Informed Semantic Evaluation for Text-to-Image Generation ☆120 · Updated last week
- Official implementation of LiFT: Leveraging Human Feedback for Text-to-Video Model Alignment. ☆77 · Updated last month
- [CVPR 2025 Oral] Official Repo for Paper "AnyEdit: Mastering Unified High-Quality Image Editing for Any Idea" ☆150 · Updated 2 months ago
- Official Repository of paper: Envisioning Beyond the Pixels: Benchmarking Reasoning-Informed Visual Editing ☆60 · Updated 2 weeks ago
- Code Release of Harmonizing Visual Representations for Unified Multimodal Understanding and Generation ☆124 · Updated last month
- ☆115 · Updated this week
- ☆79 · Updated 7 months ago
- MME-Unify: A Comprehensive Benchmark for Unified Multimodal Understanding and Generation Models ☆40 · Updated 2 months ago
- [CVPR 2025] 🔥 Official impl. of "TokenFlow: Unified Image Tokenizer for Multimodal Understanding and Generation". ☆337 · Updated 3 months ago
- ☆86 · Updated 3 months ago
- TokLIP: Marry Visual Tokens to CLIP for Multimodal Comprehension and Generation ☆71 · Updated 2 weeks ago
- PyTorch implementation for the paper "SimpleAR: Pushing the Frontier of Autoregressive Visual Generation" ☆371 · Updated this week
- (CVPR 2025) PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction ☆109 · Updated 3 months ago
- A collection of awesome autoregressive visual generation models ☆73 · Updated 2 months ago
- ☆151 · Updated 5 months ago
- The Next Step Forward in Multimodal LLM Alignment ☆164 · Updated last month
- ☆21 · Updated 5 months ago
- GoT-R1: Unleashing Reasoning Capability of MLLM for Visual Generation with Reinforcement Learning ☆77 · Updated 3 weeks ago
- Empowering Unified MLLM with Multi-granular Visual Generation ☆124 · Updated 5 months ago
- Implements VAR+CLIP for text-to-image (T2I) generation ☆139 · Updated 5 months ago
- [ICLR 2025] AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark ☆111 · Updated 2 weeks ago
- ☆98 · Updated 2 months ago
- Unified Multi-modal IAA Baseline and Benchmark ☆79 · Updated 8 months ago
- [CVPR 2025] T2V-CompBench: A Comprehensive Benchmark for Compositional Text-to-video Generation ☆86 · Updated 3 weeks ago
- Think or Not Think: A Study of Explicit Thinking in Rule-Based Visual Reinforcement Fine-Tuning ☆49 · Updated last month
- [CVPR 2025 (Oral)] Open implementation of "RandAR" ☆175 · Updated 3 months ago
- DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception ☆148 · Updated 6 months ago
- TinyLLaVA-Video-R1: Towards Smaller LMMs for Video Reasoning ☆74 · Updated last month
- ☆101 · Updated last week
- [NeurIPS 2024] Visual Perception by Large Language Model’s Weights ☆45 · Updated 2 months ago