YU-deep / ViF
☆34 · Updated 3 months ago
Alternatives and similar repositories for ViF
Users interested in ViF are comparing it to the libraries listed below.
- ☆63 · Updated 2 months ago
- ☆19 · Updated last month
- [ICCV 2025] HQ-CLIP: Leveraging Large Vision-Language Models to Create High-Quality Image-Text Datasets ☆62 · Updated 5 months ago
- CAR: Controllable AutoRegressive Modeling for Visual Generation ☆128 · Updated last year
- Training Autoregressive Image Generation models via Reinforcement Learning ☆48 · Updated 2 months ago
- [NIPS 2025 DB Oral] Official Repository of paper: Envisioning Beyond the Pixels: Benchmarking Reasoning-Informed Visual Editing ☆139 · Updated 3 weeks ago
- ☆18 · Updated 5 months ago
- [CVPR 2025] RAP: Retrieval-Augmented Personalization ☆78 · Updated 2 months ago
- [NeurIPS 2025 Spotlight] VisualQuality-R1 is the first open-sourced NR-IQA model that can accurately describe and rate image quality ☆150 · Updated 3 months ago
- ☆174 · Updated 7 months ago
- [CVPR 2025] CoDe: Collaborative Decoding Makes Visual Auto-Regressive Modeling Efficient ☆108 · Updated 4 months ago
- UniGenBench++: A Unified Semantic Evaluation Benchmark for Text-to-Image Generation ☆117 · Updated 3 weeks ago
- [ICCV25] USP: Unified Self-Supervised Pretraining for Image Generation and Understanding ☆91 · Updated 3 months ago
- Official code for "DiffX: Guide Your Layout to Cross-Modal Generative Modeling" ☆23 · Updated 11 months ago
- [NeurIPS 2024] ☆35 · Updated last year
- Rui Qian, Xin Yin, Dejing Dou†: Reasoning to Attend: Try to Understand How <SEG> Token Works (CVPR 2025) ☆51 · Updated 3 months ago
- Official repository for the UAE paper, unified-GRPO, and unified-Bench ☆154 · Updated 4 months ago
- This is the official implementation for ControlVAR. ☆125 · Updated last year
- Official implementation of "UniLiP: Adapting CLIP for Unified Multimodal Understanding, Generation and Editing" ☆125 · Updated 2 months ago
- WISE: A World Knowledge-Informed Semantic Evaluation for Text-to-Image Generation ☆179 · Updated 2 months ago
- a collection of awesome autoregressive visual generation models ☆79 · Updated 9 months ago
- ☆13 · Updated 11 months ago
- ☆38 · Updated 3 weeks ago
- [ICCV 2025] Code Release of Harmonizing Visual Representations for Unified Multimodal Understanding and Generation ☆185 · Updated 8 months ago
- Dimple, the first Discrete Diffusion Multimodal Large Language Model ☆114 · Updated 6 months ago
- Q-Insight is open-sourced at https://github.com/bytedance/Q-Insight. This repository will not receive further updates. ☆142 · Updated 7 months ago
- [ECCV'24] MaxFusion: Plug & Play multimodal generation in text to image diffusion models ☆27 · Updated last year
- [ICCV 2025] Generate one 2K image on a single 24GB 3090 GPU! ☆83 · Updated 4 months ago
- A Comprehensive Dataset for Advanced Image Generation and Editing ☆31 · Updated 3 months ago
- List of diffusion-related active submissions on OpenReview for ICLR 2025. ☆52 · Updated last year