inFaaa / Awesome-Personalized-Video-Creation
This is a repository for organizing papers, codes, and other resources related to personalized video generation and editing.
★49 · Updated 2 weeks ago
Alternatives and similar repositories for Awesome-Personalized-Video-Creation
Users who are interested in Awesome-Personalized-Video-Creation are comparing it to the libraries listed below.
- ★144 · Updated last month
- Structured Video Comprehension of Real-World Shorts · ★132 · Updated this week
- A collection of vision foundation models unifying understanding and generation · ★57 · Updated 7 months ago
- GoT-R1: Unleashing Reasoning Capability of MLLM for Visual Generation with Reinforcement Learning · ★94 · Updated 2 months ago
- Official repository of "GoT: Unleashing Reasoning Capability of Multimodal Large Language Model for Visual Generation and Editing" · ★272 · Updated 3 months ago
- WISE: A World Knowledge-Informed Semantic Evaluation for Text-to-Image Generation · ★136 · Updated last month
- Code for MetaMorph Multimodal Understanding and Generation via Instruction Tuning · ★200 · Updated 3 months ago
- [ICML 2025] This is the official PyTorch implementation of "HarmoniCa: Harmonizing Training and Inference for Better Feature Caching i…" · ★41 · Updated 3 weeks ago
- [ICCV 2025] Code Release of Harmonizing Visual Representations for Unified Multimodal Understanding and Generation · ★145 · Updated 2 months ago
- ★101 · Updated last month
- Dimple, the first Discrete Diffusion Multimodal Large Language Model · ★85 · Updated 3 weeks ago
- ★30 · Updated 8 months ago
- Official repository of the paper: Envisioning Beyond the Pixels: Benchmarking Reasoning-Informed Visual Editing · ★79 · Updated 2 weeks ago
- [ICCV 2025] GameFactory: Creating New Games with Generative Interactive Videos · ★337 · Updated 4 months ago
- TokLIP: Marry Visual Tokens to CLIP for Multimodal Comprehension and Generation · ★103 · Updated 2 months ago
- ★87 · Updated last month
- Empowering Unified MLLM with Multi-granular Visual Generation · ★127 · Updated 6 months ago
- VideoREPA: Learning Physics for Video Generation through Relational Alignment with Foundation Models · ★54 · Updated 2 months ago
- [NeurIPS 2024] The official implementation of the research paper "FreeLong: Training-Free Long Video Generation with SpectralBlend Temporal Atten…" · ★55 · Updated last month
- GPT as a Monte Carlo Language Tree: A Probabilistic Perspective · ★45 · Updated 6 months ago
- [ICML 2025] The code and data of the paper: Towards World Simulator: Crafting Physical Commonsense-Based Benchmark for Video Generation · ★116 · Updated 9 months ago
- ACTIVE-O3: Empowering Multimodal Large Language Models with Active Perception via GRPO · ★68 · Updated 2 months ago
- The code repository of UniRL · ★36 · Updated 2 months ago
- PhysGame Benchmark for Physical Commonsense Evaluation in Gameplay Videos · ★45 · Updated last month
- Official Implementation of Muddit [Meissonic II]: Liberating Generation Beyond Text-to-Image with a Unified Discrete Diffusion Model · ★75 · Updated last week
- [CVPRW 2025] UniToken is an auto-regressive generation model that combines discrete and continuous representations to process visual inpu… · ★86 · Updated 3 months ago
- [CVPR 2025 (Oral)] Open implementation of "RandAR" · ★182 · Updated 3 weeks ago
- Official Repo of Omni-R1: Reinforcement Learning for Omnimodal Reasoning via Two-System Collaboration · ★73 · Updated 2 months ago
- Official repository of "Beyond Fixed: Variable-Length Denoising for Diffusion Large Language Models" · ★64 · Updated this week
- Machine Mental Imagery: Empower Multimodal Reasoning with Latent Visual Tokens (arXiv 2025) · ★115 · Updated this week