mihirp1998 / VADER
Video Diffusion Alignment via Reward Gradients. We improve a variety of video diffusion models, such as VideoCrafter, OpenSora, ModelScope, and StableVideoDiffusion, by finetuning them with reward models such as HPS, PickScore, VideoMAE, VJEPA, YOLO, and Aesthetics.
☆290 · Updated 4 months ago
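The core idea, backpropagating gradients from a differentiable reward model through the final denoising steps into the diffusion model's weights, can be sketched in a few lines of PyTorch. The sketch below is a toy illustration under stated assumptions, not VADER's actual training code: `ToyDenoiser`, `toy_reward`, the Euler-style update, and the step counts are all placeholder assumptions.

```python
# Toy sketch of reward-gradient finetuning (illustrative only, not VADER's code).
# Run a short sampling chain with a toy denoiser, score the result with a
# differentiable "reward", and backpropagate the reward gradient into the
# denoiser's weights.
import torch
import torch.nn as nn

class ToyDenoiser(nn.Module):
    """Placeholder for a video diffusion U-Net / DiT backbone."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 256), nn.SiLU(), nn.Linear(256, dim))

    def forward(self, x, t):
        # Condition on the timestep by concatenating it as an extra feature.
        t_feat = t.expand(x.shape[0], 1)
        return self.net(torch.cat([x, t_feat], dim=-1))

def toy_reward(x):
    """Placeholder for a differentiable reward model (HPS, PickScore, aesthetics, ...)."""
    return -(x ** 2).mean(dim=-1)  # prefers samples close to zero

denoiser = ToyDenoiser()
opt = torch.optim.AdamW(denoiser.parameters(), lr=1e-4)

num_steps = 10   # length of the sampling chain
grad_steps = 2   # only backprop through the last few steps to bound memory

for it in range(100):
    x = torch.randn(8, 64)  # start from noise
    for k in range(num_steps):
        t = torch.tensor([[1.0 - k / num_steps]])
        with_grad = k >= num_steps - grad_steps
        ctx = torch.enable_grad() if with_grad else torch.no_grad()
        with ctx:
            eps = denoiser(x, t)
            x = x - (1.0 / num_steps) * eps  # simplistic Euler-style update
        if not with_grad:
            x = x.detach()                   # cut the graph for early steps
    loss = -toy_reward(x).mean()             # maximize reward = minimize its negative
    opt.zero_grad()
    loss.backward()
    opt.step()
    if it % 20 == 0:
        print(f"iter {it}: loss {loss.item():.4f}")
```

Truncating backpropagation to the last `grad_steps` sampling steps (and detaching earlier ones) is what keeps the memory footprint of this kind of reward-gradient finetuning manageable for large video models.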
Alternatives and similar repositories for VADER
Users that are interested in VADER are comparing it to the libraries listed below
- Code repository for T2V-Turbo and T2V-Turbo-v2 ☆302 · Updated 5 months ago
- [NeurIPS 2024] CV-VAE: A Compatible Video VAE for Latent Generative Video Models ☆276 · Updated 7 months ago
- VisionReward: Fine-Grained Multi-Dimensional Human Preference Learning for Image and Video Generation ☆278 · Updated 3 months ago
- [ICLR 2025] MovieDreamer: Hierarchical Generation for Coherent Long Visual Sequences ☆311 · Updated 11 months ago
- [TMLR 2024] ConsistI2V: Enhancing Visual Consistency for Image-to-Video Generation ☆248 · Updated last year
- VARGPT-v1.1: Improve Visual Autoregressive Large Unified Model via Iterative Instruction Tuning and Reinforcement Learning ☆258 · Updated 2 months ago
- [ICLR 2025] OpenVid-1M: A Large-Scale High-Quality Dataset for Text-to-video Generation ☆313 · Updated last month
- [ICLR 2025] Autoregressive Video Generation without Vector Quantization ☆545 · Updated last month
- [NeurIPS 2024 Spotlight] The official implementation of the research paper "MotionBooth: Motion-Aware Customized Text-to-Video Generation" ☆134 · Updated 9 months ago
- [CVPR 2025] Aesthetic Post-Training Diffusion Models from Generic Preferences with Step-by-step Preference Optimization ☆234 · Updated 3 months ago
- ☆200 · Updated 5 months ago
- [CVPR 2025 Highlight] PAR: Parallelized Autoregressive Visual Generation. https://yuqingwang1029.github.io/PAR-project ☆166 · Updated 3 months ago
- [ICLR 2025] IterComp: Iterative Composition-Aware Feedback Learning from Model Gallery for Text-to-Image Generation ☆192 · Updated 4 months ago
- Awesome diffusion Video-to-Video (V2V). A collection of papers on diffusion model-based video editing, a.k.a. video-to-video (V2V) translation ☆235 · Updated last month
- [ICLR 2024] Code for FreeNoise based on VideoCrafter ☆413 · Updated last year
- [CVPR 2025] Official code of "DiTCtrl: Exploring Attention Control in Multi-Modal Diffusion Transformer for Tuning-Free Multi-Prompt Long…" ☆276 · Updated 3 months ago
- ☆360 · Updated 8 months ago
- [ICLR 2024] LLM-grounded Video Diffusion Models (LVD): official implementation for the LVD paper ☆154 · Updated last year
- [CVPR 2024] VideoBooth: Diffusion-based Video Generation with Image Prompts ☆299 · Updated last year
- [CVPR 2024] LAMP: Learn a Motion Pattern for Few-Shot Based Video Generation ☆279 · Updated last year
- AlignProp uses direct reward backpropagation for the alignment of large-scale text-to-image diffusion models. Our method is 25x more samp… ☆291 · Updated 8 months ago
- [ICCV 2025] VideoVAE+: Large Motion Video Autoencoding with Cross-modal Video VAE ☆337 · Updated 5 months ago
- [EMNLP 2024] Official repo for "VideoScore: Building Automatic Metrics to Simulate Fine-grained Human Feedback for Video Generation" ☆92 · Updated 5 months ago
- 🔥 [CVPR 2024] Official implementation of "Self-correcting LLM-controlled Diffusion Models" (SLD) ☆179 · Updated last year
- ☆111 · Updated 3 weeks ago
- [ICCV 2025] The code of our work "Golden Noise for Diffusion Models: A Learning Framework" ☆158 · Updated 2 weeks ago
- [CVPR 2024] Code for the paper "Using Human Feedback to Fine-tune Diffusion Models without Any Reward Model" ☆233 · Updated last year
- [ICCV 2025] MagicMotion: Controllable Video Generation with Dense-to-Sparse Trajectory Guidance ☆136 · Updated 2 weeks ago
- [CVPR 2024] Official PyTorch implementation of "VidToMe: Video Token Merging for Zero-Shot Video Editing" ☆220 · Updated 5 months ago
- The official implementation of the paper titled "StableV2V: Stablizing Shape Consistency in Video-to-Video Editing" ☆158 · Updated 7 months ago