mihirp1998 / VADER
Video Diffusion Alignment via Reward Gradients. We improve a variety of video diffusion models, such as VideoCrafter, OpenSora, ModelScope, and StableVideoDiffusion, by finetuning them with reward models such as HPS, PickScore, VideoMAE, VJEPA, YOLO, and Aesthetics.
☆300 · Updated 8 months ago
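The core idea behind this line of work is to backpropagate the gradient of a differentiable reward model through a (truncated) denoising chain into the diffusion model's weights. Below is a minimal sketch of that loop, using toy stand-in modules rather than the repository's actual models or API; the module names, step count, and update rule are illustrative assumptions only.

```python
# Minimal sketch of reward-gradient finetuning (the general idea behind
# VADER / AlignProp-style alignment), NOT the repository's actual API.
# ToyDenoiser and ToyReward are placeholders for a pretrained video
# diffusion backbone and a frozen, differentiable reward model (e.g. HPS).
import torch
import torch.nn as nn

class ToyDenoiser(nn.Module):            # stand-in for a diffusion UNet/DiT
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.SiLU(), nn.Linear(dim, dim))

    def forward(self, x, t):
        return self.net(x)               # predicts the update for step t

class ToyReward(nn.Module):              # stand-in for a frozen reward model
    def __init__(self, dim=64):
        super().__init__()
        self.head = nn.Linear(dim, 1)

    def forward(self, x):
        return self.head(x).mean()       # scalar reward for a batch of samples

denoiser, reward = ToyDenoiser(), ToyReward()
for p in reward.parameters():            # reward model stays frozen
    p.requires_grad_(False)
opt = torch.optim.AdamW(denoiser.parameters(), lr=1e-4)

for step in range(100):
    x = torch.randn(8, 64)               # start from noise (a "latent video")
    for t in reversed(range(4)):         # short differentiable sampling chain
        x = x - 0.1 * denoiser(x, t)     # simplified denoising update
    loss = -reward(x)                    # maximize reward via its gradient
    opt.zero_grad()
    loss.backward()                      # gradients flow through sampling into the denoiser
    opt.step()
```

In practice, memory is the main constraint, so methods in this family typically backpropagate through only the last few denoising steps or use gradient checkpointing.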
Alternatives and similar repositories for VADER
Users interested in VADER are comparing it to the libraries listed below.
- Code repository for T2V-Turbo and T2V-Turbo-v2 ☆306 · Updated 9 months ago
- [NeurIPS 2024 Spotlight] The official implementation of the research paper "MotionBooth: Motion-Aware Customized Text-to-Video Generation" ☆138 · Updated last year
- [ICLR'25] MovieDreamer: Hierarchical Generation for Coherent Long Visual Sequences ☆318 · Updated last year
- [NeurIPS 2024] CV-VAE: A Compatible Video VAE for Latent Generative Video Models ☆284 · Updated 11 months ago
- [AAAI 2026] VisionReward: Fine-Grained Multi-Dimensional Human Preference Learning for Image and Video Generation ☆332 · Updated 7 months ago
- [CVPR 2025] Aesthetic Post-Training Diffusion Models from Generic Preferences with Step-by-step Preference Optimization ☆255 · Updated 7 months ago
- ConsistI2V: Enhancing Visual Consistency for Image-to-Video Generation [TMLR 2024] ☆254 · Updated last year
- [ICLR 2024] LLM-grounded Video Diffusion Models (LVD): official implementation for the LVD paper ☆161 · Updated last year
- Awesome diffusion Video-to-Video (V2V). A collection of papers on diffusion model-based video editing, aka. video-to-video (V2V) translati… ☆259 · Updated 5 months ago
- [ICLR 2025] OpenVid-1M: A Large-Scale High-Quality Dataset for Text-to-video Generation ☆364 · Updated 5 months ago
- [ICLR 2025] IterComp: Iterative Composition-Aware Feedback Learning from Model Gallery for Text-to-Image Generation ☆200 · Updated 8 months ago
- VARGPT-v1.1: Improve Visual Autoregressive Large Unified Model via Iterative Instruction Tuning and Reinforcement Learning ☆268 · Updated 7 months ago
- Official repo for "VideoScore: Building Automatic Metrics to Simulate Fine-grained Human Feedback for Video Generation" [EMNLP 2024] ☆104 · Updated 9 months ago
- GPT-IMAGE-EDIT-1.5M: A Million-Scale, GPT-Generated Image Dataset ☆233 · Updated 3 months ago
- [CVPR 2024] Code for the paper "Using Human Feedback to Fine-tune Diffusion Models without Any Reward Model" ☆242 · Updated last year
- Official inference code and LongText-Bench benchmark for our paper X-Omni (https://arxiv.org/pdf/2507.22058). ☆390 · Updated 2 months ago
- Official Pytorch Implementation for "VidToMe: Video Token Merging for Zero-Shot Video Editing" (CVPR 2024) ☆222 · Updated 9 months ago
- [CVPR 2025 Highlight] PAR: Parallelized Autoregressive Visual Generation. https://yuqingwang1029.github.io/PAR-project ☆178 · Updated 7 months ago
- [ICCV 2025] The code of our work "Golden Noise for Diffusion Models: A Learning Framework". ☆188 · Updated 3 months ago
- Official implementation of HPSv3: Towards Wide-Spectrum Human Preference Score (ICCV 2025) ☆215 · Updated 2 months ago
- [CVPR 2025] Official code of "DiTCtrl: Exploring Attention Control in Multi-Modal Diffusion Transformer for Tuning-Free Multi-Prompt Long… ☆307 · Updated 7 months ago
- ☆210 · Updated 9 months ago
- AlignProp uses direct reward backpropagation for the alignment of large-scale text-to-image diffusion models. Our method is 25x more samp… ☆302 · Updated last year
- [ICLR 2025] Autoregressive Video Generation without Vector Quantization ☆593 · Updated 2 weeks ago
- Implementation of "S^2-Guidance: Stochastic Self Guidance for Training-Free Enhancement of Diffusion Models" ☆142 · Updated last month
- Inference-time scaling of diffusion-based image and video generation models. ☆169 · Updated 4 months ago
- ☆121 · Updated 2 months ago
- [CVPR 2024] | LAMP: Learn a Motion Pattern for Few-Shot Based Video Generation ☆278 · Updated last year
- [ICLR 2024] Code for FreeNoise based on VideoCrafter ☆418 · Updated 2 months ago
- Code release for our NeurIPS 2024 Spotlight paper "GenArtist: Multimodal LLM as an Agent for Unified Image Generation and Editing" ☆156 · Updated last year