Yuanshi9815 / Video-Infinity
Video-Infinity generates long videos quickly using multiple GPUs without extra training.
☆181 · Updated 11 months ago
Alternatives and similar repositories for Video-Infinity
Users interested in Video-Infinity are comparing it to the repositories listed below.
- Code repository for T2V-Turbo and T2V-Turbo-v2 ☆302 · Updated 5 months ago
- [AAAI 2025] Official PyTorch implementation of "VideoElevator: Elevating Video Generation Quality with Versatile Text-to-Image Diffusion … ☆160 · Updated last year
- Pusa: Thousands Timesteps Video Diffusion Model ☆200 · Updated 3 weeks ago
- [ICLR'25] MovieDreamer: Hierarchical Generation for Coherent Long Visual Sequences ☆311 · Updated 11 months ago
- [ICLR 2025] FasterCache: Training-Free Video Diffusion Model Acceleration with High Quality ☆237 · Updated 6 months ago
- Live2Diff: A pipeline that processes live video streams with a uni-directional video diffusion model. ☆186 · Updated 11 months ago
- [ICLR 2025] IterComp: Iterative Composition-Aware Feedback Learning from Model Gallery for Text-to-Image Generation ☆192 · Updated 4 months ago
- InteractiveVideo: User-Centric Controllable Video Generation with Synergistic Multimodal Instructions ☆128 · Updated last year
- [AAAI 2025] Follow-Your-Canvas: This repo is the official implementation of "Follow-Your-Canvas: Higher-Resolution Video Outpainting with… ☆134 · Updated 8 months ago
- Finetuning and inference tools for the CogView4 and CogVideoX model series. ☆81 · Updated 2 months ago
- The official implementation of "RepVideo: Rethinking Cross-Layer Representation for Video Generation" ☆117 · Updated 5 months ago
- [CVPR 2025] Consistent and Controllable Image Animation with Motion Diffusion Models ☆283 · Updated last month
- [SIGGRAPH 2024] Motion I2V: Consistent and Controllable Image-to-Video Generation with Explicit Motion Modeling ☆171 · Updated 9 months ago
- Official Implementation: Training-Free Efficient Video Generation via Dynamic Token Carving ☆214 · Updated 2 weeks ago
- Paint by Inpaint: Learning to Add Image Objects by Removing Them First ☆107 · Updated last month
- This repository contains the code for the NeurIPS 2024 paper "SF-V: Single Forward Video Generation Model". ☆97 · Updated 7 months ago
- Video Diffusion Alignment via Reward Gradients. We improve a variety of video diffusion models such as VideoCrafter, OpenSora, ModelScope… ☆290 · Updated 4 months ago
- [NeurIPS 2024 Spotlight] The official implementation of the research paper "MotionBooth: Motion-Aware Customized Text-to-Video Generation" ☆134 · Updated 9 months ago
- [NeurIPS 2024] AsyncDiff: Parallelizing Diffusion Models by Asynchronous Denoising ☆202 · Updated 4 months ago
- [TOG 2024] StyleCrafter: Enhancing Stylized Text-to-Video Generation with Style Adapter ☆242 · Updated 3 months ago
- DCM: Dual-Expert Consistency Model for Efficient and High-Quality Video Generation ☆175 · Updated last month
- ☆167 · Updated 3 months ago
- [IJCAI 2025] Official implementation of the paper "MagicTailor: Component-Controllable Personalization in Text-to-Image Diffusion Models"… ☆87 · Updated 2 months ago
- Inference-time scaling of diffusion-based image and video generation models. ☆156 · Updated 2 weeks ago
- MoMA: Multimodal LLM Adapter for Fast Personalized Image Generation ☆232 · Updated last year
- Keyframe Interpolation with CogVideoX ☆136 · Updated 8 months ago
- ☆360 · Updated 8 months ago
- Awesome diffusion Video-to-Video (V2V). A collection of papers on diffusion model-based video editing, a.k.a. video-to-video (V2V) translati… ☆235 · Updated last month
- ☆85 · Updated 10 months ago
- [NeurIPS 2024] VideoTetris: Towards Compositional Text-To-Video Generation ☆222 · Updated 8 months ago