Live2Diff: a pipeline that processes live video streams with a uni-directional video diffusion model.
☆199 · Updated Jul 22, 2024
Alternatives and similar repositories for Live2Diff
Users interested in Live2Diff are comparing it to the repositories listed below.
- ☆385 · Updated Jun 6, 2024
- [ECCV 2024] MOFA-Video: Controllable Image Animation via Generative Motion Field Adaptions in Frozen Image-to-Video Diffusion Model. ☆762 · Updated Dec 5, 2024
- [CVPR 2025] Consistent and Controllable Image Animation with Motion Diffusion Models. ☆294 · Updated May 17, 2025
- Video-Infinity generates long videos quickly using multiple GPUs without extra training. ☆191 · Updated Aug 4, 2024
- Code repository for T2V-Turbo and T2V-Turbo-v2. ☆311 · Updated Jan 31, 2025
- [CVPR 2024] Make-It-Vivid: Dressing Your Animatable Biped Cartoon Characters from Text. ☆71 · Updated Jun 17, 2024
- High-Quality Human Motion Video Generation with Confidence-aware Pose Guidance. ☆2,530 · Updated Nov 18, 2025
- [NeurIPS 2024 Spotlight] Official implementation of the paper "MotionBooth: Motion-Aware Customized Text-to-Video Generation". ☆138 · Updated Oct 8, 2024
- Code for ID-Specific Video Customized Diffusion. ☆462 · Updated Feb 22, 2024
- Text-Guided Generation of Full-Body Image with Preserved Reference Face for Customized Animation. ☆24 · Updated Jun 24, 2024
- A one-stop library to standardize the inference and evaluation of all conditional video generation models. ☆51 · Updated Feb 13, 2025
- ☆17 · Updated Jul 30, 2024
- [ECCV 2024] AnyControl, a multi-control image synthesis model that supports any combination of user-provided control signals. ☆128 · Updated Jul 5, 2024
- [ICLR 2025] Official implementation of MotionClone: Training-Free Motion Cloning for Controllable Video Generation. ☆515 · Updated Jun 17, 2025
- [ICLR'25] MovieDreamer: Hierarchical Generation for Coherent Long Visual Sequences. ☆321 · Updated Aug 10, 2024
- [CVPR 2025] StreamingT2V: Consistent, Dynamic, and Extendable Long Video Generation from Text. ☆1,627 · Updated Mar 27, 2025
- [AAAI 2026] Minute-Long Videos with Dual Parallelisms. ☆45 · Updated Nov 12, 2025
- Diffree: Text-Guided Shape Free Object Inpainting with Diffusion Model. ☆240 · Updated May 5, 2025
- Official implementation of FIFO-Diffusion: Generating Infinite Videos from Text without Training (NeurIPS 2024). ☆481 · Updated Oct 18, 2024
- [CVPR 2024] Make-Your-Anchor: A Diffusion-based 2D Avatar Generation Framework. ☆357 · Updated Jan 28, 2025
- [NeurIPS D&B Track 2024] Official implementation of HumanVid. ☆346 · Updated Oct 14, 2025
- [ECCV 2024] FreeInit: Bridging Initialization Gap in Video Diffusion Models. ☆543 · Updated Jan 18, 2024
- [SIGGRAPH Asia 2024 TCS] AnimateLCM: Computation-Efficient Personalized Style Video Generation without Personalized Video Data. ☆659 · Updated Oct 22, 2024
- StyleShot: A SnapShot on Any Style. A model that transfers any style onto any content, generating high-quality personalized stylized images without per-image fine-tuning. ☆456 · Updated Jun 30, 2025
- [SIGGRAPH 2025] Official implementation of "Motion Inversion for Video Customization". ☆153 · Updated Oct 22, 2024
- [NeurIPS 2024] Invertible Consistency Distillation for Text-Guided Image Editing in Around 7 Steps. ☆101 · Updated Jul 4, 2024
- AnimateDiff I2V version. ☆185 · Updated Mar 1, 2024
- Code for the SCIS 2025 paper "UniAnimate: Taming Unified Video Diffusion Models for Consistent Human Image Animation". ☆1,187 · Updated Apr 15, 2025
- Implementation of AnimateDiff. ☆32 · Updated Jul 14, 2023
- [CVPR 2024] PIA, your Personalized Image Animator. Animate your images from a text prompt, combining with DreamBooth to achieve stunning videos… ☆979 · Updated Aug 5, 2024
- [ICLR 2024] Code for FreeNoise, based on VideoCrafter. ☆427 · Updated Aug 25, 2025
- This repository contains the code for the NeurIPS 2024 paper "SF-V: Single Forward Video Generation Model". ☆99 · Updated Nov 27, 2024
- [ACM MM 2024] MotionMaster: Training-Free Camera Motion Transfer for Video Generation. ☆99 · Updated Oct 15, 2024
- Code and data for "AnyV2V: A Tuning-Free Framework for Any Video-to-Video Editing Tasks" [TMLR 2024]. ☆647 · Updated Oct 29, 2024
- Bring portraits to life in real time! ONNX/TensorRT support. ☆1,060 · Updated Jun 29, 2025
- [IJCV] FoleyCrafter: Bring Silent Videos to Life with Lifelike and Synchronized Sounds. ☆642 · Updated Jul 26, 2024
- [ICLR 2025] Animate-X, a PyTorch implementation. ☆305 · Updated Jan 24, 2025
- [CVPR 2025 Highlight] X-Dyna: Expressive Dynamic Human Image Animation. ☆261 · Updated Jan 30, 2025
- [ICML 2024] MagicPose (also known as MagicDance): Realistic Human Poses and Facial Expressions Retargeting with Identity-aware Diffusion. ☆777 · Updated Jul 3, 2024