open-mmlab / Live2Diff
Live2Diff: A pipeline that processes live video streams with a uni-directional video diffusion model.
☆186 · Updated 11 months ago
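To make "uni-directional" concrete: in a uni-directional (causal) temporal attention, each frame attends only to itself and earlier frames, which is what lets denoising proceed frame by frame on a live stream. The sketch below is not Live2Diff's code or API; it is a minimal, self-contained PyTorch illustration of the idea, and all names in it (`causal_temporal_mask`, `causal_temporal_attention`) are invented for this example.

```python
# Minimal sketch of uni-directional (causal) temporal attention -- illustrative
# only, NOT Live2Diff's actual implementation. All names are invented here.
import torch


def causal_temporal_mask(num_frames: int) -> torch.Tensor:
    """Boolean mask: (i, j) is True iff frame i may attend to frame j <= i."""
    return torch.tril(torch.ones(num_frames, num_frames)).bool()


def causal_temporal_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    """Scaled dot-product attention over frames with a past-only mask.

    q, k, v: (num_frames, dim) -- one token per frame, single head, for clarity.
    """
    num_frames, dim = q.shape
    scores = (q @ k.T) / dim**0.5                      # (frames, frames) attention logits
    mask = causal_temporal_mask(num_frames)
    scores = scores.masked_fill(~mask, float("-inf"))  # block attention to future frames
    return torch.softmax(scores, dim=-1) @ v           # (frames, dim)


if __name__ == "__main__":
    frames, dim = 8, 64
    q, k, v = (torch.randn(frames, dim) for _ in range(3))
    out = causal_temporal_attention(q, k, v)
    print(out.shape)  # torch.Size([8, 64])
```

Because future frames never influence earlier ones under such a mask, keys and values for frames already seen can in principle be cached and reused as new frames arrive, which is the property that makes streaming inference feasible.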
Alternatives and similar repositories for Live2Diff
Users interested in Live2Diff are comparing it to the repositories listed below.
- [CVPR 2025] Consistent and Controllable Image Animation with Motion Diffusion Models ☆277 · Updated last month
- [SIGGRAPH 2024] Motion I2V: Consistent and Controllable Image-to-Video Generation with Explicit Motion Modeling ☆169 · Updated 8 months ago
- NeurIPS 2024 ☆384 · Updated 8 months ago
- [TOG 2024] StyleCrafter: Enhancing Stylized Text-to-Video Generation with Style Adapter ☆240 · Updated 2 months ago
- [AAAI 2025] Follow-Your-Canvas: official implementation of "Follow-Your-Canvas: Higher-Resolution Video Outpainting with… ☆131 · Updated 8 months ago
- UniPortrait: A Unified Framework for Identity-Preserving Single- and Multi-Human Image Personalization ☆256 · Updated last month
- InteractiveVideo: User-Centric Controllable Video Generation with Synergistic Multimodal Instructions ☆129 · Updated last year
- Keyframe Interpolation with CogVideoX ☆133 · Updated 7 months ago
- [ECCV 2024] Be-Your-Outpainter https://arxiv.org/abs/2403.13745 ☆243 · Updated 2 months ago
- Official implementation of ID-Aligner ☆121 · Updated last year
- Video-Infinity generates long videos quickly using multiple GPUs without extra training. ☆181 · Updated 10 months ago
- ☆271 · Updated 9 months ago
- MoMA: Multimodal LLM Adapter for Fast Personalized Image Generation ☆229 · Updated 11 months ago
- Personalize Anything for Free with Diffusion Transformer ☆331 · Updated 3 months ago
- Code repository for T2V-Turbo and T2V-Turbo-v2 ☆302 · Updated 4 months ago
- I2V-Adapter: A General Image-to-Video Adapter for Video Diffusion Models ☆205 · Updated last year
- Official repo for DiffArtist: Towards Structure and Appearance Controllable Image Stylization ☆121 · Updated 2 months ago
- Pusa: Thousands Timesteps Video Diffusion Model ☆170 · Updated 2 weeks ago
- The official implementation of "RepVideo: Rethinking Cross-Layer Representation for Video Generation" ☆117 · Updated 4 months ago
- Awesome diffusion Video-to-Video (V2V): a collection of papers on diffusion model-based video editing, aka video-to-video (V2V) translati… ☆228 · Updated 3 weeks ago
- This repository contains the code for the NeurIPS 2024 paper SF-V: Single Forward Video Generation Model. ☆97 · Updated 6 months ago
- [CVPR 2025] AnimateAnything ☆173 · Updated 2 weeks ago
- [AAAI 2025] Official PyTorch implementation of "VideoElevator: Elevating Video Generation Quality with Versatile Text-to-Image Diffusion … ☆158 · Updated last year
- DCM: Dual-Expert Consistency Model for Efficient and High-Quality Video Generation ☆155 · Updated 2 weeks ago
- Official code of "MakeAnything: Harnessing Diffusion Transformers for Multi-Domain Procedural Sequence Generation" ☆185 · Updated 2 months ago
- Official implementation of the ECCV paper "SwapAnything: Enabling Arbitrary Object Swapping in Personalized Visual Editing" ☆256 · Updated 8 months ago
- MoviiGen 1.1: Towards Cinematic-Quality Video Generative Models ☆146 · Updated this week
- [arXiv 2024] Edicho: Consistent Image Editing in the Wild ☆118 · Updated 5 months ago
- IP Adapter Instruct ☆205 · Updated 10 months ago
- All-round Creator and Editor ☆222 · Updated 5 months ago