lyogavin / train_your_own_sora
☆194 · Updated last year
Alternatives and similar repositories for train_your_own_sora
Users interested in train_your_own_sora are comparing it to the libraries listed below.
- Implementation of Lumiere, SOTA text-to-video generation from Google DeepMind, in PyTorch ☆280 · Updated last year
- Official implementation of the ECCV paper "SwapAnything: Enabling Arbitrary Object Swapping in Personalized Visual Editing" ☆265 · Updated last year
- [CVPR 2024] VCoder: Versatile Vision Encoders for Multimodal Large Language Models ☆277 · Updated last year
- InteractiveVideo: User-Centric Controllable Video Generation with Synergistic Multimodal Instructions ☆129 · Updated last year
- ☆206 · Updated last year
- Video-Infinity generates long videos quickly using multiple GPUs without extra training. ☆185 · Updated last year
- [CVPR 2024] Make Your Dream A Vlog ☆427 · Updated 5 months ago
- Official implementation of VideoDirectorGPT: Consistent Multi-scene Video Generation via LLM-Guided Planning (COLM 2024) ☆175 · Updated last year
- Live2Diff: a pipeline that processes live video streams with a uni-directional video diffusion model ☆195 · Updated last year
- Code repository for T2V-Turbo and T2V-Turbo-v2 ☆303 · Updated 9 months ago
- MoMA: Multimodal LLM Adapter for Fast Personalized Image Generation ☆234 · Updated last year
- [NeurIPS 2024] VidProM: A Million-scale Real Prompt-Gallery Dataset for Text-to-Video Diffusion Models ☆163 · Updated last year
- KandinskyVideo, a multilingual end-to-end text2video latent diffusion model ☆182 · Updated last year
- Implementation of DragDiffusion: Harnessing Diffusion Models for Interactive Point-based Image Editing ☆226 · Updated 2 years ago
- [SIGGRAPH Asia 2023] An interactive story visualization tool that supports multiple characters ☆264 · Updated last year
- An initiative to replicate Sora ☆104 · Updated last year
- An open-source community implementation of the model from the paper "Movie Gen: A Cast of Media Foundation Models". Join our community … ☆58 · Updated this week
- Retrieval-Augmented Video Generation for Telling a Story ☆258 · Updated last year
- Implementation of the premier text-to-video model from OpenAI ☆54 · Updated 11 months ago
- Code for instruction-tuning Stable Diffusion. ☆241 · Updated last year
- [TMLR 2023] Official implementation of UnIVAL: Unified Model for Image, Video, Audio and Language Tasks ☆231 · Updated last year
- [IJCV 2024] AutoStory: Generating Diverse Storytelling Images with Minimal Human Effort ☆151 · Updated 11 months ago
- Mini-DALLE3: Interactive Text to Image by Prompting Large Language Models ☆313 · Updated last year
- [ICLR 2024] Code for FreeNoise, based on VideoCrafter ☆419 · Updated 2 months ago
- ClickDiffusion: Harnessing LLMs for Interactive Precise Image Editing ☆69 · Updated last year
- PyTorch implementation of MIMO: Controllable Character Video Synthesis with Spatial Decomposed Modeling, from Alibaba Intelligence Group ☆136 · Updated last year
- Official PyTorch implementation for the paper "AnimateZero: Video Diffusion Models are Zero-Shot Image Animators" ☆351 · Updated last year
- Implementation of HyperDreamBooth: HyperNetworks for Fast Personalization of Text-to-Image Models ☆173 · Updated 2 years ago
- Faster parallel inference of the mochi-1 video generation model ☆125 · Updated 8 months ago
- Data release for the ImageInWords (IIW) paper. ☆220 · Updated 11 months ago