haofanwang / T2I-Adapter-for-Diffusers
Transfer the T2I-Adapter to any base model in diffusers 🔥
⭐ 136 · Updated 2 years ago
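Below is a minimal sketch of what plugging a T2I-Adapter into a base model looks like. It uses the upstream diffusers `T2IAdapter` / `StableDiffusionAdapterPipeline` API rather than this repository's own code, and the model IDs and file names are illustrative assumptions, not taken from the repo.

```python
# Minimal sketch: attach a depth T2I-Adapter to a Stable Diffusion base model.
# Assumes the standard diffusers API; this repo's wrapper may differ.
import torch
from diffusers import T2IAdapter, StableDiffusionAdapterPipeline
from diffusers.utils import load_image

# Load a pretrained depth adapter (illustrative checkpoint name).
adapter = T2IAdapter.from_pretrained(
    "TencentARC/t2iadapter_depth_sd15v2", torch_dtype=torch.float16
)

# Plug the adapter into any SD 1.5-compatible base model.
pipe = StableDiffusionAdapterPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    adapter=adapter,
    torch_dtype=torch.float16,
).to("cuda")

# "depth.png" is a placeholder for your conditioning image.
depth_map = load_image("depth.png")
image = pipe(
    "a cozy cabin in the woods",
    image=depth_map,
    num_inference_steps=30,
).images[0]
image.save("out.png")
```

Swapping in a different base checkpoint in `from_pretrained` is what "any base model" refers to here, as long as it matches the adapter's Stable Diffusion version.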
Alternatives and similar repositories for T2I-Adapter-for-Diffusers
Users interested in T2I-Adapter-for-Diffusers are comparing it to the libraries listed below.
- A simple extension of ControlNet for color conditioning ⭐ 89 · Updated last year
- Implementation of HyperDreamBooth: HyperNetworks for Fast Personalization of Text-to-Image Models ⭐ 173 · Updated 2 years ago
- AnimateDiff, I2V version ⭐ 185 · Updated last year
- Proof of concept for landmark control in diffusion models! ⭐ 89 · Updated 2 years ago
- Mixture of Diffusers for scene composition and high-resolution image generation ⭐ 446 · Updated 2 years ago
- ⭐ 183 · Updated 2 years ago
- Stable Diffusion-based image manipulation method with a sketch and reference image ⭐ 182 · Updated 2 years ago
- AnimationDiff with training ⭐ 122 · Updated last year
- Official Implementation of 'Inserting Anybody in Diffusion Models via Celeb Basis' ⭐ 254 · Updated 2 years ago
- ⭐ 90 · Updated last year
- Implementation of the IP-Adapter models for HF Diffusers ⭐ 177 · Updated 2 years ago
- Forked version of AnimateDiff that attempts to add init images. If you are looking for the original repo, please go to https://github.com/guoyww/a… ⭐ 152 · Updated 2 years ago
- ⭐ 117 · Updated 3 years ago
- ⭐ 71 · Updated 2 years ago
- Code for Shifted Diffusion for Text-to-image Generation (CVPR 2023) ⭐ 161 · Updated 2 years ago
- Implementation of the DiffusionOverDiffusion architecture presented in NUWA-XL, in the form of a ControlNet-like module on top of ModelScope text2… ⭐ 85 · Updated 2 years ago
- A diffusers-based implementation of HyperDreamBooth ⭐ 136 · Updated 2 years ago
- [CVPR 2023] Specialist Diffusion: Extremely Low-Shot Fine-Tuning of Large Diffusion Models ⭐ 38 · Updated 2 years ago
- Implementation of Encoder-based Domain Tuning for Fast Personalization of Text-to-Image Models ⭐ 323 · Updated 2 years ago
- Unofficial PyTorch implementation of StyleDrop: Text-to-Image Generation in Any Style ⭐ 223 · Updated 2 years ago
- [arXiv 2023] img2img version of Stable Diffusion: automatic line-art coloring, anime character remixing, and style transfer ⭐ 148 · Updated 4 months ago
- Shows how to train a ControlNet with your own control hint in the diffusers framework ⭐ 60 · Updated 2 years ago
- Official repository of the paper "Trajectory Consistency Distillation" ⭐ 355 · Updated last year
- ⭐ 114 · Updated 2 years ago
- ⭐ 108 · Updated 3 years ago
- Official implementation for "ControlVideo: Adding Conditional Control for One Shot Text-to-Video Editing" ⭐ 231 · Updated 2 years ago
- Official PyTorch implementation for "VideoControlNet: A Motion-Guided Video-to-Video Translation Framework by Using Diffusion Model with … ⭐ 119 · Updated 2 years ago
- Official implementation for "ConceptLab: Creative Generation using Diffusion Prior Constraints" ⭐ 253 · Updated last year
- Textual Inversion for DeepFloyd IF ⭐ 60 · Updated 2 years ago
- Implementation of DragDiffusion: Harnessing Diffusion Models for Interactive Point-based Image Editing ⭐ 226 · Updated 2 years ago