Vishnunkumar / clipcrop

Implementations of zero-shot capabilities with OpenAI's CLIP and computer vision models.
Related projects:
- Controlling diffusion-based image generation with just a few strokes
- A repository for exploring k-diffusion and diffusers, and for testing changes to those packages
- Cheap views of intermediate Stable Diffusion results
- Generate images from an initial frame and text
- Upscaling Karlo text-to-image generation using Stable Diffusion v2
- An attempt to finetune the OpenAI consistency decoder to work with SDXL
- Apply ControlNet to video clips
- Make-A-Video Latent Diffusion Model
- A versatile face encoder for zero-shot diffusion model personalization
- Text-guided generation of a full-body image with a preserved reference face, for customized animation
- A curve editor for Stable Diffusion prompt interpolation
- Guide diffusion on ImageBind embedding similarity
- Implementation of "SCEdit: Efficient and Controllable Image Diffusion Generation via Skip Connection Editing"
- An attempt at an SVD inpainting pipeline
- Implementation of Grounding DINO and Segment Anything; supports masking from a text prompt, which is useful for programmatic inpainting
- Jupyter/Colab implementation of Stable Diffusion using the k_lms sampler, CPU draw, manual seeding, and a quantize.py fix