hysts / CogView2_demo
Unofficial demo app for CogView2
☆15 · Updated 2 years ago
Alternatives and similar repositories for CogView2_demo:
Users interested in CogView2_demo are comparing it to the libraries listed below
- Implementations of zero-shot capabilities with OpenAI's CLIP and computer vision models ☆32 · Updated 5 months ago
- ☆58 · Updated 2 years ago
- openai guided diffusion tweaks ☆52 · Updated 2 years ago
- Official PyTorch implementation of StyleGAN3 ☆25 · Updated 3 years ago
- Make-A-Video Latent Diffusion Model ☆18 · Updated last year
- cheap views of intermediate Stable Diffusion results ☆45 · Updated 2 years ago
- ☆30 · Updated 2 years ago
- checkpoints for glide finetuned on laion and other datasets. wip. ☆50 · Updated 2 years ago
- Unified API to facilitate usage of pre-trained "perceptor" models, a la CLIP ☆39 · Updated 2 years ago
- Script and models for clustering LAION-400m CLIP embeddings. ☆25 · Updated 3 years ago
- scripts for running and training imagen-pytorch ☆38 · Updated 2 years ago
- A Real-ESRGAN equipped Colab notebook for CLIP Guided Diffusion ☆83 · Updated 2 years ago
- ☆18 · Updated 2 years ago
- ☆25 · Updated 2 years ago
- Code for the IJCAI 2021 paper "Audio2Head: Audio-driven One-shot Talking-head Generation with Natural Head Motion" ☆8 · Updated 3 years ago
- Experimental CartoonGAN (Chen et al.) implementation for quicker background generation for posters and new episodes ☆48 · Updated 2 years ago
- Repo for our ECCV 2022 paper "Paint2Pix: Interactive Painting based Progressive Image Synthesis and Editing" ☆121 · Updated 2 years ago
- code for CLIPDraw ☆131 · Updated 2 years ago
- Colab notebook for openai/glide-text2im. ☆21 · Updated last year
- Generate images from text. In Russian ☆19 · Updated 3 years ago
- A curve-editor for Stable Diffusion prompt interpolation ☆21 · Updated 2 years ago
- Training simple models to predict CLIP image embeddings from text embeddings, and vice versa. ☆60 · Updated 2 years ago
- Jupyter Notebooks for experimenting with negative prompting with Stable Diffusion 2.0. ☆88 · Updated 2 years ago
- Implementation of Transframer, DeepMind's U-net + Transformer architecture for up to 30 seconds of video generation, in PyTorch ☆69 · Updated 2 years ago
- 🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch ☆50 · Updated last month
- Cross-platform, customizable ML solutions for live and streaming media. ☆24 · Updated 3 years ago
- Search an image dataset with a text prompt using OpenAI's CLIP neural network. ☆33 · Updated 3 years ago
- Repository with which to explore k-diffusion and diffusers, and within which changes to said packages may be tested. ☆54 · Updated last year
- Optimized library for large-scale extraction of frames and audio from video. ☆202 · Updated last year
- extending stable diffusion prompts with suitable style cues using text generation ☆176 · Updated 2 years ago