HFAiLab / clip-gen
CLIP-GEN: Language-Free Training of a Text-to-Image Generator with CLIP
☆134 · Updated 2 years ago
Alternatives and similar repositories for clip-gen:
Users interested in clip-gen are comparing it to the repositories listed below.
- ☆92 · Updated last year
- Code for the paper "LAFITE: Towards Language-Free Training for Text-to-Image Generation" (CVPR 2022) ☆182 · Updated last year
- Official PyTorch implementation of "Synthesizing Coherent Story with Auto-Regressive Latent Diffusion Models" ☆195 · Updated last year
- This repository contains the code for the CVPR 2023 paper "SINE: SINgle Image Editing with Text-to-Image Diffusion Models" ☆184 · Updated last year
- Research code for the paper "Frido: Feature Pyramid Diffusion for Complex Scene Image Synthesis" ☆113 · Updated 2 months ago
- An in-context conditioning version of MUSE with pre-trained checkpoints ☆111 · Updated last year
- ☆47 · Updated 9 months ago
- ACM MM'23 (oral): SUR-adapter for pre-trained diffusion models can acquire the powerful semantic understanding and reasoning capabilities… ☆118 · Updated 8 months ago
- Simple script to compute CLIP-based scores given a DALL-E trained model (see the CLIP-score sketch after this list) ☆30 · Updated 3 years ago
- ☆97 · Updated 8 months ago
- Code for CLIPDraw ☆130 · Updated 2 years ago
- [NeurIPS 2022] (Amortized) distributional control for pre-trained generative models ☆119 · Updated last year
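
Several of the entries above (clip-gen itself, LAFITE, and the CLIP-score script) use CLIP to measure how well a generated image matches its text prompt. As a rough illustration of that metric, here is a minimal sketch of computing a CLIP image-text similarity with the Hugging Face transformers CLIP API; the checkpoint name, file path, and `clip_score` helper are illustrative assumptions, not code taken from any of the repositories listed above.

```python
# Minimal sketch: CLIP-based image-text similarity ("CLIP score").
# Assumes the `transformers`, `torch`, and `Pillow` packages are installed;
# the checkpoint name and file paths are placeholders, not from the repos above.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model.eval()

def clip_score(image_path: str, prompt: str) -> float:
    """Cosine similarity between the CLIP image and text embeddings."""
    image = Image.open(image_path).convert("RGB")
    inputs = processor(text=[prompt], images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)
        # Normalize the projected embeddings, then take their dot product.
        img_emb = outputs.image_embeds / outputs.image_embeds.norm(dim=-1, keepdim=True)
        txt_emb = outputs.text_embeds / outputs.text_embeds.norm(dim=-1, keepdim=True)
    return (img_emb * txt_emb).sum(dim=-1).item()

# Example usage (hypothetical file):
# print(clip_score("generated_sample.png", "a photo of a corgi on a skateboard"))
```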