KunpengSong / MoMA-inactive
[inactive] MoMA: Multimodal LLM Adapter for Fast Personalized Image Generation
☆13 · Updated 9 months ago
Alternatives and similar repositories for MoMA-inactive:
Users interested in MoMA-inactive are comparing it to the libraries listed below.
- Diffusers implementation of Controlling Text-to-Image Diffusion by Orthogonal Finetuning ☆35 · Updated last year
- ControlAnimate library ☆48 · Updated last year
- Omegance: A Single Parameter for Various Granularities in Diffusion-Based Synthesis (arXiv, 2024) ☆43 · Updated 2 months ago
- Unofficial implementation of a Stable Diffusion model trained with AI-feedback-based self-training direct preference optimization ☆60 · Updated 11 months ago
- ☆125 · Updated 4 months ago
- ☆20 · Updated last year
- ComfyUI node for FlashFace ☆66 · Updated 8 months ago
- Fine-Grained Subject-Specific Attribute Expression Control in T2I Models ☆113 · Updated 8 months ago
- Fine-tune your Florence-2 model easily ☆20 · Updated 6 months ago
- ☆43 · Updated last month
- A detailed diagram laying out the full Flux.1 architecture, as shared by Black Forest Labs at https://github.com/black-forest-labs/flux ☆43 · Updated 4 months ago
- ☆90 · Updated last year
- ☆62 · Updated 7 months ago
- Various training scripts used to train bigasp ☆76 · Updated 3 months ago
- ☆13 · Updated 4 months ago
- ☆54 · Updated last year
- ☆43 · Updated last year
- ☆51 · Updated 7 months ago
- Code release: https://github.com/google/RB-Modulation ☆125 · Updated 5 months ago
- Gradio UI for training video models using finetrainers ☆22 · Updated 2 weeks ago
- Motion module fine-tuner for AnimateDiff ☆79 · Updated last year
- Custom LoRA training on DynamiCrafter ☆18 · Updated 6 months ago
- IP Adapter Instruct ☆197 · Updated 6 months ago
- A retrain of AnimateDiff to be conditional on an init image ☆34 · Updated last year
- Scripts for use with Long-CLIP, including fine-tuning Long-CLIP ☆55 · Updated 3 months ago
- Explore how Flux Dev responds when you change the strengths of layers in the model ☆19 · Updated 5 months ago
- ☆26 · Updated 11 months ago