atfortes / Awesome-Controllable-Diffusion
Papers and resources on controllable generation with diffusion models, including ControlNet, DreamBooth, and IP-Adapter.
☆482 · Updated last month
Alternatives and similar repositories for Awesome-Controllable-Diffusion
Users interested in Awesome-Controllable-Diffusion are comparing it to the libraries listed below.
- 🔥🔥🔥 A curated list of papers on LLM-based multimodal generation (image, video, 3D, and audio). ☆495 · Updated 4 months ago
- Official implementation of SEED-LLaMA (ICLR 2024). ☆619 · Updated 10 months ago
- 🐟 Code and models for the NeurIPS 2023 paper "Generating Images with Multimodal Language Models". ☆459 · Updated last year
- [ICLR 2024 Spotlight] DreamLLM: Synergistic Multimodal Comprehension and Creation ☆453 · Updated 8 months ago
- 📖 A repository for organizing papers, code, and other resources related to unified multimodal models. ☆646 · Updated last week
- A list of works on evaluation of visual generation models, including evaluation metrics, models, and systems. ☆334 · Updated last week
- LaVIT: Empower the Large Language Model to Understand and Generate Visual Content ☆585 · Updated 10 months ago
- LLM-grounded Diffusion: Enhancing Prompt Understanding of Text-to-Image Diffusion Models with Large Language Models ☆476 · Updated 11 months ago
- (CVPR 2024) A benchmark for evaluating multimodal LLMs using multiple-choice questions. ☆346 · Updated 6 months ago
- Aligning LMMs with Factually Augmented RLHF ☆370 · Updated last year
- Research Trends in LLM-guided Multimodal Learning. ☆357 · Updated last year
- Official code for the paper "Mantis: Multi-Image Instruction Tuning" [TMLR 2024] ☆223 · Updated 4 months ago
- Awesome_Multimodel is a curated GitHub repository that provides a comprehensive collection of resources for Multimodal Large Language Mod… ☆339 · Updated 4 months ago
- Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing imag… ☆533 · Updated last year
- Multimodal Models in the Real World ☆532 · Updated 5 months ago
- A reading list for video generation ☆607 · Updated 2 weeks ago
- [TMLR 2025 🔥] A survey of autoregressive models in vision. ☆665 · Updated last week
- PyTorch implementation of InstructDiffusion, a unifying and generic framework for aligning computer vision tasks with human instructions. ☆435 · Updated last year
- Recent LLM-based CV and related works. Welcome to comment/contribute! ☆870 · Updated 5 months ago
- ☆344 · Updated last year
- A collection of resources on controllable generation with text-to-image diffusion models. ☆1,064 · Updated 7 months ago
- ☆621 · Updated last year
- A list of Text-to-Video and Image-to-Video works ☆241 · Updated 2 months ago
- [CVPR 2024] Intelligent Grimm: Open-ended Visual Storytelling via Latent Diffusion Models ☆250 · Updated 8 months ago
- Diffusion Model-Based Image Editing: A Survey (TPAMI 2025) ☆649 · Updated 3 weeks ago
- VisionLLaMA: A Unified LLaMA Backbone for Vision Tasks ☆386 · Updated last year
- [CVPR 2024] Panda-70M: Captioning 70M Videos with Multiple Cross-Modality Teachers ☆613 · Updated 9 months ago
- [NeurIPS 2023] Official implementation of "Cheap and Quick: Efficient Vision-Language Instruction Tuning for Large Language Models" ☆521 · Updated last year
- MMICL, a state-of-the-art VLM with in-context learning (ICL) ability, from PKU ☆352 · Updated last year
- Awesome Unified Multimodal Models ☆513 · Updated last month