atfortes / Awesome-Controllable-Diffusion
Papers and resources on Controllable Generation using Diffusion Models, including ControlNet, DreamBooth, and IP-Adapter.
☆467 · Updated last month
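To give a flavor of the kind of controllable generation these resources cover, below is a minimal sketch using Hugging Face diffusers' ControlNet pipeline. This is an illustrative example, not code from this repository; it assumes diffusers and torch are installed, uses publicly hosted checkpoints, and the conditioning image path is hypothetical.

```python
# Minimal ControlNet sketch (illustrative; not from this repository).
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Load a ControlNet trained on Canny edge maps and attach it to a
# Stable Diffusion 1.5 pipeline.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The edge map spatially constrains the generated image.
edges = load_image("canny_edges.png")  # hypothetical conditioning image
image = pipe(
    "a futuristic city at dusk",
    image=edges,
    num_inference_steps=30,
).images[0]
image.save("controlled_output.png")
```

The same pipeline pattern extends to other conditioning signals (depth, pose, segmentation) by swapping in the corresponding ControlNet checkpoint.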
Alternatives and similar repositories for Awesome-Controllable-Diffusion
Users that are interested in Awesome-Controllable-Diffusion are comparing it to the libraries listed below
- [ICLR 2024 Spotlight] DreamLLM: Synergistic Multimodal Comprehension and Creation ☆443 · Updated 6 months ago
- 🔥🔥🔥 A curated list of papers on LLM-based multimodal generation (image, video, 3D, and audio). ☆477 · Updated 2 months ago
- Research Trends in LLM-guided Multimodal Learning. ☆357 · Updated last year
- Official implementation of SEED-LLaMA (ICLR 2024). ☆612 · Updated 8 months ago
- LaVIT: Empower the Large Language Model to Understand and Generate Visual Content ☆580 · Updated 7 months ago
- 🐟 Code and models for the NeurIPS 2023 paper "Generating Images with Multimodal Language Models". ☆455 · Updated last year
- Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing imag… ☆527 · Updated last year
- (CVPR 2024) A benchmark for evaluating Multimodal LLMs using multiple-choice questions. ☆342 · Updated 4 months ago
- PyTorch implementation of InstructDiffusion, a unifying and generic framework for aligning computer vision tasks with human instructions. ☆429 · Updated last year
- Aligning LMMs with Factually Augmented RLHF ☆363 · Updated last year
- A list of works on the evaluation of visual generation models, including evaluation metrics, models, and systems ☆302 · Updated last week
- 📖 A repository for organizing papers, code, and other resources related to unified multimodal models. ☆557 · Updated last month
- Official PyTorch implementation of the paper "In-Context Learning Unlocked for Diffusion Models" ☆406 · Updated last year
- A list of Text-to-Video and Image-to-Video works ☆238 · Updated this week
- ☆334 · Updated last year
- A collection of resources on controllable generation with text-to-image diffusion models. ☆1,046 · Updated 5 months ago
- MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities (ICML 2024) ☆302 · Updated 4 months ago
- ☆194 · Updated 11 months ago
- [NeurIPS'24 Spotlight] Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought Reasoning ☆323 · Updated 5 months ago
- Multimodal Models in the Real World ☆510 · Updated 3 months ago
- A reading list on video generation ☆578 · Updated last week
- [NeurIPS 2023 Datasets and Benchmarks Track] LAMM: Multi-Modal Large Language Models and Applications as AI Agents ☆314 · Updated last year
- [Survey] Next Token Prediction Towards Multimodal Intelligence: A Comprehensive Survey ☆443 · Updated 4 months ago
- MMICL: a state-of-the-art VLM with in-context learning (ICL) ability, from PKU ☆350 · Updated last year
- VisionLLaMA: A Unified LLaMA Backbone for Vision Tasks ☆384 · Updated 10 months ago
- LLM-grounded Diffusion: Enhancing Prompt Understanding of Text-to-Image Diffusion Models with Large Language Models ☆473 · Updated 8 months ago
- Official code of SmartEdit [CVPR 2024 Highlight] ☆341 · Updated 11 months ago
- A survey of multimodal learning research. ☆328 · Updated last year
- [ICLR'24] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning ☆280 · Updated last year
- This repo lists relevant papers summarized in our survey paper: A Systematic Survey of Prompt Engineering on Vision-Language Foundation Models ☆466 · Updated 2 months ago