atfortes / Awesome-Controllable-Diffusion
Papers and resources on Controllable Generation using Diffusion Models, including ControlNet, DreamBooth, and IP-Adapter.
★486 · Updated 2 months ago
Alternatives and similar repositories for Awesome-Controllable-Diffusion
Users interested in Awesome-Controllable-Diffusion are comparing it to the repositories listed below.
- π₯π₯π₯ A curated list of papers on LLMs-based multimodal generation (image, video, 3D and audio).β507Updated 5 months ago
- Official implementation of SEED-LLaMA (ICLR 2024).β621Updated 11 months ago
- [ICLR 2024 Spotlight] DreamLLM: Synergistic Multimodal Comprehension and Creationβ456Updated 9 months ago
- π This is a repository for organizing papers, codes and other resources related to unified multimodal models.β681Updated last month
- π Code and models for the NeurIPS 2023 paper "Generating Images with Multimodal Language Models".β464Updated last year
- LaVIT: Empower the Large Language Model to Understand and Generate Visual Contentβ590Updated 11 months ago
- Research Trends in LLM-guided Multimodal Learning.β355Updated last year
- A list of works on evaluation of visual generation models, including evaluation metrics, models, and systemsβ358Updated this week
- Aligning LMMs with Factually Augmented RLHFβ375Updated last year
- Awesome_Multimodel is a curated GitHub repository that provides a comprehensive collection of resources for Multimodal Large Language Modβ¦β341Updated 5 months ago
- (CVPR2024)A benchmark for evaluating Multimodal LLMs using multiple-choice questions.β348Updated 8 months ago
- LLM-grounded Diffusion: Enhancing Prompt Understanding of Text-to-Image Diffusion Models with Large Language Models (LLM-grounded Diffusiβ¦β476Updated last year
- β350Updated last year
- Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing imagβ¦β535Updated last year
- Official PyTorch implementation of the paper "In-Context Learning Unlocked for Diffusion Models"β412Updated last year
- A collection of resources on controllable generation with text-to-image diffusion models.β1,075Updated 8 months ago
- Multimodal Models in Real Worldβ540Updated 6 months ago
- A reading list of video generationβ614Updated this week
- β¨β¨Woodpecker: Hallucination Correction for Multimodal Large Language Modelsβ639Updated 8 months ago
- [Survey] Next Token Prediction Towards Multimodal Intelligence: A Comprehensive Surveyβ447Updated 7 months ago
- Recent LLM-based CV and related works. Welcome to comment/contribute!β872Updated 6 months ago
- [CVPR 2024] Panda-70M: Captioning 70M Videos with Multiple Cross-Modality Teachersβ621Updated 10 months ago
- VisionLLaMA: A Unified LLaMA Backbone for Vision Tasksβ389Updated last year
- MMICL, a state-of-the-art VLM with the in context learning ability from ICL, PKUβ354Updated last year
- Official code for Paper "Mantis: Multi-Image Instruction Tuning" [TMLR 2024]β227Updated 5 months ago
- β211Updated last year
- This repo contains evaluation code for the paper "MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Eβ¦β491Updated 3 months ago
- MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities (ICML 2024)β307Updated 7 months ago
- Codes for Visual Sketchpad: Sketching as a Visual Chain of Thought for Multimodal Language Modelsβ255Updated last month
- This repo lists relevant papers summarized in our survey paper: A Systematic Survey of Prompt Engineering on Vision-Language Foundation β¦β482Updated 5 months ago