showlab / Awesome-Unified-Multimodal-Models
This is a repository for organizing papers, code, and other resources related to unified multimodal models.
☆328 · Updated 3 weeks ago
Alternatives and similar repositories for Awesome-Unified-Multimodal-Models:
Users interested in Awesome-Unified-Multimodal-Models are comparing it to the libraries listed below.
- 🔥 Official implementation of "TokenFlow: Unified Image Tokenizer for Multimodal Understanding and Generation" · ☆222 · Updated 2 weeks ago
- A collection of papers on autoregressive models in vision · ☆368 · Updated this week
- Next Token Prediction Towards Multimodal Intelligence: A Comprehensive Survey · ☆274 · Updated this week
- [NeurIPS'24 Spotlight] EVE: Encoder-Free Vision-Language Models · ☆261 · Updated 3 months ago
- VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation · ☆199 · Updated this week
- ✨✨ Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis · ☆442 · Updated last month
- 🔥🔥🔥 A curated list of papers on LLM-based multimodal generation (image, video, 3D, and audio) · ☆406 · Updated 3 weeks ago
- LaVIT: Empower the Large Language Model to Understand and Generate Visual Content · ☆550 · Updated 3 months ago
- [ICLR 2024 Spotlight] DreamLLM: Synergistic Multimodal Comprehension and Creation · ☆409 · Updated last month
- OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text · ☆302 · Updated 2 months ago
- [ECCV 2024] ShareGPT4V: Improving Large Multi-modal Models with Better Captions · ☆187 · Updated 6 months ago
- Long Context Transfer from Language to Vision