MM-Interleaved: Interleaved Image-Text Generative Modeling via Multi-modal Feature Synchronizer
☆252 · Apr 3, 2024 · Updated 2 years ago
Alternatives and similar repositories for MM-Interleaved
Users who are interested in MM-Interleaved are comparing it to the libraries listed below.
- Official implementation of SEED-LLaMA (ICLR 2024). ☆641 · Sep 21, 2024 · Updated last year
- LaVIT: Empower the Large Language Model to Understand and Generate Visual Content ☆605 · Oct 6, 2024 · Updated last year
- 🐟 Code and models for the NeurIPS 2023 paper "Generating Images with Multimodal Language Models". ☆471 · Jan 19, 2024 · Updated 2 years ago
- Emu Series: Generative Multimodal Models from BAAI ☆1,772 · Jan 12, 2026 · Updated 3 months ago
- [ICLR 2024 Spotlight] DreamLLM: Synergistic Multimodal Comprehension and Creation ☆460 · Dec 2, 2024 · Updated last year
- [ICLR 2025 Spotlight] OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text ☆415 · May 5, 2025 · Updated 11 months ago
- Official implementation of the paper "MiniGPT-5: Interleaved Vision-and-Language Generation via Generative Vokens" ☆865 · May 8, 2025 · Updated 11 months ago
- [ICLR 2024 & ECCV 2024] The All-Seeing Projects: Towards Panoptic Visual Recognition & Understanding and General Relation Comprehension of … ☆506 · Aug 9, 2024 · Updated last year
- Anole: An Open, Autoregressive, and Native Multimodal Model for Interleaved Image-Text Generation ☆833 · Jun 16, 2025 · Updated 9 months ago
- InternLM-XComposer2.5-OmniLive: A Comprehensive Multimodal System for Long-term Streaming Video and Audio Interactions ☆2,923 · May 26, 2025 · Updated 10 months ago
- VL-GPT: A Generative Pre-trained Transformer for Vision and Language Understanding and Generation ☆86 · Sep 12, 2024 · Updated last year
- [ICLR 2025] LLaVA-HR: High-Resolution Large Language-Vision Assistant ☆248 · Aug 14, 2024 · Updated last year
- ☆134 · Dec 22, 2023 · Updated 2 years ago
- MMICL, a state-of-the-art VLM with in-context learning (ICL) ability, from PKU ☆360 · Dec 18, 2023 · Updated 2 years ago
- Code/Data for the paper "LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding" ☆269 · Jun 12, 2024 · Updated last year
- [CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o (an open-source multimodal dialogue model approaching GPT-4o performance) ☆9,949 · Sep 22, 2025 · Updated 6 months ago
- Learning 1D Causal Visual Representation with De-focus Attention Networks ☆35 · Jun 7, 2024 · Updated last year
- Cambrian-1 is a family of multimodal LLMs with a vision-centric design. ☆1,992 · Nov 7, 2025 · Updated 5 months ago
- ☆360 · Jan 27, 2024 · Updated 2 years ago
- DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception ☆159 · Dec 6, 2024 · Updated last year
- Official repo for "Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models" ☆3,328 · May 4, 2024 · Updated last year
- Open-source evaluation toolkit for large multi-modality models (LMMs), supporting 220+ LMMs and 80+ benchmarks ☆4,013 · Updated this week
- (CVPR 2024) A benchmark for evaluating multimodal LLMs using multiple-choice questions ☆363 · Jan 14, 2025 · Updated last year
- [CVPR 2025 Highlight] Official repository for the CoMM Dataset ☆52 · Dec 31, 2024 · Updated last year
- [ICLR & NeurIPS 2025] Repository for the Show-o series: One Single Transformer to Unify Multimodal Understanding and Generation ☆1,909 · Jan 8, 2026 · Updated 3 months ago
- Autoregressive Model Beats Diffusion: 🦙 Llama for Scalable Image Generation ☆1,941 · Aug 15, 2024 · Updated last year
- Aligning LMMs with Factually Augmented RLHF ☆394 · Nov 1, 2023 · Updated 2 years ago
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ☆767 · Feb 1, 2024 · Updated 2 years ago
- Repository for Meta Chameleon, a mixed-modal early-fusion foundation model from FAIR ☆2,094 · Jul 29, 2024 · Updated last year
- EVE Series: Encoder-Free Vision-Language Models from BAAI ☆368 · Jul 24, 2025 · Updated 8 months ago
- Multimodal Models in Real World ☆557 · Feb 24, 2025 · Updated last year
- A family of lightweight multimodal models ☆1,053 · Nov 18, 2024 · Updated last year
- [CVPR 2024] CapsFusion: Rethinking Image-Text Data at Scale ☆214 · Feb 27, 2024 · Updated 2 years ago
- MultimodalC4 is a multimodal extension of C4 that interleaves millions of images with text. ☆954 · Mar 19, 2025 · Updated last year
- [SCIS 2024] The official implementation of the paper "MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Di… ☆62 · Nov 7, 2024 · Updated last year
- [CVPR 2024] Panda-70M: Captioning 70M Videos with Multiple Cross-Modality Teachers ☆684 · Oct 25, 2024 · Updated last year
- LLaVA-UHD v3: Progressive Visual Compression for Efficient Native-Resolution Encoding in MLLMs ☆419 · Dec 20, 2025 · Updated 3 months ago
- ☆402 · Dec 12, 2024 · Updated last year
- [ICLR 2025] VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation ☆421 · Apr 25, 2025 · Updated 11 months ago