dvlab-research / MGM
Official repo for "Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models"
☆3,326 · Updated last year
Alternatives and similar repositories for MGM
Users interested in MGM are comparing it to the libraries listed below.
- 【TMM 2025🔥】Mixture-of-Experts for Large Vision-Language Models · ☆2,270 · Updated 3 months ago
- MiniSora: a community exploring the implementation path and future development direction of Sora. · ☆1,267 · Updated 8 months ago
- 【EMNLP 2024🔥】Video-LLaVA: Learning United Visual Representation by Alignment Before Projection · ☆3,393 · Updated 11 months ago
- Mora: More like Sora for Generalist Video Generation · ☆1,578 · Updated last year
- Cambrian-1 is a family of multimodal LLMs with a vision-centric design. · ☆1,964 · Updated last week
- GPT4V-level open-source multi-modal model based on Llama3-8B · ☆2,420 · Updated 8 months ago
- ☆4,378 · Updated 2 months ago
- [TMLR 2025] Latte: Latent Diffusion Transformer for Video Generation. · ☆1,883 · Updated 2 weeks ago
- Repository for Meta Chameleon, a mixed-modal early-fusion foundation model from FAIR. · ☆2,065 · Updated last year
- InternLM-XComposer2.5-OmniLive: A Comprehensive Multimodal System for Long-term Streaming Video and Audio Interactions · ☆2,900 · Updated 5 months ago
- A Next-Generation Training Engine Built for Ultra-Large MoE Models · ☆4,969 · Updated this week
- Large World Model -- Modeling Text and Video with Millions Context · ☆7,365 · Updated last year
- Official repo for VGen: a holistic video generation ecosystem for video generation building on diffusion models · ☆3,145 · Updated 10 months ago
- Emu Series: Generative Multimodal Models from BAAI · ☆1,754 · Updated last year
- [ICML 2024] Mastering Text-to-Image Diffusion: Recaptioning, Planning, and Generating with Multimodal LLMs (RPG) · ☆1,828 · Updated 9 months ago
- VILA is a family of state-of-the-art vision language models (VLMs) for diverse multimodal AI tasks across the edge, data center, and clou… · ☆3,644 · Updated 3 weeks ago
- Official repository of the aiXcoder-7B Code Large Language Model · ☆2,275 · Updated 4 months ago
- A state-of-the-art-level open visual language model | multimodal pretrained model · ☆6,689 · Updated last year
- Lumina-T2X is a unified framework for Text to Any Modality Generation · ☆2,235 · Updated 8 months ago
- PyTorch code and models for V-JEPA self-supervised learning from video. · ☆3,258 · Updated 8 months ago
- A family of lightweight multimodal models. · ☆1,046 · Updated 11 months ago
- ☆1,840 · Updated last year
- TinyGPT-V: Efficient Multimodal Large Language Model via Small Backbones · ☆1,303 · Updated last year
- DeepSeek-VL: Towards Real-World Vision-Language Understanding · ☆4,006 · Updated last year
- 🔥🔥 LLaVA++: Extending LLaVA with Phi-3 and LLaMA-3 (LLaVA LLaMA-3, LLaVA Phi-3) · ☆843 · Updated 3 months ago
- Next-Token Prediction is All You Need · ☆2,251 · Updated 7 months ago
- Reaching LLaMA2 Performance with 0.1M Dollars · ☆987 · Updated last year
- The official PyTorch implementation of Google's Gemma models · ☆5,570 · Updated 5 months ago
- Code and models for the ICML 2024 paper NExT-GPT: Any-to-Any Multimodal Large Language Model · ☆3,579 · Updated 6 months ago
- This project aims to reproduce Sora (OpenAI's T2V model); we hope the open-source community will contribute to this project. · ☆12,068 · Updated 2 weeks ago