JIA-Lab-research / MGM
Official repo for "Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models"
★3,334 · Updated last year
Alternatives and similar repositories for MGM
Users interested in MGM are comparing it to the repositories listed below.
- 【TMM 2025 🔥】Mixture-of-Experts for Large Vision-Language Models ★2,300 · Updated 6 months ago
- GPT4V-level open-source multi-modal model based on Llama3-8B ★2,431 · Updated 11 months ago
- Cambrian-1 is a family of multimodal LLMs with a vision-centric design. ★1,985 · Updated 3 months ago
- 【EMNLP 2024 🔥】Video-LLaVA: Learning United Visual Representation by Alignment Before Projection ★3,448 · Updated last year
- MiniSora: A community that aims to explore the implementation path and future development direction of Sora. ★1,280 · Updated 11 months ago
- InternLM-XComposer2.5-OmniLive: A Comprehensive Multimodal System for Long-term Streaming Video and Audio Interactions ★2,921 · Updated 8 months ago
- Mora: More like Sora for Generalist Video Generation ★1,584 · Updated last year
- ★4,552 · Updated 4 months ago
- Emu Series: Generative Multimodal Models from BAAI ★1,764 · Updated 3 weeks ago
- ★1,841 · Updated last year
- Official repo for VGen: a holistic video generation ecosystem for video generation building on diffusion models ★3,153 · Updated last year
- A Next-Generation Training Engine Built for Ultra-Large MoE Models ★5,082 · Updated this week
- The official repo of Qwen-VL (通义千问-VL), the chat & pretrained large vision language model proposed by Alibaba Cloud. ★6,524 · Updated last year
- VILA is a family of state-of-the-art vision language models (VLMs) for diverse multimodal AI tasks across the edge, data center, and clou… ★3,737 · Updated 2 months ago
- A state-of-the-art open visual language model | multimodal pretrained model ★6,724 · Updated last year
- [ICML 2024] Mastering Text-to-Image Diffusion: Recaptioning, Planning, and Generating with Multimodal LLMs (RPG) ★1,843 · Updated last year
- Next-Token Prediction is All You Need ★2,339 · Updated 3 weeks ago
- Code and models for the ICML 2024 paper, NExT-GPT: Any-to-Any Multimodal Large Language Model ★3,615 · Updated 8 months ago
- Lumina-T2X is a unified framework for Text to Any Modality Generation ★2,251 · Updated 11 months ago
- [TMLR 2025] Latte: Latent Diffusion Transformer for Video Generation. ★1,917 · Updated 3 months ago
- Official repository of the aiXcoder-7B Code Large Language Model ★2,273 · Updated 7 months ago
- A family of lightweight multimodal models. ★1,050 · Updated last year
- Large World Model -- Modeling Text and Video with Millions Context ★7,393 · Updated last year
- Repository for Meta Chameleon, a mixed-modal early-fusion foundation model from FAIR. ★2,085 · Updated last year
- PyTorch code and models for V-JEPA self-supervised learning from video. ★3,499 · Updated 11 months ago
- TinyGPT-V: Efficient Multimodal Large Language Model via Small Backbones ★1,307 · Updated this week
- Official implementation of the paper "AnyDoor: Zero-shot Object-level Image Customization" ★4,217 · Updated last year
- Strong and Open Vision Language Assistant for Mobile Devices ★1,330 · Updated last year
- 🔥🔥 LLaVA++: Extending LLaVA with Phi-3 and LLaMA-3 (LLaVA LLaMA-3, LLaVA Phi-3) ★849 · Updated 6 months ago
- Official implementation of the paper "MiniGPT-5: Interleaved Vision-and-Language Generation via Generative Vokens" ★864 · Updated 9 months ago