baaivision / Emu3
Next-Token Prediction is All You Need
★ 2,245 · Updated 7 months ago
Alternatives and similar repositories for Emu3
Users interested in Emu3 are comparing it to the repositories listed below.
- Autoregressive Model Beats Diffusion: 🦙 Llama for Scalable Image Generation ★ 1,885 · Updated last year
- [ICLR & NeurIPS 2025] Repository for Show-o series, One Single Transformer to Unify Multimodal Understanding and Generation. ★ 1,765 · Updated 2 weeks ago
- [CVPR 2025 Oral] Infinity ∞: Scaling Bitwise AutoRegressive Modeling for High-Resolution Image Synthesis ★ 1,487 · Updated 2 weeks ago
- Cambrian-1 is a family of multimodal LLMs with a vision-centric design. ★ 1,955 · Updated last year
- VideoSys: An easy and efficient system for video generation ★ 2,005 · Updated 2 months ago
- Emu Series: Generative Multimodal Models from BAAI ★ 1,754 · Updated last year
- Anole: An Open, Autoregressive and Native Multimodal Model for Interleaved Image-Text Generation ★ 810 · Updated 4 months ago
- [NeurIPS 2025] MMaDA - Open-Sourced Multimodal Large Diffusion Language Models ★ 1,473 · Updated 3 weeks ago
- [TMLR 2025] Latte: Latent Diffusion Transformer for Video Generation. ★ 1,882 · Updated last week
- PyTorch implementation of Transfusion, "Predict the Next Token and Diffuse Images with One Multi-Modal Model", from MetaAI ★ 1,257 · Updated 3 weeks ago
- A novel Multimodal Large Language Model (MLLM) architecture, designed to structurally align visual and textual embeddings. ★ 1,382 · Updated last month
- Seed1.5-VL, a vision-language foundation model designed to advance general-purpose multimodal understanding and reasoning, achieving stat… ★ 1,486 · Updated 4 months ago
- SEED-Voken: A Series of Powerful Visual Tokenizers ★ 973 · Updated 3 weeks ago
- Official implementation of BLIP3o-Series ★ 1,572 · Updated 2 weeks ago
- InternLM-XComposer2.5-OmniLive: A Comprehensive Multimodal System for Long-term Streaming Video and Audio Interactions ★ 2,900 · Updated 5 months ago
- A family of lightweight multimodal models. ★ 1,046 · Updated 11 months ago
- Kimi-VL: Mixture-of-Experts Vision-Language Model for Multimodal Reasoning, Long-Context Understanding, and Strong Agent Capabilities ★ 1,089 · Updated 3 months ago
- [CVPR 2024 Highlight] VBench - We Evaluate Video Generation ★ 1,298 · Updated 3 weeks ago
- A fork to add multimodal model training to open-r1 ★ 1,416 · Updated 9 months ago
- This repo contains the code for the 1D tokenizer and generator ★ 1,073 · Updated 7 months ago
- The official GitHub page for the review paper "Sora: A Review on Background, Technology, Limitations, and Opportunities of Large Vision M…" ★ 504 · Updated last year
- ★ 4,378 · Updated last month
- Repository for Meta Chameleon, a mixed-modal early-fusion foundation model from FAIR. ★ 2,064 · Updated last year
- [NeurIPS 2025] An official implementation of Flow-GRPO: Training Flow Matching Models via Online RL ★ 1,543 · Updated last week
- [ICLR 2024 🔥] Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment ★ 844 · Updated last year
- LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models (ECCV 2024) ★ 847 · Updated last year
- GPT4V-level open-source multi-modal model based on Llama3-8B ★ 2,420 · Updated 8 months ago
- A Framework of Small-scale Large Multimodal Models ★ 914 · Updated 6 months ago
- PyTorch implementation of MAR+DiffLoss https://arxiv.org/abs/2406.11838 ★ 1,782 · Updated last year
- VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs ★ 1,245 · Updated 9 months ago