baaivision / Emu3
Next-Token Prediction is All You Need
★1,793 · Updated 2 weeks ago
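Emu3's claim, per its tagline, is that a single decoder-only transformer trained purely with next-token prediction over discrete text, image, and video tokens can cover both generation and perception. The sketch below illustrates that objective under stated assumptions; it is not Emu3's actual training code, and the helper name, shapes, and vocabulary size are invented for illustration.

```python
# Minimal sketch of a unified next-token-prediction objective: images/video are
# assumed pre-tokenized into discrete codes sharing one vocabulary with text,
# so a single decoder-only transformer trains with plain cross-entropy.
# NOTE: illustrative only; this is not Emu3's actual code.
import torch
import torch.nn.functional as F

def next_token_loss(logits: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
    """Autoregressive loss: predict token t+1 from tokens 0..t."""
    # logits: (batch, seq, vocab) over the shared text+vision vocabulary
    # tokens: (batch, seq) mixed-modal token ids
    return F.cross_entropy(
        logits[:, :-1].reshape(-1, logits.size(-1)),  # predictions at positions 0..T-2
        tokens[:, 1:].reshape(-1),                    # targets: the sequence shifted by one
    )

# Toy usage with random tensors; all sizes are arbitrary.
vocab, batch, seq = 32000, 2, 16
tokens = torch.randint(0, vocab, (batch, seq))
logits = torch.randn(batch, seq, vocab)
print(next_token_loss(logits, tokens))
```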
Related projects
Alternatives and complementary repositories for Emu3
- Autoregressive Model Beats Diffusion: 🦙 Llama for Scalable Image Generation · ★1,305 · Updated 2 months ago
- Repository for Show-o, One Single Transformer to Unify Multimodal Understanding and Generation · ★1,011 · Updated last week
- Cambrian-1 is a family of multimodal LLMs with a vision-centric design · ★1,753 · Updated last week
- Anole: An Open, Autoregressive, and Native Multimodal Model for Interleaved Image-Text Generation · ★675 · Updated 3 months ago
- Open-source evaluation toolkit for large vision-language models (LVLMs), supporting 160+ VLMs and 50+ benchmarks · ★1,313 · Updated this week
- Latte: Latent Diffusion Transformer for Video Generation · ★1,700 · Updated last month
- Janus: Decoupling Visual Encoding for Unified Multimodal Understanding and Generation · ★917 · Updated last week
- VideoSys: An easy and efficient system for video generation · ★1,759 · Updated this week
- A family of lightweight multimodal models · ★928 · Updated 3 weeks ago
- Emu Series: Generative Multimodal Models from BAAI · ★1,659 · Updated last month
- VILA: a multi-image visual language model with training, inference, and evaluation recipe, deployable from cloud to edge (Jetson Orin and… · ★1,980 · Updated last week
- VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs · ★856 · Updated last week
- InternLM-XComposer-2.5: A Versatile Large Vision Language Model Supporting Long-Contextual Input and Output · ★2,513 · Updated last month
- ✨✨VITA: Towards Open-Source Interactive Omni Multimodal LLM · ★949 · Updated 2 weeks ago
- [ICLR 2024 🔥] Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment · ★718 · Updated 7 months ago
- A novel Multimodal Large Language Model (MLLM) architecture, designed to structurally align visual and textual embeddings · ★509 · Updated last week
- Qwen2-VL is the multimodal large language model series developed by the Qwen team at Alibaba Cloud · ★3,000 · Updated last month
- PyTorch implementation of Transfusion, "Predict the Next Token and Diffuse Images with One Multi-Modal Model", from Meta AI · ★690 · Updated last week
- GPT4V-level open-source multi-modal model based on Llama3-8B · ★2,105 · Updated 2 months ago
- DeepSeek-VL: Towards Real-World Vision-Language Understanding · ★2,065 · Updated 6 months ago
- Open-MAGVIT2: Democratizing Autoregressive Visual Generation · ★690 · Updated last month
- [ECCV 2024] Official code for "Long-CLIP: Unlocking the Long-Text Capability of CLIP" · ★668 · Updated 2 months ago
- MiniSora: a community project exploring the implementation path and future directions of Sora · ★1,215 · Updated last month
- Mixture-of-Experts for Large Vision-Language Models · ★1,975 · Updated 5 months ago
- Official code for the Goldfish model (long-video understanding) and MiniGPT4-Video (short-video understanding) · ★553 · Updated last month
- Official repository for the paper PLLaVA · ★584 · Updated 3 months ago
- Official implementation of the paper "MiniGPT-5: Interleaved Vision-and-Language Generation via Generative Vokens" · ★852 · Updated 7 months ago
- Repository for Meta Chameleon, a mixed-modal early-fusion foundation model from FAIR · ★1,827 · Updated 3 months ago
- LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models (ECCV 2024) · ★728 · Updated 3 months ago