ByteDance-Seed / Bagel
Open-source unified multimodal model
☆5,195 · Updated 2 months ago
Alternatives and similar repositories for Bagel
Users interested in Bagel are comparing it to the repositories listed below.
- MAGI-1: Autoregressive Video Generation at Scale ☆3,516 · Updated 4 months ago
- Qwen-Image is a powerful image generation foundation model capable of complex text rendering and precise image editing. ☆5,762 · Updated 3 weeks ago
- [ICCV 2025] Official implementations for paper: VACE: All-in-One Video Creation and Editing ☆3,350 · Updated last week
- A unified inference and post-training framework for accelerated video generation. ☆2,440 · Updated this week
- A SOTA open-source image editing model, which aims to provide comparable performance against the closed-source models like GPT-4o and Gem… ☆1,688 · Updated last month
- OmniGen2: Exploration to Advanced Multimodal Generation. ☆3,902 · Updated 3 weeks ago
- ☆2,442 · Updated 3 months ago
- HunyuanVideo-I2V: A Customizable Image-to-Video Model based on HunyuanVideo ☆1,701 · Updated 5 months ago
- Seed1.5-VL, a vision-language foundation model designed to advance general-purpose multimodal understanding and reasoning, achieving stat… ☆1,472 · Updated 4 months ago
- The best OSS video generation models, created by Genmo ☆3,471 · Updated last month
- SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformer ☆4,613 · Updated this week
- ☆3,122 · Updated 7 months ago
- HunyuanImage-3.0: A Powerful Native Multimodal Model for Image Generation ☆2,250 · Updated last week
- [NeurIPS 2025] MMaDA - Open-Sourced Multimodal Large Diffusion Language Models ☆1,445 · Updated last week
- Qwen2.5-Omni is an end-to-end multimodal model by the Qwen team at Alibaba Cloud, capable of understanding text, audio, vision, video, and pe… ☆3,745 · Updated 4 months ago
- GLM-4.5V and GLM-4.1V-Thinking: Towards Versatile Multimodal Reasoning with Scalable Reinforcement Learning ☆1,710 · Updated last week
- The official repo of MiniMax-Text-01 and MiniMax-VL-01, large language model & vision-language model based on Linear Attention ☆3,207 · Updated 3 months ago
- Next-Token Prediction is All You Need ☆2,216 · Updated 7 months ago
- Qwen3-Omni is a natively end-to-end, omni-modal LLM developed by the Qwen team at Alibaba Cloud, capable of understanding text, audio, im… ☆2,699 · Updated 2 weeks ago
- Official PyTorch implementation of One-Minute Video Generation with Test-Time Training ☆2,257 · Updated 4 months ago
- Official implementation of BLIP3o-Series ☆1,536 · Updated this week
- [ICCV 2025] Implementation for Describe Anything: Detailed Localized Image and Video Captioning ☆1,365 · Updated 4 months ago
- A novel Multimodal Large Language Model (MLLM) architecture, designed to structurally align visual and textual embeddings. ☆1,373 · Updated last month
- [ICCV 2025] 🔥🔥 UNO: A Universal Customization Method for Both Single and Multi-Subject Conditioning ☆1,315 · Updated last month
- [NeurIPS 2025] Image editing is worth a single LoRA! 0.1% training data for fantastic image editing! Surpasses GPT-4o in ID persistence~ … ☆1,998 · Updated last week
- [ICCV 2025 Highlight] OminiControl: Minimal and Universal Control for Diffusion Transformer ☆1,804 · Updated 3 months ago
- Generating Immersive, Explorable, and Interactive 3D Worlds from Words or Pixels with Hunyuan3D World Model ☆2,295 · Updated this week
- HunyuanCustom: A Multimodal-Driven Architecture for Customized Video Generation ☆1,187 · Updated last week
- [CVPR 2025 Oral] Infinity ∞: Scaling Bitwise AutoRegressive Modeling for High-Resolution Image Synthesis ☆1,471 · Updated 4 months ago
- [CVPR 2025 Highlight] Video Generation Foundation Models: https://saiyan-world.github.io/goku/ ☆2,895 · Updated 8 months ago