ByteDance-Seed / Bagel
Open-source unified multimodal model
☆5,409 · Updated last month
Alternatives and similar repositories for Bagel
Users interested in Bagel are comparing it to the libraries listed below.
- MAGI-1: Autoregressive Video Generation at Scale ☆3,576 · Updated 5 months ago
- Qwen-Image is a powerful image generation foundation model capable of complex text rendering and precise image editing. ☆6,290 · Updated 3 weeks ago
- A unified inference and post-training framework for accelerated video generation. ☆2,693 · Updated last week
- OmniGen2: Exploration to Advanced Multimodal Generation. ☆3,951 · Updated last week
- [ICCV 2025] Official implementations for the paper "VACE: All-in-One Video Creation and Editing" ☆3,454 · Updated last month
- SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformer ☆4,773 · Updated this week
- ☆2,478 · Updated 4 months ago
- HunyuanVideo-I2V: A Customizable Image-to-Video Model based on HunyuanVideo ☆1,740 · Updated 6 months ago
- Seed1.5-VL, a vision-language foundation model designed to advance general-purpose multimodal understanding and reasoning, achieving stat… ☆1,507 · Updated 5 months ago
- HunyuanImage-3.0: A Powerful Native Multimodal Model for Image Generation ☆2,536 · Updated last month
- Qwen2.5-Omni is an end-to-end multimodal model by the Qwen team at Alibaba Cloud, capable of understanding text, audio, vision, video, and pe… ☆3,828 · Updated 5 months ago
- [NeurIPS 2025] Image editing is worth a single LoRA! 0.1% training data for fantastic image editing! Surpasses GPT-4o in ID persistence~ … ☆2,038 · Updated 3 weeks ago
- [ICCV 2025] Implementation for Describe Anything: Detailed Localized Image and Video Captioning ☆1,417 · Updated 5 months ago
- GLM-4.5V and GLM-4.1V-Thinking: Towards Versatile Multimodal Reasoning with Scalable Reinforcement Learning ☆1,761 · Updated last month
- Qwen3-Omni is a natively end-to-end, omni-modal LLM developed by the Qwen team at Alibaba Cloud, capable of understanding text, audio, im… ☆3,018 · Updated 2 months ago
- The official repo of MiniMax-Text-01 and MiniMax-VL-01, a large language model and a vision-language model based on linear attention ☆3,248 · Updated 5 months ago
- A SOTA open-source image editing model, which aims to provide performance comparable to closed-source models like GPT-4o and Gem… ☆1,878 · Updated last week
- Generating Immersive, Explorable, and Interactive 3D Worlds from Words or Pixels with the Hunyuan3D World Model ☆2,479 · Updated last month
- ☆3,137 · Updated 8 months ago
- [ICCV 2025] 🔥🔥 UNO: A Universal Customization Method for Both Single and Multi-Subject Conditioning ☆1,335 · Updated 2 months ago
- HunyuanCustom: A Multimodal-Driven Architecture for Customized Video Generation ☆1,194 · Updated last month
- [NeurIPS 2025] MMaDA: Open-Sourced Multimodal Large Diffusion Language Models ☆1,516 · Updated 3 weeks ago
- Next-Token Prediction is All You Need ☆2,257 · Updated 2 weeks ago
- [CVPR 2025 Highlight] Video Generation Foundation Models: https://saiyan-world.github.io/goku/ ☆2,908 · Updated 9 months ago
- Phantom: Subject-Consistent Video Generation via Cross-Modal Alignment ☆1,458 · Updated 2 months ago
- CogView4, CogView3-Plus, and CogView3 (ECCV 2024) ☆1,093 · Updated 8 months ago
- Official implementation of BLIP3o-Series ☆1,593 · Updated last week
- [ICCV 2025 Highlight] OminiControl: Minimal and Universal Control for Diffusion Transformer ☆1,842 · Updated 5 months ago
- ☆1,948 · Updated last month
- Scalable and memory-optimized training of diffusion models ☆1,307 · Updated 6 months ago