Multimodal Open-O1 (MO1) aims to improve the accuracy of multimodal inference models through a prompt-based approach. The tool runs locally, building reasoning chains akin to those used by OpenAI-o1 while relying only on local processing power.
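The description above can be sketched as a simple loop: re-prompt a local model with the steps produced so far until it signals a final answer. This is a minimal illustration under stated assumptions, not MO1's actual implementation; the prompt wording, the `FINAL:` sentinel, the step limit, and the `generate` callable (standing in for any local model backend) are all hypothetical.

```python
# Minimal sketch of a prompt-driven o1-style reasoning chain.
# Assumption: a local model is exposed as a `generate(prompt) -> str`
# callable; the prompt format and stop sentinel are illustrative only.
from typing import Callable, List, Tuple

SYSTEM_PROMPT = (
    "You are a careful reasoner. Produce ONE reasoning step at a time. "
    "When you are confident, prefix your last line with 'FINAL:'."
)

def run_chain(question: str,
              generate: Callable[[str], str],
              max_steps: int = 8) -> Tuple[List[str], str]:
    """Re-prompt the model with the chain so far; stop on 'FINAL:'
    or after max_steps, returning (steps, answer)."""
    steps: List[str] = []
    for _ in range(max_steps):
        prompt = (
            SYSTEM_PROMPT
            + "\n\nQuestion: " + question
            + "\nSteps so far:\n" + "\n".join(steps)
            + "\nNext step:"
        )
        step = generate(prompt).strip()
        if step.startswith("FINAL:"):
            return steps, step[len("FINAL:"):].strip()
        steps.append(step)
    # Fall back to the last step if the model never signaled completion.
    return steps, steps[-1] if steps else ""
```

In practice `generate` would wrap a locally hosted model (e.g. via an inference server's completion endpoint); here it can be exercised with a stub:

```python
def stub(prompt: str) -> str:
    # Pretend model: one reasoning step, then a final answer.
    return "FINAL: 4" if "Step 1" in prompt else "Step 1: add the numbers."

steps, answer = run_chain("What is 2 + 2?", stub, max_steps=4)
# answer == "4", steps == ["Step 1: add the numbers."]
```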
☆29 · Sep 25, 2024 · Updated last year
Alternatives and similar repositories for Multimodal-Open-O1
Users interested in Multimodal-Open-O1 are comparing it to the libraries listed below.
- RelayGS: Reconstructing Dynamic Scenes with Large-Scale and Complex Motions via Relay Gaussians · ☆13 · Dec 5, 2024 · Updated last year
- [IJCV 2025] OmniDrag: Enabling Motion Control for Omnidirectional Image-to-Video Generation · ☆15 · Feb 13, 2026 · Updated 2 months ago
- [ICLR 2025] Permute-and-Flip: An optimally robust and watermarkable decoder for LLMs · ☆19 · Mar 20, 2025 · Updated last year
- [ACL 2026] Uni-MMMU: A Massive Multi-discipline Multimodal Unified Benchmark · ☆23 · Feb 13, 2026 · Updated 2 months ago
- A multimodal large-scale model, which performs close to the closed-source Qwen-VL-PLUS on many datasets and significantly surpasses the p… · ☆14 · Feb 5, 2024 · Updated 2 years ago
- Code for paper OmniSSR · ☆25 · Apr 21, 2025 · Updated 11 months ago
- A lightweight, flexible Video-MLLM developed by the Tencent QQ Multimedia Research Team.