MiniMax-M1 is the world's first open-weight, large-scale hybrid-attention reasoning model.
☆3,103 · Jul 7, 2025 · Updated 7 months ago
Alternatives and similar repositories for MiniMax-M1
Users interested in MiniMax-M1 are comparing it to the repositories listed below.
- The official repo of MiniMax-Text-01 and MiniMax-VL-01, a large language model and a vision-language model based on linear attention ☆3,347 · Jul 7, 2025 · Updated 7 months ago
- MiMo: Unlocking the Reasoning Potential of Language Model – From Pretraining to Posttraining ☆1,935 · Jun 5, 2025 · Updated 8 months ago
- Kimi K2 is the large language model series developed by the Moonshot AI team ☆10,415 · Jan 21, 2026 · Updated last month
- Qwen3 is the large language model series developed by the Qwen team at Alibaba Cloud ☆26,713 · Jan 9, 2026 · Updated last month
- Muon is Scalable for LLM Training ☆1,440 · Aug 3, 2025 · Updated 6 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆71,234 · Updated this week
- GLM-4.5: Agentic, Reasoning, and Coding (ARC) Foundation Models ☆4,234 · Feb 1, 2026 · Updated last month
- Fine-tuning and reinforcement learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek, Qwen, Llama, Gemma, and TTS models 2x faster with 70% less VRAM.