MiniMax-AI / MiniMax-01
The official repo of MiniMax-Text-01 and MiniMax-VL-01, a large language model and a vision-language model based on linear attention
☆2,460 · Updated 2 weeks ago

Alternatives and similar repositories for MiniMax-01:
Users interested in MiniMax-01 are comparing it to the libraries listed below.
- MoBA: Mixture of Block Attention for Long-Context LLMs ☆1,696 · Updated 3 weeks ago
- Qwen2.5-Omni is an end-to-end multimodal model by the Qwen team at Alibaba Cloud, capable of understanding text, audio, vision, video, and pe… ☆2,049 · Updated this week
- A bidirectional pipeline parallelism algorithm for computation-communication overlap in V3/R1 training ☆2,676 · Updated 3 weeks ago
- An Open Large Reasoning Model for Real-World Solutions ☆1,481 · Updated last month
- Democratizing Reinforcement Learning for LLMs ☆2,158 · Updated last month
- Muon is Scalable for LLM Training ☆993 · Updated last week
- Scalable RL solution for advanced reasoning of language models ☆1,445 · Updated 2 weeks ago
- Fully open data curation for reasoning models ☆1,591 · Updated 2 weeks ago
- Official Repo for Open-Reasoner-Zero ☆1,687 · Updated 3 weeks ago
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling ☆5,109 · Updated last week
- verl: Volcano Engine Reinforcement Learning for LLMs ☆5,994 · Updated this week
- Search-R1: an efficient, scalable RL training framework, based on veRL, for LLMs that interleave reasoning and search-engine calls ☆1,466 · Updated last week
- Witness the aha moment of VLM with less than $3 ☆3,430 · Updated last month
- Expert Parallelism Load Balancer ☆1,108 · Updated last week
- A live-stream development of RL tuning for LLM agents ☆2,031 · Updated last week
- An Open-source RL System from ByteDance Seed and Tsinghua AIR ☆915 · Updated last week
- Simple RL training for reasoning ☆3,326 · Updated this week
- RAGEN leverages reinforcement learning to train LLM reasoning agents in interactive, stochastic environments ☆1,265 · Updated last week
- Analyze computation-communication overlap in V3/R1 ☆970 · Updated 2 weeks ago
- Sky-T1: Train your own O1 preview model within $450 ☆3,167 · Updated last week
- DeepEP: an efficient expert-parallel communication library ☆7,329 · Updated last week
- Official PyTorch implementation for "Large Language Diffusion Models" ☆1,350 · Updated 3 weeks ago
- EasyR1: An Efficient, Scalable, Multi-Modality RL Training Framework based on veRL ☆1,764 · Updated last week
- [CVPR 2025] Magma: A Foundation Model for Multimodal AI Agents ☆1,513 · Updated this week
- High-performance inference framework for large language models, focusing on efficiency, flexibility, and availability ☆1,053 · Updated this week
- DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models ☆1,620 · Updated last year