MiniMax-AI / awesome-minimax-integrations
Explore these applications that integrate MiniMax's multimodal API to see how its text, vision, and speech processing capabilities are incorporated into real-world software.
☆64 · Updated last week
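Most of the integrations collected here call MiniMax over HTTP. As a rough orientation, the minimal sketch below shows what a plain text request might look like, assuming MiniMax exposes an OpenAI-compatible chat completions endpoint; the base URL and model id are placeholders and should be checked against the official MiniMax documentation before use.

```python
# Minimal sketch of a text request to MiniMax, assuming an OpenAI-compatible
# chat completions endpoint. The base URL and model id below are placeholders,
# not confirmed values -- verify them against the official MiniMax docs.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["MINIMAX_API_KEY"],   # your MiniMax API key
    base_url="https://api.minimax.io/v1",    # assumed endpoint (placeholder)
)

response = client.chat.completions.create(
    model="MiniMax-Text-01",                 # assumed model id (placeholder)
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what a multimodal API is in one sentence."},
    ],
)
print(response.choices[0].message.content)
```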
Alternatives and similar repositories for awesome-minimax-integrations
Users who are interested in awesome-minimax-integrations are comparing it to the repositories listed below.
- VeOmni: Scaling Any Modality Model Training with Model-Centric Distributed Recipe Zoo ☆1,620 · Updated this week
- ☆814 · Updated 8 months ago
- ☆449 · Updated 6 months ago
- The official repo of One RL to See Them All: Visual Triple Unified Reinforcement Learning ☆331 · Updated 8 months ago
- ☆1,300 · Updated last week
- Fast, Sharp & Reliable Agentic Intelligence ☆1,167 · Updated this week
- [ICLR 2026] On the Generalization of SFT: A Reinforcement Learning Perspective with Reward Rectification. ☆532 · Updated last month
- ☆1,475 · Updated 2 months ago
- Discrete Diffusion Forcing (D2F): dLLMs Can Do Faster-Than-AR Inference ☆241 · Updated last week
- ☆209 · Updated 3 months ago
- A Distributed Attention Towards Linear Scalability for Ultra-Long Context, Heterogeneous Data Training ☆631 · Updated this week
- The official repo of MiniMax-Text-01 and MiniMax-VL-01, large-language-model & vision-language-model based on Linear Attention ☆3,328 · Updated 7 months ago
- d3LLM: Ultra-Fast Diffusion LLM 🚀 ☆90 · Updated last week
- Official implementation of "Fast-dLLM: Training-free Acceleration of Diffusion LLM by Enabling KV Cache and Parallel Decoding" ☆825 · Updated 2 weeks ago
- Training library for Megatron-based models with bidirectional Hugging Face conversion capability ☆419 · Updated this week
- Block Diffusion for Ultra-Fast Speculative Decoding ☆459 · Updated last week
- Muon is Scalable for LLM Training ☆1,426 · Updated 6 months ago
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long Context Transformers Model Training and Inference ☆641 · Updated 3 weeks ago
- Ming - facilitating advanced multimodal understanding and generation capabilities built upon the Ling LLM. ☆580 · Updated 3 months ago
- ☆142 · Updated 3 weeks ago
- Nex General Agentic Data Pipeline, an end-to-end pipeline for generating high-quality agentic training data. ☆30 · Updated 2 months ago
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ☆964 · Updated this week
- The official repo of Pai-Megatron-Patch for LLM & VLM large scale training developed by Alibaba Cloud. ☆1,524 · Updated last month
- The most open diffusion language model for code generation, releasing pretraining, evaluation, inference, and checkpoints. ☆519 · Updated 3 months ago
- Speed Always Wins: A Survey on Efficient Architectures for Large Language Models ☆394 · Updated 3 months ago
- MiniMax M2.1, a SOTA model for real-world dev & agents. ☆491 · Updated 2 weeks ago
- Kimi-VL: Mixture-of-Experts Vision-Language Model for Multimodal Reasoning, Long-Context Understanding, and Strong Agent Capabilities ☆1,156 · Updated 6 months ago
- ☆580 · Updated 3 weeks ago
- slime is an LLM post-training framework for RL Scaling. ☆3,668 · Updated this week
- Parallel Scaling Law for Language Model: Beyond Parameter and Inference Time Scaling ☆468 · Updated 8 months ago