MiniMax-AI / awesome-minimax-integrations
Explore applications that integrate MiniMax's multimodal API to see how its text, vision, and speech processing capabilities are incorporated into real software.
☆64 · Updated 2 weeks ago
Alternatives and similar repositories for awesome-minimax-integrations
Users interested in awesome-minimax-integrations are comparing it to the repositories listed below.
- VeOmni: Scaling Any Modality Model Training with Model-Centric Distributed Recipe Zoo ☆1,620 · Updated this week
- ☆813 · Updated 8 months ago
- ☆1,309 · Updated this week
- The official repo of MiniMax-Text-01 and MiniMax-VL-01, a large language model and a vision-language model based on linear attention ☆3,330 · Updated 7 months ago
- ☆1,475 · Updated 2 months ago
- Muon is Scalable for LLM Training ☆1,426 · Updated 6 months ago
- ☆449 · Updated 6 months ago
- slime is an LLM post-training framework for RL scaling. ☆3,842 · Updated this week
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ☆964 · Updated last week
- ☆1,289 · Updated 2 months ago
- A Distributed Attention Towards Linear Scalability for Ultra-Long Context, Heterogeneous Data Training ☆631 · Updated this week
- MoBA: Mixture of Block Attention for Long-Context LLMs ☆2,044 · Updated 10 months ago
- Discrete Diffusion Forcing (D2F): dLLMs Can Do Faster-Than-AR Inference ☆241 · Updated last week
- Conditional Memory via Scalable Lookup: A New Axis of Sparsity for Large Language Models ☆3,630 · Updated 3 weeks ago
- The official repo of "One RL to See Them All": Visual Triple Unified Reinforcement Learning ☆331 · Updated 8 months ago
- The most open diffusion language model for code generation — releasing pretraining, evaluation, inference, and checkpoints. ☆519 · Updated 3 months ago
- Official implementation of "Fast-dLLM: Training-free Acceleration of Diffusion LLM by Enabling KV Cache and Parallel Decoding" ☆825 · Updated 2 weeks ago
- Block Diffusion for Ultra-Fast Speculative Decoding ☆533 · Updated this week
- Kimi-VL: Mixture-of-Experts Vision-Language Model for Multimodal Reasoning, Long-Context Understanding, and Strong Agent Capabilities ☆1,156 · Updated 6 months ago
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long-Context Transformer Model Training and Inference ☆643 · Updated 3 weeks ago
- 🔥 A minimal training framework for scaling FLA models ☆344 · Updated 2 months ago
- An Open-source RL System from ByteDance Seed and Tsinghua AIR ☆1,727 · Updated 9 months ago
- ☆209 · Updated 3 months ago
- Ring attention implementation with flash attention ☆979 · Updated 5 months ago
- [ICLR 2026] On the Generalization of SFT: A Reinforcement Learning Perspective with Reward Rectification ☆532 · Updated last month
- Parallel Scaling Law for Language Model — Beyond Parameter and Inference Time Scaling ☆469 · Updated 8 months ago
- An Efficient and User-Friendly Scaling Library for Reinforcement Learning with Large Language Models ☆2,781 · Updated this week
- Expert Parallelism Load Balancer ☆1,344 · Updated 10 months ago
- ByteCheckpoint: A Unified Checkpointing Library for LFMs ☆269 · Updated last week
- ☆145 · Updated 3 weeks ago