inclusionAI / Ling-V2
Ling-V2 is a MoE LLM developed and open-sourced by InclusionAI.
☆121 · Updated last week
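Since the repository description is brief, below is a minimal, self-contained sketch of the top-k mixture-of-experts (MoE) pattern that architectures like Ling-V2 belong to. This is an illustrative PyTorch example only, not Ling-V2's actual implementation; the dimensions, expert count, and top-k value are made-up assumptions.

```python
# Minimal sketch of a top-k MoE feed-forward layer. Illustrative only;
# all hyperparameters below are arbitrary assumptions, not Ling-V2's.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model=512, d_ff=1024, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts)  # gating network
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):  # x: (num_tokens, d_model)
        gate_logits = self.router(x)
        weights, idx = gate_logits.topk(self.top_k, dim=-1)  # k experts per token
        weights = F.softmax(weights, dim=-1)                 # normalize over chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):                       # dispatch tokens to experts
            for e in range(len(self.experts)):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * self.experts[e](x[mask])
        return out

tokens = torch.randn(4, 512)
print(TopKMoE()(tokens).shape)  # torch.Size([4, 512])
```

The key property this sketch demonstrates is why MoE LLMs are attractive: only `top_k` of the `num_experts` feed-forward blocks run per token, so parameter count scales with the number of experts while per-token compute does not.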
Alternatives and similar repositories for Ling-V2
Users interested in Ling-V2 are comparing it to the repositories listed below.
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆137 · Updated last year
- [ICML 2025] TokenSwift: Lossless Acceleration of Ultra Long Sequence Generation ☆113 · Updated 4 months ago
- ☆293 · Updated 4 months ago
- MiroMind-M1 is a fully open-source series of reasoning language models built on Qwen-2.5, focused on advancing mathematical reasoning. ☆236 · Updated 2 months ago
- ☆168 · Updated 5 months ago
- Ling is a MoE LLM developed and open-sourced by InclusionAI. ☆202 · Updated 4 months ago
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs ☆191 · Updated last week
- Delta-CoMe achieves near-lossless 1-bit compression (accepted at NeurIPS 2024). ☆57 · Updated 10 months ago
- MiroTrain is an efficient and algorithm-first framework for post-training large agentic models. ☆88 · Updated last month
- ☆71 · Updated 4 months ago
- ☆84 · Updated 6 months ago
- ☆72 · Updated 3 months ago
- ☆95 · Updated 10 months ago
- MiroRL is an MCP-first reinforcement learning framework for deep research agents. ☆163 · Updated last month
- Klear-Reasoner: Advancing Reasoning Capability via Gradient-Preserving Clipping Policy Optimization ☆73 · Updated 2 weeks ago
- Pre-trained, Scalable, High-performance Reward Models via Policy Discriminative Learning. ☆157 · Updated 2 weeks ago
- [COLM 2025] An Open Math Pre-training Dataset with 370B Tokens. ☆100 · Updated 6 months ago
- The RedStone repository includes code for preparing extensive datasets used in training large language models. ☆142 · Updated 3 months ago
- ☆89 · Updated 4 months ago
- Revisiting Mid-training in the Era of Reinforcement Learning Scaling ☆176 · Updated 2 months ago
- ☆97 · Updated 2 months ago
- [ICML 2025] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale ☆263 · Updated 3 months ago
- Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning ☆190 · Updated 6 months ago
- Reformatted Alignment ☆112 · Updated last year
- ☆203 · Updated 5 months ago
- ☆49 · Updated last year
- [EMNLP 2025 Industry] Repo for "Z1: Efficient Test-time Scaling with Code" ☆64 · Updated 6 months ago
- ☆78 · Updated last month
- ☆96 · Updated 3 weeks ago
- A highly capable 2.4B lightweight LLM using only 1T tokens of pre-training data, with all details released. ☆217 · Updated 2 months ago