inclusionAI / Ling
Ling is a MoE LLM provided and open-sourced by InclusionAI.
☆201 · Updated 4 months ago
Alternatives and similar repositories for Ling
Users that are interested in Ling are comparing it to the libraries listed below
- ☆292 · Updated 3 months ago
- ☆165 · Updated 4 months ago
- Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement fine-tuning (RFT) of large language models (… ☆341 · Updated this week
- Parallel Scaling Law for Language Model — Beyond Parameter and Inference Time Scaling ☆443 · Updated 4 months ago
- ☆201 · Updated 5 months ago
- The RedStone repository includes code for preparing extensive datasets used in training large language models. ☆143 · Updated 2 months ago
- Deep Research Agent CognitiveKernel-Pro from Tencent AI Lab. Paper: https://arxiv.org/pdf/2508.00414 ☆336 · Updated 3 weeks ago
- ☆814 · Updated 3 months ago
- A toolkit for knowledge distillation of large language models ☆153 · Updated this week
- MiroThinker is a family of open-source agentic models trained for deep research and complex tool-use scenarios. ☆314 · Updated this week
- Agentic Foundation Platform ☆471 · Updated this week
- Ring is a reasoning MoE LLM provided and open-sourced by InclusionAI, derived from Ling. ☆100 · Updated last month
- [ICML 2025] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale ☆262 · Updated 2 months ago
- OpenSeek aims to unite the global open source community to drive collaborative innovation in algorithms, data and systems to develop next… ☆227 · Updated last week
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs ☆188 · Updated 2 months ago
- ☆89 · Updated 4 months ago
- Mixture-of-Experts (MoE) Language Model ☆190 · Updated last year
- MiroMind-M1 is a fully open-source series of reasoning language models built on Qwen-2.5, focused on advancing mathematical reasoning. ☆230 · Updated last month
- A Comprehensive Survey on Long Context Language Modeling ☆187 · Updated 2 months ago
- A highly capable 2.4B lightweight LLM using only 1T pre-training data, with all details. ☆212 · Updated last month
- A visualization tool enabling deeper understanding and easier debugging of RLHF training. ☆250 · Updated 6 months ago
- ☆31 · Updated last week
- [ICML 2025] TokenSwift: Lossless Acceleration of Ultra Long Sequence Generation ☆113 · Updated 4 months ago
- Scaling Preference Data Curation via Human-AI Synergy ☆107 · Updated 2 months ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆137 · Updated last year
- A flexible and efficient training framework for large-scale alignment tasks ☆422 · Updated last week
- ☆69 · Updated 3 months ago
- An automated pipeline for evaluating LLMs for role-playing. ☆198 · Updated last year
- ☆586 · Updated 2 months ago
- [ICML 2025 Oral] CodeI/O: Condensing Reasoning Patterns via Code Input-Output Prediction ☆548 · Updated 4 months ago