inclusionAI / Ling
Ling is a MoE LLM provided and open-sourced by InclusionAI.
☆160 · Updated 3 weeks ago
Alternatives and similar repositories for Ling
Users interested in Ling are comparing it to the libraries listed below.
- Build, evaluate and run General Multi-Agent Assistance with ease ☆246 · Updated this week
- ☆223 · Updated last week
- A flexible and efficient training framework for large-scale alignment tasks ☆372 · Updated this week
- [ICML 2025] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale ☆248 · Updated 3 weeks ago
- ☆210 · Updated last week
- A visualization tool that enables deeper understanding and easier debugging of RLHF training. ☆203 · Updated 3 months ago
- ☆151 · Updated last month
- The RedStone repository includes code for preparing extensive datasets used in training large language models. ☆135 · Updated 2 weeks ago
- A Comprehensive Survey on Long Context Language Modeling ☆147 · Updated this week
- a-m-team's exploration in large language modeling ☆130 · Updated last week
- ☆362 · Updated this week
- A lightweight reproduction of DeepSeek-R1-Zero with an in-depth analysis of self-reflection behavior. ☆239 · Updated last month
- ☆358 · Updated this week
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆132 · Updated 11 months ago
- ☆283 · Updated 10 months ago
- ☆83 · Updated 3 weeks ago
- AutoCoA (Automatic generation of Chain-of-Action) is an agent model framework that enhances the multi-turn tool usage capability of reaso… ☆114 · Updated 2 months ago
- Scaling Deep Research via Reinforcement Learning in Real-world Environments. ☆420 · Updated last month
- The evaluation benchmark on MCP servers ☆115 · Updated 2 weeks ago
- Parallel Scaling Law for Language Model — Beyond Parameter and Inference Time Scaling ☆367 · Updated 3 weeks ago
- An Open Math Pre-training Dataset with 370B Tokens. ☆88 · Updated 2 months ago
- ☆189 · Updated last month
- An automated pipeline for evaluating LLMs for role-playing. ☆185 · Updated 8 months ago
- R1-searcher: Incentivizing the Search Capability in LLMs via Reinforcement Learning ☆548 · Updated last week
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs ☆167 · Updated this week
- Real-time updated, fine-grained reading list on LLM-synthetic-data. 🔥 ☆259 · Updated 4 months ago
- Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement fine-tuning (RFT) of large language models (… ☆110 · Updated this week
- Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning ☆180 · Updated 2 months ago
- Collect every awesome work about r1! ☆376 · Updated last month
- An Efficient and User-Friendly Scaling Library for Reinforcement Learning with Large Language Models ☆626 · Updated this week