Tongyi-Zhiwen / Qwen-Doc
☆374 · Updated this week
Alternatives and similar repositories for Qwen-Doc
Users interested in Qwen-Doc are comparing it to the repositories listed below.
- ☆174 · Updated 7 months ago
- Implementation for OAgents: An Empirical Study of Building Effective Agents ☆292 · Updated 2 months ago
- ☆85 · Updated 8 months ago
- Parallel Scaling Law for Language Model — Beyond Parameter and Inference Time Scaling ☆462 · Updated 7 months ago
- MiroMind-M1 is a fully open-source series of reasoning language models built on Qwen-2.5, focused on advancing mathematical reasoning. ☆245 · Updated 4 months ago
- [ICML 2025] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale ☆263 · Updated 5 months ago
- Ling is a MoE LLM provided and open-sourced by InclusionAI. ☆237 · Updated 7 months ago
- ☆188 · Updated last week
- ☆92 · Updated 7 months ago
- ☆320 · Updated last year
- [ICML 2025] TokenSwift: Lossless Acceleration of Ultra Long Sequence Generation ☆119 · Updated 7 months ago
- [NeurIPS 2025] The official repo of SynLogic: Synthesizing Verifiable Reasoning Data at Scale for Learning Logical Reasoning and Beyond ☆187 · Updated 5 months ago
- A highly capable 2.4B lightweight LLM using only 1T pre-training data with all details. ☆221 · Updated 4 months ago
- Efficient Agent Training for Computer Use ☆133 · Updated 3 months ago
- Data Synthesis for Deep Research Based on Semi-Structured Data ☆186 · Updated this week
- Repo for "MaskSearch: A Universal Pre-Training Framework to Enhance Agentic Search Capability" ☆146 · Updated 6 months ago
- Code for the paper "Decoupled Planning and Execution: A Hierarchical Reasoning Framework for Deep Search" ☆63 · Updated 5 months ago
- AutoCoA (Automatic generation of Chain-of-Action) is an agent model framework that enhances the multi-turn tool usage capability of reaso… ☆129 · Updated 9 months ago
- Ling-V2 is a MoE LLM provided and open-sourced by InclusionAI. ☆245 · Updated 2 months ago
- MrlX: A Multi-Agent Reinforcement Learning Framework ☆153 · Updated 3 weeks ago
- A Large-Scale, Challenging, Decontaminated, and Verifiable Mathematical Dataset for Advancing Reasoning ☆280 · Updated 2 months ago
- Mixture-of-Experts (MoE) Language Model ☆192 · Updated last year
- An Open-Source Large-Scale Reinforcement Learning Project for Search Agents ☆511 · Updated 3 weeks ago
- Official repository for DR Tulu: Reinforcement Learning with Evolving Rubrics for Deep Research ☆470 · Updated this week
- The RedStone repository includes code for preparing extensive datasets used in training large language models. ☆146 · Updated 5 months ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆137 · Updated last year
- OpenSeek aims to unite the global open source community to drive collaborative innovation in algorithms, data and systems to develop next… ☆241 · Updated this week
- ☆818 · Updated 6 months ago
- Official Repository for "Glyph: Scaling Context Windows via Visual-Text Compression" ☆524 · Updated last month
- [NeurIPS 2025 Spotlight] ReasonFlux (long-CoT), ReasonFlux-PRM (process reward model) and ReasonFlux-Coder (code generation) ☆508 · Updated 2 months ago