QwenLM / QwQ
QwQ is the reasoning model series developed by the Qwen team at Alibaba Cloud.
☆529 · Updated 9 months ago
Alternatives and similar repositories for QwQ
Users interested in QwQ are comparing it to the repositories listed below.
- This repository introduces a comprehensive paper list, datasets, methods, and tools for memory research.☆333 · Updated last week
- Train your Agent model via our easy and efficient framework☆1,677 · Updated last month
- Deep Research Agent CognitiveKernel-Pro from Tencent AI Lab. Paper: https://arxiv.org/pdf/2508.00414 ☆479 · Updated 2 months ago
- ☆331 · Updated 4 months ago
- MiroThinker is a series of open-source search agents designed to advance tool-augmented reasoning and information-seeking capabilities.☆1,455 · Updated this week
- ☆981 · Updated this week
- Adds Sequence Parallelism to LLaMA-Factory☆600 · Updated 2 months ago
- Moxin is a family of fully open-source and reproducible LLMs☆621 · Updated 6 months ago
- Think Beyond Images☆546 · Updated 3 months ago
- [COLM’25] DeepRetrieval — 🔥 Training Search Agent by RLVR with Retrieval Outcome☆687 · Updated 2 months ago
- Step-DeepResearch☆350 · Updated 2 weeks ago
- MiroMind Research Agent: Fully Open-Source Deep Research Agent with Reproducible State-of-the-Art Performance on FutureX, GAIA, HLE, Brow…☆1,709 · Updated last month
- ☆540 · Updated 3 months ago
- A scalable, end-to-end training pipeline for general-purpose agents☆363 · Updated 6 months ago
- The official implementation of Self-Play Preference Optimization (SPPO)☆583 · Updated 11 months ago
- ☆817 · Updated 6 months ago
- A Comprehensive Benchmark for Routing LLMs to Explore Model-level Scaling Up in Large Language Models☆104 · Updated last month
- The official implementation of the ICML 2024 paper "MemoryLLM: Towards Self-Updatable Large Language Models" and "M+: Extending MemoryLLM…☆280 · Updated 5 months ago
- ☆498 · Updated 3 weeks ago
- [ICML 2025 Oral] CodeI/O: Condensing Reasoning Patterns via Code Input-Output Prediction☆566 · Updated 8 months ago
- Parallel Scaling Law for Language Model — Beyond Parameter and Inference Time Scaling☆465 · Updated 7 months ago
- A minimal-cost recipe for training a 0.5B R1-Zero☆799 · Updated 7 months ago
- Official codebase for "Can 1B LLM Surpass 405B LLM? Rethinking Compute-Optimal Test-Time Scaling"☆278 · Updated 10 months ago
- Ling is a MoE LLM provided and open-sourced by InclusionAI.☆239 · Updated 7 months ago
- DataFlex is a data-centric training framework that enhances model performance by either selecting the most influential samples, optimizin…☆92 · Updated this week
- [NeurIPS 2024] BAdam: A Memory Efficient Full Parameter Optimization Method for Large Language Models☆281 · Updated 9 months ago
- ✨ A synthetic dataset generation framework that produces diverse coding questions and verifiable solutions — all in one framework☆301 · Updated 4 months ago
- MiroTrain is an efficient and algorithm-first framework for post-training large agentic models.☆100 · Updated 4 months ago
- verl-agent is an extension of veRL, designed for training LLM/VLM agents via RL. verl-agent is also the official code for the paper "Group-in…☆1,345 · Updated 3 weeks ago
- Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement fine-tuning (RFT) of large language models (…☆464 · Updated this week