Tencent / Hunyuan-TurboS
☆87 · Updated 6 months ago
Alternatives and similar repositories for Hunyuan-TurboS
Users interested in Hunyuan-TurboS are comparing it to the repositories listed below.
- The open-source code of MetaStone-S1. ☆107 · Updated 4 months ago
- The official repo for “Unleashing the Reasoning Potential of Pre-trained LLMs by Critique Fine-Tuning on One Problem” [EMNLP 2025] ☆33 · Updated 3 months ago
- ☆74 · Updated 5 months ago
- [EMNLP 2025 Industry] Repo for "Z1: Efficient Test-time Scaling with Code" ☆67 · Updated 8 months ago
- ☆85 · Updated 8 months ago
- Code for the paper "Harnessing Webpage UIs for Text-Rich Visual Understanding" ☆53 · Updated last year
- [ICML 2025] TokenSwift: Lossless Acceleration of Ultra Long Sequence Generation ☆118 · Updated 6 months ago
- ☆98 · Updated 4 months ago
- Maya: An Instruction Finetuned Multilingual Multimodal Model using Aya ☆123 · Updated 4 months ago
- ☆300 · Updated 6 months ago
- FuseAI Project ☆87 · Updated 10 months ago
- QeRL enables RL for 32B LLMs on a single H100 GPU. ☆466 · Updated 2 weeks ago
- [NeurIPS 2025] The official repo of SynLogic: Synthesizing Verifiable Reasoning Data at Scale for Learning Logical Reasoning and Beyond ☆187 · Updated 5 months ago
- Ling-V2 is an MoE LLM provided and open-sourced by InclusionAI. ☆245 · Updated 2 months ago
- LIMI: Less is More for Agency ☆153 · Updated 2 months ago
- ☆185 · Updated this week
- XVERSE-MoE-A36B: A multilingual large language model developed by XVERSE Technology Inc. ☆38 · Updated last year
- Ring-V2 is a reasoning MoE LLM provided and open-sourced by InclusionAI. ☆81 · Updated last month
- Klear-Reasoner: Advancing Reasoning Capability via Gradient-Preserving Clipping Policy Optimization ☆80 · Updated 2 months ago
- ☆56 · Updated last year
- ☆67 · Updated 8 months ago
- [EMNLP 2025] The official implementation for the paper "Agentic-R1: Distilled Dual-Strategy Reasoning" ☆100 · Updated 3 months ago
- Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models ☆224 · Updated last month
- MiroMind-M1 is a fully open-source series of reasoning language models built on Qwen-2.5, focused on advancing mathematical reasoning. ☆245 · Updated 4 months ago
- ☆74 · Updated 6 months ago
- Challenge LLMs to Reason About Reasoning: A Benchmark to Unveil Cognitive Depth in LLMs ☆51 · Updated last year
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆137 · Updated last year
- ☆105 · Updated 3 months ago
- ☆128 · Updated 7 months ago
- Computer Agent Arena: Test & compare AI agents in real desktop apps & web environments. Code/data coming soon! ☆51 · Updated 8 months ago