Tencent / Hunyuan-TurboS
☆88 · Updated 8 months ago
Alternatives and similar repositories for Hunyuan-TurboS
Users interested in Hunyuan-TurboS are comparing it to the libraries listed below.
- The open-source code of MetaStone-S1. ☆105 · Updated 6 months ago
- Ring-V2 is a reasoning MoE LLM provided and open-sourced by InclusionAI. ☆90 · Updated 3 months ago
- The official repo for "Unleashing the Reasoning Potential of Pre-trained LLMs by Critique Fine-Tuning on One Problem" [EMNLP 2025] ☆34 · Updated 5 months ago
- Ling-V2 is a MoE LLM provided and open-sourced by InclusionAI. ☆255 · Updated 4 months ago
- Code for the paper "Harnessing Webpage UIs for Text-Rich Visual Understanding" ☆53 · Updated last year
- [EMNLP 2025] The official implementation of the paper "Agentic-R1: Distilled Dual-Strategy Reasoning" ☆102 · Updated 5 months ago
- [EMNLP 2025 Industry] Repo for "Z1: Efficient Test-time Scaling with Code" ☆68 · Updated 10 months ago
- ☆82 · Updated 10 months ago
- 😊 TPTT: Transforming Pretrained Transformers into Titans ☆57 · Updated 2 months ago
- ☆100 · Updated 6 months ago
- [NeurIPS 2025] The official repo of SynLogic: Synthesizing Verifiable Reasoning Data at Scale for Learning Logical Reasoning and Beyond ☆191 · Updated 7 months ago
- PeRL: Parameter-Efficient Reinforcement Learning ☆68 · Updated 3 weeks ago
- ☆93 · Updated 8 months ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆139 · Updated last year
- ☆111 · Updated 4 months ago
- [ICML 2025] TokenSwift: Lossless Acceleration of Ultra Long Sequence Generation ☆121 · Updated 8 months ago
- Official Repository of Native Parallel Reasoner ☆100 · Updated this week
- ☆50 · Updated 8 months ago
- FuseAI Project ☆87 · Updated last year
- ☆78 · Updated 7 months ago
- ☆74 · Updated 8 months ago
- ☆67 · Updated 10 months ago
- ☆84 · Updated 3 months ago
- [ICLR 2026] Efficient Agent Training for Computer Use ☆135 · Updated 5 months ago
- GRadient-INformed MoE ☆264 · Updated last year
- Klear-Reasoner: Advancing Reasoning Capability via Gradient-Preserving Clipping Policy Optimization ☆81 · Updated last month
- MiroTrain is an efficient, algorithm-first training framework for research agents. ☆132 · Updated 5 months ago
- Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models ☆228 · Updated 3 months ago
- Maya: An Instruction Finetuned Multilingual Multimodal Model using Aya ☆125 · Updated 6 months ago
- ☆19 · Updated 11 months ago