Tencent / Hunyuan-TurboS
☆88 · Updated 4 months ago
Alternatives and similar repositories for Hunyuan-TurboS
Users interested in Hunyuan-TurboS are comparing it to the repositories listed below.
- The open-source code of MetaStone-S1 ☆108 · Updated last month
- The official repo for "Unleashing the Reasoning Potential of Pre-trained LLMs by Critique Fine-Tuning on One Problem" [EMNLP 2025] ☆31 · Updated 2 weeks ago
- Code for the paper "Harnessing Webpage UIs for Text-Rich Visual Understanding" ☆53 · Updated 9 months ago
- [EMNLP 2025] The official implementation for the paper "Agentic-R1: Distilled Dual-Strategy Reasoning" ☆99 · Updated 3 weeks ago
- ☆292 · Updated 3 months ago
- ☆81 · Updated 5 months ago
- Repo for "Z1: Efficient Test-time Scaling with Code" ☆64 · Updated 5 months ago
- ☆129 · Updated 3 weeks ago
- ☆89 · Updated 4 months ago
- 😊 TPTT: Transforming Pretrained Transformers into Titans ☆27 · Updated this week
- ☆97 · Updated last month
- [ICML 2025] TokenSwift: Lossless Acceleration of Ultra Long Sequence Generation ☆113 · Updated 4 months ago
- FuseAI Project ☆87 · Updated 7 months ago
- Scaling Computer-Use Grounding via UI Decomposition and Synthesis ☆109 · Updated 3 months ago
- Maya: An Instruction Finetuned Multilingual Multimodal Model using Aya ☆116 · Updated last month
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" [COLM 2025] ☆171 · Updated 2 months ago
- Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs ☆90 · Updated 10 months ago
- The official repo of SynLogic: Synthesizing Verifiable Reasoning Data at Scale for Learning Logical Reasoning and Beyond ☆166 · Updated 2 months ago
- Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models ☆220 · Updated last week
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆161 · Updated 5 months ago
- ☆69 · Updated 3 months ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆137 · Updated last year
- General Reasoner: Advancing LLM Reasoning Across All Domains [NeurIPS 2025] ☆171 · Updated 3 months ago
- ☆67 · Updated 5 months ago
- XVERSE-MoE-A36B: A multilingual large language model developed by XVERSE Technology Inc. ☆38 · Updated last year
- ☆50 · Updated 3 months ago
- ☆100 · Updated 3 months ago
- ☆56 · Updated 10 months ago
- Klear-Reasoner: Advancing Reasoning Capability via Gradient-Preserving Clipping Policy Optimization ☆71 · Updated this week
- Efficient Agent Training for Computer Use ☆131 · Updated 2 weeks ago