MBZUAI-IFM / K2-Think-SFT
☆126 · Updated 2 months ago
Alternatives and similar repositories for K2-Think-SFT
Users interested in K2-Think-SFT are comparing it to the repositories listed below.
- The official repo for "Parallel-R1: Towards Parallel Thinking via Reinforcement Learning" ☆233 · Updated this week
- All information and news about the Falcon-H1 series ☆93 · Updated last month
- The code repository for the paper "Competition and Attraction Improve Model Fusion" ☆165 · Updated 2 months ago
- Training teachers with reinforcement learning to teach LLMs to reason for test-time scaling ☆349 · Updated 4 months ago
- Simple & Scalable Pretraining for Neural Architecture Research ☆299 · Updated 2 weeks ago
- Tiny Model, Big Logic: Diversity-Driven Optimization Elicits Large-Model Reasoning Ability in VibeThinker-1.5B ☆155 · Updated last week
- ☆300 · Updated 3 months ago
- Research code artifacts for Code World Model (CWM), including inference tools, reproducibility, and documentation ☆712 · Updated last month
- Official PyTorch implementation of Hogwild! Inference: Parallel LLM Generation with a Concurrent Attention Cache ☆129 · Updated 3 months ago
- [EMNLP 2025] The official implementation of the paper "Agentic-R1: Distilled Dual-Strategy Reasoning" ☆101 · Updated 2 months ago
- Sparse inferencing for transformer-based LLMs ☆208 · Updated 3 months ago
- Accompanying material for the sleep-time compute paper ☆117 · Updated 6 months ago
- GRadient-INformed MoE ☆264 · Updated last year
- Official implementation of "Continuous Autoregressive Language Models" ☆584 · Updated last week
- [ACL 2025] How Do LLMs Acquire New Knowledge? A Knowledge Circuits Perspective on Continual Pre-Training ☆44 · Updated 4 months ago
- LIMI: Less is More for Agency ☆148 · Updated last month
- Matrix (Multi-Agent daTa geneRation Infra and eXperimentation framework) is a versatile engine for multi-agent conversational data genera… ☆101 · Updated this week
- Train, tune, and infer the Bamba model ☆136 · Updated 5 months ago
- The official code implementation of "Cache-to-Cache: Direct Semantic Communication Between Large Language Models" ☆259 · Updated 2 weeks ago
- EvaByte: Efficient Byte-level Language Models at Scale ☆110 · Updated 6 months ago
- Nexusflow function call, tool use, and agent benchmarks ☆29 · Updated 11 months ago
- PyTorch implementation of models from the Zamba2 series ☆185 · Updated 9 months ago
- Lightweight toolkit package to train and fine-tune 1.58-bit language models ☆98 · Updated 6 months ago
- ☆702 · Updated last month
- [ACL 2024] Do Large Language Models Latently Perform Multi-Hop Reasoning? ☆82 · Updated 8 months ago
- ☆62 · Updated 4 months ago
- Efficient non-uniform quantization with GPTQ for GGUF ☆53 · Updated 2 months ago
- RLP: Reinforcement as a Pretraining Objective ☆200 · Updated last month
- ☆180 · Updated 3 months ago
- Code to accompany the Universal Deep Research paper (https://arxiv.org/abs/2509.00244) ☆449 · Updated 2 months ago