simplescaling / s1
s1: Simple test-time scaling
☆6,447 (updated last month)
Alternatives and similar repositories for s1
Users interested in s1 are comparing it to the repositories listed below.
- Minimal reproduction of DeepSeek R1-Zero (☆11,909, updated last month)
- Democratizing Reinforcement Learning for LLMs (☆3,378, updated last month)
- Sky-T1: Train your own O1 preview model within $450 (☆3,268, updated last month)
- Simple RL training for reasoning (☆3,627, updated 2 months ago)
- verl: Volcano Engine Reinforcement Learning for LLMs (☆9,710, updated this week)
- Fully open reproduction of DeepSeek-R1 (☆24,819, updated 2 weeks ago)
- Witness the aha moment of VLM with less than $3. (☆3,768, updated last month)
- Fully open data curation for reasoning models (☆1,921, updated 2 weeks ago)
- (☆3,363, updated 3 months ago)
- SGLang is a fast serving framework for large language models and vision language models. (☆15,276, updated this week)
- The official repo of MiniMax-Text-01 and MiniMax-VL-01, large-language-model & vision-language-model based on Linear Attention (☆2,891, updated this week)
- A live-stream development of RL tuning for LLM agents (☆3,022, updated 3 weeks ago)
- Search-R1: An Efficient, Scalable RL Training Framework for Reasoning & Search Engine Calling interleaved LLM based on veRL (☆2,612, updated 2 weeks ago)
- Curated list of datasets and tools for post-training. (☆3,158, updated 4 months ago)
- Official PyTorch implementation for "Large Language Diffusion Models" (☆2,332, updated this week)
- Qwen2.5-VL is the multimodal large language model series developed by the Qwen team, Alibaba Cloud. (☆10,997, updated last month)
- Janus-Series: Unified Multimodal Understanding and Generation Models (☆17,380, updated 4 months ago)
- (☆3,717, updated last month)
- AllenAI's post-training codebase (☆3,018, updated this week)
- An easy-to-use, scalable, and high-performance RLHF framework based on Ray (PPO & GRPO & REINFORCE++ & vLLM & Ray & Dynamic Sampling & Asy… (☆7,145, updated this week)
- Everything about the SmolLM2 and SmolVLM family of models (☆2,574, updated 2 months ago)
- The simplest, fastest repository for training/finetuning small-sized VLMs. (☆3,418, updated this week)
- Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train Qwen3, Llama 4, DeepSeek-R1, Gemma 3, TTS 2x faster with 70% less VRAM. (☆40,815, updated this week)
- Qwen2.5-Coder is the code version of Qwen2.5, the large language model series developed by the Qwen team, Alibaba Cloud. (☆5,013, updated this week)
- RAGEN leverages reinforcement learning to train LLM reasoning agents in interactive, stochastic environments. (☆1,990, updated 2 weeks ago)
- Qwen3 is the large language model series developed by the Qwen team, Alibaba Cloud. (☆22,102, updated last week)
- Agent framework and applications built upon Qwen>=3.0, featuring Function Calling, MCP, Code Interpreter, RAG, Chrome extension, etc. (☆9,608, updated this week)
- MoBA: Mixture of Block Attention for Long-Context LLMs (☆1,798, updated 2 months ago)
- 🤗 smolagents: a barebones library for agents that think in code. (☆20,182, updated this week)
- NanoGPT (124M) in 3 minutes (☆2,660, updated this week)