SakanaAI / RLT
Training teachers with reinforcement learning to teach LLMs how to reason for test-time scaling.
☆295 · Updated 3 weeks ago
Alternatives and similar repositories for RLT
Users interested in RLT are comparing it to the libraries listed below.
- Tina: Tiny Reasoning Models via LoRA ☆266 · Updated last month
- Official PyTorch implementation of Hogwild! Inference: Parallel LLM Generation with a Concurrent Attention Cache ☆112 · Updated this week
- ☆210 · Updated 4 months ago
- Official code repository for the paper "Distilling LLM Agent into Small Models with Retrieval and Code Tools" ☆115 · Updated last month
- ☆156 · Updated 2 months ago
- Source code for the collaborative reasoner research project at Meta FAIR ☆94 · Updated 2 months ago
- [ACL 2025] Agentic Reward Modeling: Integrating Human Preferences with Verifiable Correctness Signals for Reliable Reward Systems ☆95 · Updated last month
- Code for the paper "Learning to Reason without External Rewards" ☆319 · Updated this week
- OpenCoconut implements a latent reasoning paradigm in which thoughts are generated before decoding ☆173 · Updated 6 months ago
- Accompanying material for the sleep-time compute paper ☆97 · Updated 2 months ago
- [ACL 2024] Do Large Language Models Latently Perform Multi-Hop Reasoning? ☆71 · Updated 3 months ago
- Code accompanying the public release of the paper "Lost in Conversation" (https://arxiv.org/abs/2505.06120) ☆141 · Updated 3 weeks ago
- ☆80 · Updated 2 weeks ago
- SiriuS: Self-Improving Multi-Agent Systems via Bootstrapped Reasoning ☆60 · Updated this week
- Scaling RL on advanced reasoning models ☆392 · Updated this week
- Train your own SOTA deductive reasoning model ☆96 · Updated 4 months ago
- The official repository of ALE-Bench ☆98 · Updated this week
- Official code repository for Sketch-of-Thought (SoT) ☆124 · Updated 2 months ago
- GRadient-INformed MoE ☆263 · Updated 9 months ago
- Official implementation of the paper "Soft Thinking: Unlocking the Reasoning Potential of LLMs in Continuous Concept Space" ☆184 · Updated this week
- Exploring Applications of GRPO ☆240 · Updated this week
- Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models ☆216 · Updated 2 weeks ago
- ☆162 · Updated 2 months ago
- Complex function-calling benchmark ☆117 · Updated 5 months ago
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… ☆341 · Updated 7 months ago
- Repo for "Z1: Efficient Test-time Scaling with Code" ☆63 · Updated 3 months ago
- Resources for the paper "Agent-R: Training Language Model Agents to Reflect via Iterative Self-Training" ☆150 · Updated last month
- EvaByte: Efficient Byte-level Language Models at Scale ☆103 · Updated 2 months ago
- Multi-granularity LLM debugger ☆82 · Updated last week
- ☆128 · Updated 3 months ago
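The memory-layers entry above describes a trainable key-value lookup that adds parameters without adding FLOPs. A minimal NumPy sketch of that idea is shown below; the function and array names are illustrative and not taken from the linked repository, which may use a different (e.g. product-key) lookup scheme.

```python
import numpy as np

def memory_layer(query, keys, values, k=4):
    """Sparse key-value memory lookup (minimal sketch).

    Scores the query against every key, keeps only the top-k matches,
    and returns a softmax-weighted sum of their values. The key/value
    tables can grow very large (more parameters) while per-token compute
    stays dominated by the small top-k selection and mixing step.
    """
    scores = keys @ query                       # similarity to each key, shape (num_keys,)
    topk = np.argpartition(scores, -k)[-k:]     # indices of the k highest-scoring keys
    w = np.exp(scores[topk] - scores[topk].max())
    w /= w.sum()                                # softmax over the selected keys only
    return w @ values[topk]                     # mixed value vector, shape (dim,)

# Toy usage: 1024 memory slots of dimension 32.
rng = np.random.default_rng(0)
num_keys, dim = 1024, 32
keys = rng.standard_normal((num_keys, dim))
values = rng.standard_normal((num_keys, dim))
out = memory_layer(rng.standard_normal(dim), keys, values, k=4)
print(out.shape)
```

In a real model, `keys` and `values` would be learned parameters and the query would come from a hidden state; the sparsity of the top-k selection is what keeps FLOPs roughly flat as the tables grow.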