Danau5tin / calculator_agent_rl
Training an LLM to use a calculator with multi-turn reinforcement learning, achieving a **62% absolute increase in evaluation accuracy**.
☆49 · Updated 4 months ago
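The multi-turn tool-use setup the repo trains can be sketched as a minimal agent loop: the model emits a tool call, the environment executes it, and the result is appended to the transcript before the next model turn. This is an illustrative sketch only; the `<calc>` tag format, function names, and the scripted stand-in policy below are assumptions, not the repo's actual code.

```python
import ast
import operator

# Safe arithmetic evaluator acting as the calculator "tool".
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calc(expr: str) -> float:
    """Evaluate a basic arithmetic expression without using eval()."""
    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval"))

def run_episode(model, question: str, max_turns: int = 4) -> str:
    """Multi-turn loop: each <calc>...</calc> call is executed and its
    <result> is fed back into the transcript for the next model turn."""
    transcript = question
    reply = ""
    for _ in range(max_turns):
        reply = model(transcript)
        transcript += "\n" + reply
        if "<calc>" in reply:
            expr = reply.split("<calc>")[1].split("</calc>")[0]
            transcript += f"\n<result>{calc(expr)}</result>"
        else:
            return reply  # no tool call: treat as the final answer
    return reply

# Scripted stand-in for an LLM policy, just to show the control flow.
def scripted_model(transcript: str) -> str:
    if "<result>" not in transcript:
        return "<calc>12 * 7</calc>"
    return "The answer is 84."

print(run_episode(scripted_model, "What is 12 * 7?"))  # The answer is 84.
```

In an RL training setup, `scripted_model` would be replaced by the policy being trained, and the final answer would be scored against a reference to produce the reward.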
Alternatives and similar repositories for calculator_agent_rl
Users interested in calculator_agent_rl are comparing it to the repositories listed below.
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆172 · Updated 7 months ago
- Official repo for Learning to Reason for Long-Form Story Generation ☆69 · Updated 4 months ago
- ☆68 · Updated 3 months ago
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆55 · Updated 7 months ago
- ☆133 · Updated 5 months ago
- Train your own SOTA deductive reasoning model ☆106 · Updated 6 months ago
- Simple GRPO scripts and configurations. ☆59 · Updated 7 months ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆182 · Updated 6 months ago
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models. ☆97 · Updated last month
- Accompanying material for the sleep-time compute paper ☆108 · Updated 4 months ago
- [ACL 2024] Do Large Language Models Latently Perform Multi-Hop Reasoning? ☆77 · Updated 5 months ago
- Source code for the collaborative reasoner research project at Meta FAIR. ☆102 · Updated 4 months ago
- Functional Benchmarks and the Reasoning Gap ☆88 · Updated 11 months ago
- Training-Ready RL Environments + Evals ☆90 · Updated this week
- An introduction to LLM Sampling ☆79 · Updated 9 months ago
- ☆80 · Updated last week
- ☆39 · Updated last year
- Repository for the paper Stream of Search: Learning to Search in Language ☆150 · Updated 7 months ago
- ☆122 · Updated 6 months ago
- ☆54 · Updated 10 months ago
- Storing long contexts in tiny caches with self-study ☆179 · Updated last week
- QAlign is a new test-time alignment approach that improves language model performance by using Markov chain Monte Carlo methods. ☆24 · Updated this week
- ☆49 · Updated 7 months ago
- Simple & Scalable Pretraining for Neural Architecture Research ☆291 · Updated 3 weeks ago
- ☆99 · Updated this week
- rl from zero pretrain, can it be done? yes. ☆265 · Updated 3 weeks ago
- A framework for optimizing DSPy programs with RL ☆172 · Updated this week
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… ☆96 · Updated last month
- ☆88 · Updated last year
- Simple repository for training small reasoning models ☆40 · Updated 7 months ago