s-smits / grpo-optuna
Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna
☆59 · Updated last month
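The description above pairs GRPO training (with weighted reward functions) with Optuna-driven hyperparameter search. Below is a minimal sketch of what such a loop can look like; it is not code from the repository. The `train_and_evaluate_grpo` hook, the reward-weight names, and all search ranges are assumed placeholders, with a toy scoring stand-in so the script runs as-is.

```python
import optuna


def train_and_evaluate_grpo(learning_rate: float, kl_coeff: float,
                            reward_weights: dict[str, float]) -> float:
    """Hypothetical hook: run one GRPO training job with these hyperparameters
    and return a validation score. Stubbed with a toy value so the script runs
    end to end; replace with a real GRPO trainer and evaluation harness."""
    # Toy stand-in score (NOT a real training run).
    return -abs(learning_rate - 5e-5) - abs(kl_coeff - 0.05) + sum(reward_weights.values())


def objective(trial: optuna.Trial) -> float:
    # Sample GRPO hyperparameters; ranges here are illustrative only.
    learning_rate = trial.suggest_float("learning_rate", 1e-6, 1e-4, log=True)
    kl_coeff = trial.suggest_float("kl_coeff", 0.0, 0.2)
    # Weights for multiple reward functions, as in a weighted-reward GRPO setup.
    reward_weights = {
        "correctness": trial.suggest_float("w_correctness", 0.5, 1.0),
        "format": trial.suggest_float("w_format", 0.0, 0.5),
    }
    return train_and_evaluate_grpo(learning_rate, kl_coeff, reward_weights)


study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20)
print("Best hyperparameters:", study.best_params)
```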
Alternatives and similar repositories for grpo-optuna
Users interested in grpo-optuna are comparing it to the repositories listed below:
- ☆68 · Updated 6 months ago
- Simple GRPO scripts and configurations. ☆59 · Updated 9 months ago
- ☆55 · Updated last year
- Train your own SOTA deductive reasoning model ☆107 · Updated 8 months ago
- Streamline on-policy/off-policy distillation workflows in a few lines of code ☆65 · Updated last week
- Project code for training LLMs to write better unit tests + code ☆21 · Updated 6 months ago
- Lego for GRPO ☆30 · Updated 6 months ago
- look how they massacred my boy ☆63 · Updated last year
- Latent Large Language Models ☆19 · Updated last year
- an open source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆108 · Updated 8 months ago
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models. ☆99 · Updated 4 months ago
- ☆52 · Updated 9 months ago
- ☆40 · Updated last year
- Training an LLM to use a calculator with multi-turn reinforcement learning, achieving a **62% absolute increase in evaluation accuracy**. ☆60 · Updated 6 months ago
- Official homepage for "Self-Harmonized Chain of Thought" (NAACL 2025) ☆91 · Updated 10 months ago
- Nexusflow function call, tool use, and agent benchmarks. ☆30 · Updated 11 months ago
- An introduction to LLM Sampling ☆79 · Updated 11 months ago
- ☆15 · Updated 7 months ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆173 · Updated 10 months ago
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated last year
- The Benefits of a Concise Chain of Thought on Problem Solving in Large Language Models ☆22 · Updated last year
- Maya: An Instruction Finetuned Multilingual Multimodal Model using Aya ☆123 · Updated 3 months ago
- Small, simple agent task environments for training and evaluation ☆19 · Updated last year
- QAlign is a new test-time alignment approach that improves language model performance by using Markov chain Monte Carlo methods. ☆25 · Updated 3 weeks ago
- Luth is a state-of-the-art series of fine-tuned LLMs for French ☆40 · Updated last month
- Verifiers for LLM Reinforcement Learning ☆80 · Updated 7 months ago
- ☆19 · Updated last year
- Efficient non-uniform quantization with GPTQ for GGUF ☆53 · Updated 2 months ago
- Collection of autoregressive model implementation ☆86 · Updated 7 months ago
- [ACL 2024] Do Large Language Models Latently Perform Multi-Hop Reasoning? ☆84 · Updated 8 months ago