rosmineb / unit_test_rl
Project code for training LLMs to write better unit tests + code
☆20 · Updated last month
Alternatives and similar repositories for unit_test_rl
Users interested in unit_test_rl are comparing it to the repositories listed below.
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆53 · Updated 4 months ago
- Tiny evaluation of leading LLMs on competitive programming problems ☆14 · Updated 7 months ago
- ☆10 · Updated 2 months ago
- Simple GRPO scripts and configurations ☆58 · Updated 4 months ago
- ☆38 · Updated 11 months ago
- ☆63 · Updated last month
- The Benefits of a Concise Chain of Thought on Problem Solving in Large Language Models ☆22 · Updated 7 months ago
- Lego for GRPO ☆28 · Updated last month
- An alternative way of calculating self-attention ☆18 · Updated last year
- Simple repository for training small reasoning models ☆33 · Updated 4 months ago
- [WIP] Transformer to embed Danbooru labelsets ☆13 · Updated last year
- ☆47 · Updated 4 months ago
- Latent Large Language Models ☆18 · Updated 10 months ago
- Karpathy's llama2.c transpiled to MLX for Apple Silicon ☆15 · Updated last year
- look how they massacred my boy ☆63 · Updated 8 months ago
- Testing paligemma2 finetuning on a reasoning dataset ☆18 · Updated 6 months ago
- Official repo for Learning to Reason for Long-Form Story Generation ☆63 · Updated 2 months ago
- Fast, High-Fidelity LLM Decoding with Regex Constraints ☆20 · Updated 11 months ago
- ☆23 · Updated last month
- Synthetic data generation and benchmark implementation for "Episodic Memories Generation and Evaluation Benchmark for Large Language Mode… ☆45 · Updated 2 months ago
- NanoGPT-speedrunning for the poor T4 enjoyers ☆66 · Updated 2 months ago
- Training an LLM to use a calculator with multi-turn reinforcement learning, achieving a **62% absolute increase in evaluation accuracy** ☆41 · Updated last month
- Train a SmolLM-style LLM on fineweb-edu in JAX/Flax with an assortment of optimizers ☆17 · Updated 3 months ago
- ☆51 · Updated 7 months ago
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models ☆82 · Updated 3 weeks ago
- Using modal.com to process FineWeb-edu data ☆20 · Updated 2 months ago
- Using multiple LLMs for ensemble forecasting ☆16 · Updated last year
- ☆47 · Updated last year
- BH hackathon ☆14 · Updated last year
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite ☆33 · Updated last year