Oxen-AI / GRPO-With-Cargo-Feedback
This repository contains code for fine-tuning LLMs with GRPO specifically for Rust programming, using `cargo` as the feedback signal.
☆112 · Updated 9 months ago
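The core idea of using `cargo` as feedback is to turn compiler output into a scalar reward for GRPO. A minimal sketch of that scoring step is below, assuming the generated Rust code has already been checked with `cargo check --message-format=json`; the function name `cargo_reward` and the exact reward shaping (penalizing warnings, zeroing out on errors) are illustrative assumptions, not the repository's actual implementation.

```python
import json

def cargo_reward(cargo_json_output: str) -> float:
    """Score generated Rust code from `cargo check --message-format=json` output.

    Hypothetical shaping: 1.0 for a clean build, 0.1 deducted per warning
    (floored at 0.1), and 0.0 if any error is present. The repository's
    actual reward function may differ.
    """
    errors = 0
    warnings = 0
    for line in cargo_json_output.splitlines():
        line = line.strip()
        if not line:
            continue
        try:
            msg = json.loads(line)
        except json.JSONDecodeError:
            continue  # cargo interleaves non-JSON status lines on stderr
        # Diagnostics arrive as "compiler-message" records with a "level" field.
        if msg.get("reason") == "compiler-message":
            level = msg.get("message", {}).get("level")
            if level == "error":
                errors += 1
            elif level == "warning":
                warnings += 1
    if errors:
        return 0.0
    return max(0.1, 1.0 - 0.1 * warnings)
```

A GRPO trainer would call a function like this once per sampled completion, so the policy is rewarded for code that compiles cleanly rather than for surface-level similarity to reference solutions.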
Alternatives and similar repositories for GRPO-With-Cargo-Feedback
Users interested in GRPO-With-Cargo-Feedback are comparing it to the libraries listed below.
- High-performance MinHash implementation in Rust with Python bindings for efficient similarity estimation and deduplication of large datas…☆223 · Updated 2 weeks ago
- Implement LLaVA using Candle☆15 · Updated last year
- Fast serverless LLM inference, in Rust.☆108 · Updated last month
- ☆135 · Updated last year
- Train your own SOTA deductive reasoning model☆107 · Updated 9 months ago
- A high-performance constrained decoding engine based on context-free grammars, in Rust☆56 · Updated 7 months ago
- Inference engine for GLiNER models, in Rust☆81 · Updated last month
- ☆140 · Updated last year
- Optimizing causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna☆59 · Updated 2 months ago
- TensorRT-LLM server with Structured Outputs (JSON) built with Rust☆64 · Updated 8 months ago
- ☆68 · Updated 7 months ago
- Inference Llama 2 in one file of zero-dependency, zero-unsafe Rust☆39 · Updated 2 years ago
- The easiest Rust interface for local LLMs, and an interface for deterministic signals from probabilistic LLM vibes☆240 · Updated 4 months ago
- Faster structured generation☆265 · Updated 2 weeks ago
- Performance-centered DSPy rewrite (not a port) in Rust☆195 · Updated last week
- Super basic, gist-like implementation of RLMs with REPL environments☆286 · Updated 2 months ago
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models☆99 · Updated 5 months ago
- Rust implementation of Surya☆63 · Updated 9 months ago
- A tree-based prefix cache library that allows rapid creation of looms: hierarchical branching pathways of LLM generations☆77 · Updated 10 months ago
- Unofficial Rust bindings to Apple's mlx framework☆219 · Updated last week
- ☆13 · Updated this week
- Simple high-throughput inference library☆153 · Updated 7 months ago
- A single-binary, GPU-accelerated LLM server (HTTP and WebSocket API) written in Rust☆79 · Updated last year
- ☆36 · Updated 4 months ago
- look how they massacred my boy☆63 · Updated last year
- Transformers provides a simple, intuitive interface for Rust developers who want to work with Large Language Models locally, powered by t…☆21 · Updated this week
- ☆136 · Updated 9 months ago
- MLX port of xjdr's entropix sampler (mimics the JAX implementation)☆62 · Updated last year
- Lightweight toolkit package to train and fine-tune 1.58-bit language models☆104 · Updated 7 months ago
- NanoGPT (124M) quality in 2.67B tokens☆28 · Updated 3 months ago