Oxen-AI / GRPO-With-Cargo-Feedback
This repository contains code for fine-tuning LLMs with GRPO specifically for Rust programming, using cargo as feedback.
☆105 · Updated 6 months ago
Alternatives and similar repositories for GRPO-With-Cargo-Feedback
Users interested in GRPO-With-Cargo-Feedback are comparing it to the libraries listed below.
- High-performance MinHash implementation in Rust with Python bindings for efficient similarity estimation and deduplication of large datas… ☆204 · Updated 2 months ago
- ☆133 · Updated last year
- Train your own SOTA deductive reasoning model ☆107 · Updated 6 months ago
- Fast serverless LLM inference, in Rust. ☆93 · Updated 7 months ago
- ☆68 · Updated 4 months ago
- A DSPy rewrite in Rust (not a port) ☆104 · Updated this week
- Inference Llama 2 in one file of zero-dependency, zero-unsafe Rust ☆39 · Updated 2 years ago
- Rust implementation of Surya ☆60 · Updated 7 months ago
- A high-performance constrained decoding engine based on context-free grammar, in Rust ☆55 · Updated 4 months ago
- Optimizing causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆55 · Updated 8 months ago
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models. ☆98 · Updated 2 months ago
- Inference engine for GLiNER models, in Rust ☆71 · Updated 3 months ago
- Implementation of LLaVA using Candle ☆15 · Updated last year
- TensorRT-LLM server with structured outputs (JSON), built with Rust ☆60 · Updated 5 months ago
- A tree-based prefix cache library that allows rapid creation of looms: hierarchical branching pathways of LLM generations. ☆74 · Updated 7 months ago
- ☆139 · Updated last year
- Faster structured generation ☆252 · Updated 4 months ago
- Unofficial Rust bindings to Apple's MLX framework ☆192 · Updated last week
- Experimental compiler for deep learning models ☆67 · Updated 2 weeks ago
- ☆133 · Updated 6 months ago
- A single-binary, GPU-accelerated LLM server (HTTP and WebSocket API) written in Rust ☆79 · Updated last year
- Pivotal Token Search ☆126 · Updated 2 months ago
- Official Rust implementation of Model2Vec ☆138 · Updated this week
- Built for demanding AI workflows, this gateway offers low-latency, provider-agnostic access, ensuring your AI applications run smoothly a… ☆78 · Updated 4 months ago
- ☆35 · Updated 2 months ago
- Storing long contexts in tiny caches with self-study ☆192 · Updated 3 weeks ago
- Easy-to-use, high-performance knowledge distillation for LLMs ☆93 · Updated 5 months ago
- Simple examples using Argilla tools to build AI ☆55 · Updated 10 months ago
- A simple MLX implementation for pretraining LLMs on Apple Silicon. ☆83 · Updated last month
- Simple high-throughput inference library ☆139 · Updated 4 months ago