Oxen-AI / GRPO-With-Cargo-Feedback
This repository contains code for fine-tuning LLMs on Rust programming with GRPO, using cargo as the feedback signal; a sketch of the reward loop follows below.
☆114, updated 10 months ago
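A minimal sketch of the cargo-as-feedback idea, assuming a TRL-style GRPO reward function that compiles each model completion in a throwaway crate and returns 1.0 on a clean build. The helper names (`extract_rust_code`, `cargo_reward`) and the crate layout are illustrative assumptions, not the repository's actual API.

```python
import re
import subprocess
import tempfile
from pathlib import Path


def extract_rust_code(completion: str) -> str:
    """Pull the first rust-fenced code block out of a completion (hypothetical helper)."""
    match = re.search(r"```rust\n(.*?)```", completion, re.DOTALL)
    return match.group(1) if match else completion


def cargo_reward(completions: list[str], **kwargs) -> list[float]:
    """Score each completion by whether `cargo build` succeeds in a throwaway crate."""
    rewards = []
    for completion in completions:
        code = extract_rust_code(completion)
        with tempfile.TemporaryDirectory() as tmp:
            crate = Path(tmp)
            # Minimal crate layout: Cargo.toml plus src/main.rs holding the generated code.
            (crate / "src").mkdir()
            (crate / "Cargo.toml").write_text(
                '[package]\nname = "candidate"\nversion = "0.1.0"\nedition = "2021"\n'
            )
            (crate / "src" / "main.rs").write_text(code)
            try:
                result = subprocess.run(
                    ["cargo", "build", "--quiet"],
                    cwd=crate, capture_output=True, timeout=120,
                )
                ok = result.returncode == 0
            except subprocess.TimeoutExpired:
                ok = False
            # Binary reward: 1.0 if the crate compiles, 0.0 otherwise.
            rewards.append(1.0 if ok else 0.0)
    return rewards
```

Richer signals (for example `cargo clippy` or `cargo test` results) can be folded into the same loop and combined into a weighted score.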
Alternatives and similar repositories for GRPO-With-Cargo-Feedback
Users interested in GRPO-With-Cargo-Feedback are comparing it to the libraries listed below.
- High-performance MinHash implementation in Rust with Python bindings for efficient similarity estimation and deduplication of large datas… (☆225, updated last week)
- Fast serverless LLM inference, in Rust (☆108, updated 2 months ago)
- ☆135, updated last year
- Inference Llama 2 in one file of zero-dependency, zero-unsafe Rust (☆39, updated 2 years ago)
- A high-performance constrained decoding engine based on context-free grammar, in Rust (☆58, updated 7 months ago)
- A single-binary, GPU-accelerated LLM server (HTTP and WebSocket API) written in Rust (☆79, updated 2 years ago)
- Implement LLaVA using Candle (☆15, updated last year)
- TensorRT-LLM server with Structured Outputs (JSON) built with Rust (☆65, updated 8 months ago)
- ☆140, updated last year
- Train your own SOTA deductive reasoning model (☆107, updated 10 months ago)
- A performance-centered DSPy rewrite (not a port) in Rust (☆207, updated this week)
- ☆68, updated 7 months ago
- Rust implementation of Surya (☆64, updated 10 months ago)
- ☆67, updated last month
- Faster structured generation (☆270, updated last week)
- Unofficial Rust bindings to Apple's mlx framework (☆233, updated last week)
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models (☆101, updated 6 months ago)
- A tree-based prefix cache library that allows rapid creation of looms: hierarchical branching pathways of LLM generations (☆77, updated 11 months ago)
- Inference engine for GLiNER models, in Rust (☆81, updated last week)
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna (☆59, updated 3 months ago)
- Low-rank adaptation (LoRA) for Candle (☆169, updated 9 months ago)
- The Easiest Rust Interface for Local LLMs and an Interface for Deterministic Signals from Probabilistic LLM Vibes (☆239, updated 5 months ago)
- Built for demanding AI workflows, this gateway offers low-latency, provider-agnostic access, ensuring your AI applications run smoothly a… (☆88, updated 7 months ago)
- A simple, CUDA- or CPU-powered library for creating vector embeddings using Candle and models from Hugging Face (☆46, updated last year)
- Experimental compiler for deep learning models (☆74, updated 4 months ago)
- Candle Pipelines provides a simple, intuitive interface for Rust developers who want to work with Large Language Models locally, powered … (☆21, updated 2 weeks ago)
- A collection of optimisers for use with candle (☆45, updated 3 weeks ago)
- ☆13, updated 3 weeks ago
- ☆38, updated 5 months ago
- OpenAI-compatible API for serving the LLaMA-2 model (☆218, updated 2 years ago); a minimal example request against such an endpoint is sketched below
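For the OpenAI-compatible server in the last entry, "compatible" means existing clients can talk to it by overriding the base URL. A minimal sketch using the official `openai` Python client, where the port, API key, and model name are assumptions:

```python
from openai import OpenAI

# Point the standard client at the local server; URL, key, and model name are placeholders.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="llama-2-7b-chat",
    messages=[{"role": "user", "content": "Summarize what GRPO is in one sentence."}],
)
print(response.choices[0].message.content)
```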