Dan-wanna-M / kbnf
A high-performance constrained decoding engine based on context-free grammar, in Rust
☆55 · Updated 4 months ago
Alternatives and similar repositories for kbnf
Users interested in kbnf are comparing it to the libraries listed below.
- Faster structured generation ☆252 · Updated 4 months ago
- Fast serverless LLM inference, in Rust. ☆93 · Updated 7 months ago
- This repository has code for fine-tuning LLMs with GRPO specifically for Rust programming, using cargo as feedback ☆105 · Updated 6 months ago
- Experimental compiler for deep learning models ☆67 · Updated 2 weeks ago
- implement llava using candle ☆15 · Updated last year
- ☆133 · Updated last year
- ☆33 · Updated 10 months ago
- Modular Rust transformer/LLM library using Candle ☆37 · Updated last year
- High-performance MinHash implementation in Rust with Python bindings for efficient similarity estimation and deduplication of large datasets ☆204 · Updated 2 months ago
- Inference Llama 2 in one file of zero-dependency, zero-unsafe Rust ☆39 · Updated 2 years ago
- ☆20 · Updated 11 months ago
- ☆58 · Updated 2 years ago
- A collection of optimisers for use with candle ☆40 · Updated last month
- Inference engine for GLiNER models, in Rust ☆70 · Updated 3 months ago
- TensorRT-LLM server with Structured Outputs (JSON) built with Rust ☆60 · Updated 5 months ago
- Efficient platform for inference and serving of local LLMs, including an OpenAI-compatible API server. ☆473 · Updated last week
- Low rank adaptation (LoRA) for Candle. ☆162 · Updated 5 months ago
- Formatron empowers everyone to control the format of language models' output with minimal overhead. ☆225 · Updated 3 months ago
- Structured outputs for LLMs ☆51 · Updated last year
- Andrej Karpathy's Let's build GPT: from scratch video & notebook implemented in Rust + candle ☆76 · Updated last year
- ☆24 · Updated 5 months ago
- GPU-based FFT written in Rust and CubeCL ☆23 · Updated 3 months ago
- An extension library to Candle that provides PyTorch functions not currently available in Candle ☆39 · Updated last year
- High-level, optionally asynchronous Rust bindings to llama.cpp ☆229 · Updated last year
- Unofficial Rust bindings to Apple's mlx framework ☆192 · Updated last week
- A single-binary, GPU-accelerated LLM server (HTTP and WebSocket API) written in Rust ☆79 · Updated last year
- A Keras-like abstraction layer on top of the Rust ML framework candle ☆23 · Updated last year
- LLaMa 7b with CUDA acceleration implemented in Rust. Minimal GPU memory needed! ☆109 · Updated 2 years ago
- Locality Sensitive Hashing ☆74 · Updated 2 years ago
- Implementing the BitNet model in Rust ☆39 · Updated last year