Dan-wanna-M / kbnf
A high-performance constrained decoding engine for context-free grammars, written in Rust
☆56 · Updated 6 months ago
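The core idea behind a grammar-constrained decoding engine like kbnf can be sketched in a few lines: at each decoding step, mask out vocabulary tokens that would take the output outside the grammar. The sketch below is a conceptual illustration only, it does not use the kbnf API, and the toy "grammar" (an integer literal, `[0-9]+`) and helper names are invented for the example.

```rust
// Conceptual sketch of grammar-constrained decoding (NOT the kbnf API).
// At each step, only tokens whose concatenation with the current prefix
// remains a valid prefix of some grammar-accepted string are allowed.

/// Toy grammar: an integer literal `[0-9]+`. A string is a valid prefix
/// of a match iff every character is an ASCII digit (the empty string
/// is a valid prefix too).
fn is_valid_prefix(text: &str) -> bool {
    text.chars().all(|c| c.is_ascii_digit())
}

/// Given the decoded prefix so far and the vocabulary, keep only the
/// tokens that keep the output inside the grammar.
fn allowed_tokens<'a>(prefix: &str, vocab: &[&'a str]) -> Vec<&'a str> {
    vocab
        .iter()
        .copied()
        .filter(|tok| is_valid_prefix(&format!("{prefix}{tok}")))
        .collect()
}

fn main() {
    let vocab = ["12", "a", "3", "x9", "07"];
    // Only digit-only continuations survive the mask.
    let allowed = allowed_tokens("4", &vocab);
    println!("{allowed:?}"); // prints ["12", "3", "07"]
}
```

A production engine differs mainly in scale: it precompiles the grammar into an automaton so that the allowed-token mask for the full vocabulary can be computed in near-constant time per step, rather than re-checking every token as above.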
Alternatives and similar repositories for kbnf
Users interested in kbnf are comparing it to the libraries listed below.
- Experimental compiler for deep learning models ☆71 · Updated 2 months ago
- ☆58 · Updated 2 years ago
- Faster structured generation ☆262 · Updated last month
- Fast serverless LLM inference, in Rust ☆108 · Updated last month
- Code for fine-tuning LLMs with GRPO for Rust programming, using cargo as feedback ☆112 · Updated 8 months ago
- ☆36 · Updated last year
- Implementation of LLaVA using Candle ☆15 · Updated last year
- ☆135 · Updated last year
- ☆19 · Updated last year
- Inference Llama 2 in one file of zero-dependency, zero-unsafe Rust ☆39 · Updated 2 years ago
- Dataflow is a data processing library, primarily for machine learning ☆24 · Updated 2 years ago
- Modular Rust transformer/LLM library using Candle ☆37 · Updated last year
- A collection of optimisers for use with Candle ☆44 · Updated this week
- Inference engine for GLiNER models, in Rust ☆79 · Updated 2 weeks ago
- High-performance MinHash implementation in Rust with Python bindings for efficient similarity estimation and deduplication of large datas… ☆217 · Updated 2 months ago
- Low-rank adaptation (LoRA) for Candle ☆168 · Updated 7 months ago
- Locality-Sensitive Hashing ☆76 · Updated 2 years ago
- Structured outputs for LLMs ☆52 · Updated last year
- TensorRT-LLM server with Structured Outputs (JSON), built with Rust ☆62 · Updated 7 months ago
- GPU-based FFT written in Rust and CubeCL ☆24 · Updated 5 months ago
- An extension library for Candle that provides PyTorch functions not currently available in Candle ☆40 · Updated last year
- A Keras-like abstraction layer on top of the Rust ML framework Candle ☆23 · Updated last year
- 8-bit floating-point types for Rust ☆61 · Updated this week
- Fast and versatile tokenizer for language models, compatible with SentencePiece, Tokenizers, Tiktoken and more. Supports BPE, Unigram and… ☆39 · Updated last month
- Efficient platform for inference and serving of local LLMs, including an OpenAI-compatible API server ☆540 · Updated this week
- Rust client for the Hugging Face Hub, aiming for a minimal subset of the features of the `huggingface-hub` Python package ☆243 · Updated 2 weeks ago
- ☆26 · Updated 7 months ago
- Implementation of the BitNet model in Rust ☆42 · Updated last year
- A single-binary, GPU-accelerated LLM server (HTTP and WebSocket API) written in Rust ☆79 · Updated last year
- High-level, optionally asynchronous Rust bindings to llama.cpp ☆235 · Updated last year