Dan-wanna-M / kbnf
A high-performance constrained decoding engine based on context-free grammars, in Rust
☆54 · Updated last month
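For readers unfamiliar with the idea behind kbnf and the structured-generation libraries listed below, the Rust sketch that follows shows constrained decoding in miniature: at each generation step, vocabulary tokens that would take the output outside the grammar are masked out. This is a minimal, self-contained illustration using a toy balanced-parentheses grammar, not the kbnf API; every name in it is hypothetical.

```rust
// Toy illustration of grammar-constrained decoding (not the kbnf API; all
// names here are hypothetical). The "grammar" is balanced parentheses; at
// each step we keep only the vocabulary tokens whose addition leaves the
// output a viable prefix of some valid string.

/// Returns true if `s` is a prefix of some balanced-parentheses string.
fn is_viable_prefix(s: &str) -> bool {
    let mut depth: i32 = 0;
    for c in s.chars() {
        match c {
            '(' => depth += 1,
            ')' => depth -= 1,
            _ => return false, // symbol outside the grammar's alphabet
        }
        if depth < 0 {
            return false; // closed more parentheses than were opened
        }
    }
    true
}

/// Filters a token vocabulary down to the tokens allowed after `prefix`.
fn allowed_tokens<'a>(prefix: &str, vocab: &'a [&'a str]) -> Vec<&'a str> {
    vocab
        .iter()
        .copied()
        .filter(|tok| is_viable_prefix(&format!("{prefix}{tok}")))
        .collect()
}

fn main() {
    let vocab = ["(", ")", "()", "x"];
    // With an empty output, ")" and "x" would already violate the grammar.
    println!("{:?}", allowed_tokens("", &vocab));  // ["(", "()"]
    // After "(", a closing ")" becomes legal again.
    println!("{:?}", allowed_tokens("(", &vocab)); // ["(", ")", "()"]
}
```

A real engine does the same filtering against a full context-free grammar and a large tokenizer vocabulary, which is where the performance engineering in these libraries comes in.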
Alternatives and similar repositories for kbnf
Users interested in kbnf are comparing it to the libraries listed below
- Experimental compiler for deep learning models ☆68 · Updated last month
- Implement LLaVA using Candle ☆15 · Updated last year
- Fast serverless LLM inference, in Rust ☆88 · Updated 4 months ago
- Inference Llama 2 in one file of zero-dependency, zero-unsafe Rust ☆38 · Updated last year
- Modular Rust transformer/LLM library using Candle ☆36 · Updated last year
- Faster structured generation ☆230 · Updated last month
- ☆58 · Updated 2 years ago
- ☆130 · Updated last year
- High-performance MinHash implementation in Rust with Python bindings for efficient similarity estimation and deduplication of large datas… ☆187 · Updated 3 weeks ago
- Code for fine-tuning LLMs with GRPO, specifically for Rust programming, using cargo as feedback ☆97 · Updated 4 months ago
- ☆20 · Updated 9 months ago
- ☆30 · Updated 7 months ago
- GPU-based FFT written in Rust and CubeCL ☆23 · Updated last month
- A collection of optimisers for use with candle ☆36 · Updated last month
- TensorRT-LLM server with Structured Outputs (JSON) built with Rust ☆55 · Updated 2 months ago
- A Keras-like abstraction layer on top of the Rust ML framework candle ☆23 · Updated last year
- Inference engine for GLiNER models, in Rust ☆61 · Updated last week
- 8-bit floating point types for Rust ☆47 · Updated 4 months ago
- A simple, CUDA- or CPU-powered library for creating vector embeddings using Candle and models from Hugging Face ☆37 · Updated last year
- ☆23 · Updated 2 months ago
- Locality Sensitive Hashing ☆72 · Updated 2 years ago
- Fast and versatile tokenizer for language models, compatible with SentencePiece, Tokenizers, Tiktoken and more. Supports BPE, Unigram and… ☆26 · Updated 3 months ago
- Implementing the BitNet model in Rust ☆37 · Updated last year
- Tensor library for Zig ☆11 · Updated 7 months ago
- Low-rank adaptation (LoRA) for Candle ☆151 · Updated 2 months ago
- Efficient platform for inference and serving of local LLMs, including an OpenAI-compatible API server ☆390 · Updated this week
- Super-fast Structured Outputs ☆330 · Updated last week
- Port of Andrej Karpathy's minbpe to Rust ☆25 · Updated last year
- A single-binary, GPU-accelerated LLM server (HTTP and WebSocket API) written in Rust ☆80 · Updated last year
- Sample Python extension using Rust/PyO3/tch to interact with PyTorch ☆37 · Updated last year