edgenai / llama_cpp-rs
High-level, optionally asynchronous Rust bindings to llama.cpp
☆228 · Updated last year
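For orientation, below is a minimal sketch of using the crate's blocking API for text generation. The type and method names (`LlamaModel::load_from_file`, `create_session`, `advance_context`, `start_completing_with`, `StandardSampler`) follow the crate's README and may differ between versions; the model path and prompt are placeholders, and the crate also exposes an optional async interface not shown here.

```rust
// Sketch of blocking text generation with the `llama_cpp` crate (edgenai/llama_cpp-rs).
// Assumes a dependency like `llama_cpp = "0.3"`; module paths and signatures may
// vary between releases, so treat this as illustrative rather than pinned.
use std::io::Write;

use llama_cpp::standard_sampler::StandardSampler;
use llama_cpp::{LlamaModel, LlamaParams, SessionParams};

fn main() {
    // Load a GGUF model from disk (placeholder path).
    let model = LlamaModel::load_from_file("model.gguf", LlamaParams::default())
        .expect("could not load model");

    // A session holds per-conversation context on top of the shared weights.
    let mut ctx = model
        .create_session(SessionParams::default())
        .expect("failed to create session");

    // Feed the prompt into the session's context.
    ctx.advance_context("Write a haiku about Rust bindings.")
        .expect("failed to advance context");

    // Stream up to 256 completion tokens as strings. In some versions
    // `start_completing_with` returns a Result; adjust accordingly.
    let completions = ctx
        .start_completing_with(StandardSampler::default(), 256)
        .into_strings();

    for token in completions {
        print!("{token}");
        let _ = std::io::stdout().flush();
    }
}
```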
Alternatives and similar repositories for llama_cpp-rs
Users interested in llama_cpp-rs are comparing it to the libraries listed below.
- LLama.cpp rust bindings ☆398 · Updated last year
- ☆342 · Updated this week
- Low rank adaptation (LoRA) for Candle ☆154 · Updated 4 months ago
- Rust client for the Hugging Face Hub aiming for a minimal subset of features over the `huggingface-hub` Python package ☆219 · Updated 2 months ago
- Llama2 LLM ported to Rust burn ☆280 · Updated last year
- Efficient platform for inference and serving of local LLMs, including an OpenAI-compatible API server ☆431 · Updated last week
- The Easiest Rust Interface for Local LLMs and an Interface for Deterministic Signals from Probabilistic LLM Vibes ☆229 · Updated 2 weeks ago
- ONNX neural network inference engine ☆224 · Updated this week
- Inference Llama 2 in one file of pure Rust 🦀 ☆233 · Updated last year
- LLM Orchestrator built in Rust ☆279 · Updated last year
- Stable Diffusion v1.4 ported to Rust's burn framework ☆341 · Updated 10 months ago
- A Rust implementation of OpenAI's Whisper model using the burn framework ☆321 · Updated last year
- Rust+OpenCL+AVX2 implementation of LLaMA inference code ☆548 · Updated last year
- Rust bindings to https://github.com/k2-fsa/sherpa-onnx ☆206 · Updated 3 months ago
- Tutorial for Porting PyTorch Transformer Models to Candle (Rust)