zurawiki / tiktoken-rs
Ready-made tokenizer library for working with GPT and tiktoken
☆277 · Updated last week
Alternatives and similar repositories for tiktoken-rs:
Users interested in tiktoken-rs are comparing it to the libraries listed below.
- Rust client for the Qdrant vector search engine ☆245 · Updated this week
- Rust library for generating vector embeddings and reranking locally ☆397 · Updated this week
- pgvector support for Rust ☆140 · Updated 2 months ago
- Fast, streaming indexing, query, and agent library for building LLM applications in Rust ☆348 · Updated this week
- OpenAI API client library for Rust (unofficial) ☆351 · Updated this week
- An unofficial Rust library for the OpenAI API ☆76 · Updated this week
- Llama2 LLM ported to Rust burn ☆278 · Updated 9 months ago
- Rust client for the Hugging Face Hub, aiming for a minimal subset of the features of the `huggingface-hub` Python package ☆169 · Updated this week
- An approximate nearest neighbors library in Rust, based on random projections and LMDB and optimized for memory usage ☆234 · Updated last week
- llama.cpp Rust bindings ☆354 · Updated 6 months ago
- A Rust implementation of OpenAI's Whisper model using the burn framework ☆284 · Updated 8 months ago
- Rust library for OpenAI ☆1,243 · Updated this week
- LLM orchestrator built in Rust ☆267 · Updated 10 months ago
- Tutorial for porting PyTorch Transformer models to Candle (Rust) ☆272 · Updated 5 months ago
- A simple Rust library for the OpenAI API, free of complex async operations and redundant dependencies ☆118 · Updated 6 months ago
- Rust multi-provider generative AI client (Ollama, OpenAI, Anthropic, Groq, Gemini, Cohere, ...) ☆269 · Updated this week
- Rust implementation of the HNSW algorithm (Malkov & Yashunin) ☆158 · Updated 3 weeks ago
- High-level, optionally asynchronous Rust bindings to llama.cpp ☆194 · Updated 7 months ago
- ONNX neural network inference engine ☆143 · Updated this week
- Inference of Llama 2 in one file of pure Rust 🦀 ☆231 · Updated last year
- Rust language bindings for Faiss ☆206 · Updated 3 months ago
- A well-maintained fork of the dotenv crate ☆801 · Updated 3 months ago
- Efficient platform for inference and serving of local LLMs, including an OpenAI-compatible API server ☆289 · Updated this week
- A high-performance, zero-copy URL router ☆416 · Updated 2 weeks ago
- ☆255 · Updated last month
- In-memory vector store with efficient read and write performance for semantic caching and retrieval systems; a Redis for semantic caching ☆358 · Updated last month
- Rust bindings to https://github.com/ggerganov/whisper.cpp ☆733 · Updated last month
- Low-rank adaptation (LoRA) for Candle ☆134 · Updated 4 months ago
- An async client for Valkey and Redis ☆419 · Updated 2 weeks ago
- Rust+OpenCL+AVX2 implementation of LLaMA inference code ☆541 · Updated 11 months ago