mithril-security / tokenizers-wasm
WASM bindings for the Hugging Face tokenizers library
☆34 · Updated 3 years ago
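For orientation, below is a minimal sketch of what WASM bindings over the Hugging Face `tokenizers` crate typically look like when built with `wasm-bindgen`. The `WasmTokenizer` type and its methods are illustrative assumptions, not the actual API of tokenizers-wasm, and compiling the `tokenizers` crate for a wasm target generally requires disabling its default native-only features.

```rust
// Illustrative sketch only: a wasm-bindgen wrapper around the `tokenizers` crate.
// `WasmTokenizer` is a hypothetical name, not the API exposed by tokenizers-wasm.
use tokenizers::Tokenizer;
use wasm_bindgen::prelude::*;

#[wasm_bindgen]
pub struct WasmTokenizer {
    inner: Tokenizer,
}

#[wasm_bindgen]
impl WasmTokenizer {
    /// Build a tokenizer from a serialized `tokenizer.json` string passed in from JavaScript.
    #[wasm_bindgen(constructor)]
    pub fn new(json: &str) -> Result<WasmTokenizer, JsValue> {
        let inner = Tokenizer::from_bytes(json.as_bytes())
            .map_err(|e| JsValue::from_str(&e.to_string()))?;
        Ok(WasmTokenizer { inner })
    }

    /// Encode a string and return its token ids (a Uint32Array on the JS side).
    pub fn encode(&self, text: &str) -> Result<Vec<u32>, JsValue> {
        let encoding = self
            .inner
            .encode(text, false)
            .map_err(|e| JsValue::from_str(&e.to_string()))?;
        Ok(encoding.get_ids().to_vec())
    }
}
```

From JavaScript, the generated module would then be used roughly as `new WasmTokenizer(tokenizerJson).encode("hello world")`.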
Alternatives and similar repositories for tokenizers-wasm
Users interested in tokenizers-wasm are comparing it to the libraries listed below.
- GPU-accelerated client-side embeddings for vector search, RAG, etc. ☆65 · Updated 2 years ago
- ☆39 · Updated 3 years ago
- Fast and versatile tokenizer for language models, compatible with SentencePiece, Tokenizers, Tiktoken and more. Supports BPE, Unigram and… ☆39 · Updated 2 months ago
- High-performance MinHash implementation in Rust with Python bindings for efficient similarity estimation and deduplication of large datas… ☆223 · Updated 3 weeks ago
- Simple high-throughput inference library ☆153 · Updated 7 months ago
- Run ONNX and TensorFlow inference in the browser. ☆75 · Updated 2 years ago
- Web browser version of StarCoder.cpp ☆45 · Updated 2 years ago
- Unofficial Python bindings for the Rust llm library. 🐍❤️🦀 ☆76 · Updated 2 years ago
- GGML implementation of the BERT model with Python bindings and quantization. ☆58 · Updated last year
- ReLM is a Regular Expression engine for Language Models ☆107 · Updated 2 years ago
- A complete (gRPC service and lib) Rust inference with multilingual embedding support. This version leverages the power of Rust for both GR… ☆39 · Updated last year
- Add local LLMs to your Web or Electron apps! Powered by Rust + WebGPU ☆107 · Updated 2 years ago
- Python bindings for ggml ☆146 · Updated last year
- Download full or partial git-lfs repos without temporarily using 2x disk space ☆30 · Updated 2 years ago
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts ☆24 · Updated last year
- ☆140 · Updated last year
- ☆157 · Updated 2 years ago
- Experiments on speculative sampling with Llama models ☆127 · Updated 2 years ago
- ☆135 · Updated last year
- A high-performance constrained decoding engine based on context-free grammar in Rust ☆56 · Updated 7 months ago
- Code repository for the paper "AdANNS: A Framework for Adaptive Semantic Search" ☆65 · Updated 2 years ago
- ☆35 · Updated 2 years ago
- Tune MPTs ☆84 · Updated 2 years ago
- Experiments with inference on llama ☆103 · Updated last year
- Optimizing bit-level Jaccard Index and Population Counts for large-scale quantized Vector Search via Harley-Seal CSA and Lookup Tables ☆21 · Updated 7 months ago
- PyLate efficient inference engine ☆68 · Updated 3 months ago
- Serialize JAX, Flax, Haiku, or Objax model params with 🤗 `safetensors` ☆47 · Updated last year
- Latent Large Language Models ☆19 · Updated last year
- Advanced Ultra-Low Bitrate Compression Techniques for the LLaMA Family of LLMs ☆110 · Updated last year
- TensorRT-LLM server with Structured Outputs (JSON) built with Rust ☆64 · Updated 8 months ago