vinhowe / piston
Train small sequence models in your browser with WebGPU.
☆32 · Updated 2 months ago
Alternatives and similar repositories for piston
Users interested in piston are comparing it to the libraries listed below.
- ☆71 · Updated last year
- Experimental compiler for deep learning models ☆75 · Updated 4 months ago
- Implementing the BitNet model in Rust ☆44 · Updated last year
- ☆58 · Updated 2 years ago
- A high-performance constrained decoding engine based on context-free grammar in Rust ☆58 · Updated 8 months ago
- ☆19 · Updated last month
- A pure, safe BTree for building your own special-purpose B-tree data structures ☆58 · Updated 4 months ago
- ☆12 · Updated last year
- A native Jupyter notebook frontend with local + remote kernels, reactive cells, and IDE features, implemented in Rust ☆133 · Updated last year
- Proof of concept for a generative AI application framework powered by WebAssembly and Extism ☆14 · Updated 2 years ago
- Trying to make WebGPU a bit easier to use ☆19 · Updated 2 years ago
- ☆15 · Updated 9 months ago
- Add local LLMs to your Web or Electron apps! Powered by Rust + WebGPU ☆106 · Updated 2 years ago
- ☆15 · Updated last year
- GPU-accelerated client-side embeddings for vector search, RAG, etc. ☆65 · Updated 2 years ago
- WebAssembly Component Model based REPL with a sandboxed multi-language plugin system; a unified codebase runs in CLI (Rust) and web (TypeScr… ☆53 · Updated 3 months ago
- TensorRT-LLM server with Structured Outputs (JSON) built with Rust ☆66 · Updated 9 months ago
- Tiny autograd engine written in Rust ☆60 · Updated last year
- An extensible CLI for integrating LLM models with a flexible scripting system ☆22 · Updated last year
- Rust port of annoy (https://github.com/spotify/annoy) ☆45 · Updated 5 months ago
- Experimental wasm32-unknown-wasi runtime for Python code execution ☆40 · Updated last year
- Iterate quickly with llama.cpp hot reloading; use the llama.cpp bindings with bun.sh ☆50 · Updated 2 years ago
- Accompanying source code for the article "How to Build a Semantic Search Engine in Rust" ☆23 · Updated 3 years ago
- Inference Llama 2 in one file of zero-dependency, zero-unsafe Rust ☆40 · Updated 2 years ago
- It's a baby compiler. (Lean btw.) ☆16 · Updated 8 months ago
- Bridging the gap between Wasm and native code ☆18 · Updated this week
- Wasm bindings for the Hugging Face tokenizers library ☆34 · Updated 3 years ago
- jsgrad is a dependency-free ML library in TypeScript for model inference and training, with support for WebGPU and other runtimes ☆62 · Updated 9 months ago
- Experimental ONNX implementation for WASI NN ☆48 · Updated 4 years ago
- Structured outputs for LLMs ☆53 · Updated last year