second-state / WasmEdge-WASINN-examples
☆255 · Updated this week
Alternatives and similar repositories for WasmEdge-WASINN-examples
Users interested in WasmEdge-WASINN-examples are comparing it to the libraries listed below.
- OpenAI compatible API for serving LLAMA-2 model · ☆218 · Updated last year
- Neural Network proposal for WASI · ☆519 · Updated 10 months ago
- The easiest & fastest way to run customized and fine-tuned LLMs locally or on the edge · ☆1,502 · Updated this week
- Approx nearest neighbor search in Rust · ☆166 · Updated 2 years ago
- A cross-platform browser ML framework · ☆718 · Updated 10 months ago
- Web-optimized vector database (written in Rust) · ☆255 · Updated 6 months ago
- Vercel and web-llm template to run wasm models directly in the browser · ☆160 · Updated last year
- Lightweight database clients in the WasmEdge Runtime · ☆71 · Updated last year
- ☆138 · Updated last year
- Tensor library for machine learning · ☆274 · Updated 2 years ago
- The Google mediapipe AI library. Write AI inference applications for image recognition, text classification, audio/video processing and… · ☆204 · Updated 11 months ago
- LLM Orchestrator built in Rust · ☆283 · Updated last year
- Efficient platform for inference and serving local LLMs, including an OpenAI-compatible API server (see the sketch after this list) · ☆469 · Updated this week
- JS tokenizer for LLaMA 1 and 2 · ☆359 · Updated last year
- A single-binary, GPU-accelerated LLM server (HTTP and WebSocket API) written in Rust · ☆79 · Updated last year
- Inference Llama 2 in one file of pure Rust 🦀 · ☆234 · Updated 2 years ago
- Rust implementation of Surya · ☆60 · Updated 6 months ago
- The Easiest Rust Interface for Local LLMs and an Interface for Deterministic Signals from Probabilistic LLM Vibes · ☆235 · Updated last month
- 🦀 Rust + Large Language Models - Make AI Services Freely and Easily · ☆182 · Updated last year
- Simple Rust applications that run in WasmEdge · ☆33 · Updated last year
- WebAssembly (Wasm) Build and Bindings for llama.cpp · ☆280 · Updated last year
- The MCP enterprise actors-based server, or mcp-ectors for short · ☆31 · Updated 3 months ago
- Rust framework for LLM orchestration · ☆203 · Updated last year
- 🚀 Develop and run serverless applications on WebAssembly · ☆52 · Updated last year
- Run any ML model from any programming language · ☆423 · Updated last year
- ☆132 · Updated last year
- A fast cross-platform AI inference engine 🤖 using Rust 🦀 and WebGPU 🎮 · ☆462 · Updated 8 months ago
- A template project to demonstrate how to run WebAssembly functions as sidecar microservices in dapr · ☆285 · Updated last year
- A fast llama2 decoder in pure Rust 🦀 · ☆1,055 · Updated last year
- Add local LLMs to your Web or Electron apps! Powered by Rust + WebGPU · ☆103 · Updated 2 years ago
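
Several entries in this list advertise an OpenAI-compatible API (for example the LLAMA-2 serving project and the inference platform flagged above). The sketch below shows, in rough terms, how a client talks to such a server over the standard `/v1/chat/completions` schema. It is only an illustration: the `localhost:8080` address, the model name, and the use of the `reqwest` and `serde_json` crates are assumptions, not details taken from any specific project in this list.

```rust
// Minimal sketch of calling an OpenAI-compatible chat endpoint.
// Assumed Cargo dependencies:
//   reqwest = { version = "0.12", features = ["blocking", "json"] }
//   serde_json = "1"
use serde_json::{json, Value};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = reqwest::blocking::Client::new();

    // Standard OpenAI-style chat completion request body.
    let body = json!({
        "model": "llama-2-7b-chat",   // hypothetical model name; use whatever your server loads
        "messages": [
            { "role": "user", "content": "Say hello in one sentence." }
        ]
    });

    // The host and port are assumptions; point this at your local server.
    let resp: Value = client
        .post("http://localhost:8080/v1/chat/completions")
        .json(&body)
        .send()?
        .json()?;

    // In the OpenAI schema the reply text lives at choices[0].message.content.
    println!("{}", resp["choices"][0]["message"]["content"]);
    Ok(())
}
```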