ngxson / wllama
WebAssembly binding for llama.cpp - Enabling in-browser LLM inference
☆430 · Updated last week
Related projects
Alternatives and complementary repositories for wllama
- WebAssembly (Wasm) Build and Bindings for llama.cpp ☆208 · Updated 3 months ago
- A cross-platform browser ML framework. ☆616 · Updated this week
- TypeScript generator for llama.cpp Grammar directly from TypeScript interfaces ☆131 · Updated 4 months ago
- The llama-cpp-agent framework is a tool designed for easy interaction with Large Language Models (LLMs), allowing users to chat with LLM … ☆488 · Updated 3 months ago
- FastMLX is a high-performance, production-ready API for hosting MLX models. ☆212 · Updated last week
- An application for running LLMs locally on your device, with your documents, facilitating detailed citations in generated responses. ☆487 · Updated last week
- Stateful load balancer custom-tailored for llama.cpp ☆556 · Updated this week
- 🕸️🦀 A WASM vector similarity search written in Rust ☆876 · Updated last year
- LLM-based code completion engine ☆173 · Updated last year
- Efficient visual programming for AI language models ☆298 · Updated last month
- Fast parallel LLM inference for MLX ☆146 · Updated 4 months ago
- Open source LLM UI, compatible with all local LLM providers. ☆165 · Updated last month
- Phi-3.5 for Mac: Locally-run Vision and Language Models for Apple Silicon ☆235 · Updated 2 months ago
- MLX-VLM is a package for running Vision LLMs locally on your Mac using MLX. ☆471 · Updated this week
- Vercel and web-llm template to run wasm models directly in the browser. ☆121 · Updated 11 months ago
- VSCode AI coding assistant powered by a self-hosted llama.cpp endpoint. ☆158 · Updated 3 months ago
- A simple UI / Web / Frontend for MLX mlx-lm using Streamlit. ☆225 · Updated last month
- A ggml (C++) re-implementation of tortoise-tts ☆155 · Updated 2 months ago
- Implementation of F5-TTS in MLX ☆309 · Updated last week
- Local semantic search. Stupidly simple. ☆389 · Updated 4 months ago
- llama.cpp with the BakLLaVA model, describing what it sees ☆380 · Updated last year
- 👾🍎 Apple MLX engine for LM Studio ☆171 · Updated this week
- Chat with AI large language models running natively in your browser. Enjoy private, server-free, seamless AI conversations. ☆311 · Updated this week
- Python bindings for ggml ☆132 · Updated 2 months ago
- LLM-powered lossless compression tool ☆252 · Updated 2 months ago
- A multimodal, function-calling-powered LLM webui. ☆205 · Updated last month