ngxson / wllama
WebAssembly binding for llama.cpp - Enabling on-browser LLM inference
☆912 · Updated 2 weeks ago
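For context on what wllama provides: it ships as the `@wllama/wllama` npm package and exposes a small JavaScript API for loading a GGUF model and running completions entirely in the browser. Below is a minimal sketch following the usage shape shown in the project's README; the asset paths, model URL, and sampling values are illustrative, not prescribed.

```ts
import { Wllama } from '@wllama/wllama';

// Paths to the wasm binaries bundled with the package. Where they end up
// depends on your bundler setup, so these paths are an assumption.
const CONFIG_PATHS = {
  'single-thread/wllama.wasm': '/assets/single-thread/wllama.wasm',
  'multi-thread/wllama.wasm': '/assets/multi-thread/wllama.wasm',
};

async function demo(): Promise<void> {
  const wllama = new Wllama(CONFIG_PATHS);

  // Download and load a (small) GGUF model over HTTP; URL is illustrative.
  await wllama.loadModelFromUrl(
    'https://huggingface.co/ggml-org/models/resolve/main/tinyllamas/stories260K.gguf',
  );

  // Run a completion entirely client-side, with no server round-trip.
  const output = await wllama.createCompletion('Once upon a time,', {
    nPredict: 50,
    sampling: { temp: 0.7, top_k: 40, top_p: 0.9 },
  });
  console.log(output);
}

demo();
```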
Alternatives and similar repositories for wllama
Users interested in wllama are comparing it to the libraries listed below.
- WebAssembly (Wasm) Build and Bindings for llama.cpp ☆282 · Updated last year
- A cross-platform browser ML framework. ☆718 · Updated 11 months ago
- Open-source LLM load balancer and serving platform for self-hosting LLMs at scale 🏓🦙 ☆1,337 · Updated last week
- ☆430 · Updated this week
- Chat with AI large language models running natively in your browser. Enjoy private, server-free, seamless AI conversations. ☆862 · Updated last month
- 🕸️🦀 A WASM vector similarity search written in Rust ☆1,000 · Updated 2 years ago
- VS Code extension for LLM-assisted code/text completion ☆1,001 · Updated last week
- FastMLX is a high performance production ready API to host MLX models. ☆331 · Updated 7 months ago
- EntityDB is an in-browser vector database wrapping indexedDB and Transformers.js over WebAssembly ☆226 · Updated 5 months ago
- Run Large-Language Models (LLMs) 🚀 directly in your browser! ☆219 · Updated last year
- LM Studio Apple MLX engine ☆799 · Updated this week
- Vercel and web-llm template to run wasm models directly in the browser. ☆164 · Updated last year
- The llama-cpp-agent framework is a tool designed for easy interaction with Large Language Models (LLMs). Allowing users to chat with LLM … ☆599 · Updated 8 months ago
- The easiest & fastest way to run customized and fine-tuned LLMs locally or on the edge ☆1,516 · Updated last week
- Big & Small LLMs working together ☆1,184 · Updated this week
- Local AI API Platform ☆2,758 · Updated 3 months ago
- MLX-VLM is a package for inference and fine-tuning of Vision Language Models (VLMs) on your Mac using MLX. ☆1,733 · Updated this week
- Vectra is a local vector database for Node.js with features similar to pinecone but built using local files. ☆530 · Updated 5 months ago
- On-device LLM Inference Powered by X-Bit Quantization ☆269 · Updated 2 months ago
- A collection of 🤗 Transformers.js demos and example applications ☆1,789 · Updated 3 weeks ago
- A JavaScript library that brings vector search and RAG to your browser! ☆151 · Updated last year
- An application for running LLMs locally on your device, with your documents, facilitating detailed citations in generated responses. ☆617 · Updated 11 months ago
- An extremely fast implementation of whisper optimized for Apple Silicon using MLX. ☆796 · Updated last year
- Pure C++ implementation of several models for real-time chatting on your computer (CPU & GPU) ☆721 · Updated last week
- SemanticFinder - frontend-only live semantic search with transformers.js ☆300 · Updated 6 months ago
- Large Language Models (LLMs) applications and tools running on Apple Silicon in real-time with Apple MLX. ☆456 · Updated 8 months ago
- TypeScript generator for llama.cpp Grammar directly from TypeScript interfaces ☆140 · Updated last year
- MLX Omni Server is a local inference server powered by Apple's MLX framework, specifically designed for Apple Silicon (M-series) chips. I… ☆583 · Updated last week
- GGUF implementation in C as a library and a tools CLI program ☆291 · Updated last month
- Gemma 2 optimized for your local machine. ☆376 · Updated last year