lxe / wasm-gpt
Tensor library for machine learning
☆ 273 · Updated 2 years ago
Alternatives and similar repositories for wasm-gpt
Users interested in wasm-gpt are comparing it to the libraries listed below.
- JS tokenizer for LLaMA 1 and 2 · ☆ 361 · Updated last year
- Add local LLMs to your Web or Electron apps! Powered by Rust + WebGPU · ☆ 106 · Updated 2 years ago
- Generates llama.cpp grammars directly from TypeScript interfaces · ☆ 140 · Updated last year
- OpenAI-compatible Python client that can call any LLM · ☆ 372 · Updated 2 years ago
- 🦜️🔗 A very simple re-implementation of LangChain in ~100 lines of code · ☆ 254 · Updated 2 years ago
- Run GGML models with Kubernetes · ☆ 174 · Updated last year
- WebGPU LLM inference tuned by hand · ☆ 150 · Updated 2 years ago
- ☆ 144 · Updated 2 years ago
- Simple repo that compiles and runs llama2.c on the Web · ☆ 57 · Updated last year
- Revealing example of self-attention, the building block of transformer AI models · ☆ 130 · Updated 2 years ago
- Enforce structured output from LLMs 100% of the time · ☆ 248 · Updated last year
- Extends the original llama.cpp repo to support the RedPajama model · ☆ 118 · Updated last year
- Layered, depth-first reading: start with summaries, tap to explore details, and gain clarity on complex topics · ☆ 274 · Updated 2 years ago
- An implementation of bucketMul LLM inference · ☆ 223 · Updated last year
- Augment GPT-4 Environment Access · ☆ 284 · Updated 2 years ago
- Browser-compatible JS library for running language models · ☆ 232 · Updated 3 years ago
- Code ChatGPT Plugin is a TypeScript code analyzer that enables ChatGPT to "talk" with YOUR code · ☆ 240 · Updated last year
- Web-optimized vector database (written in Rust) · ☆ 258 · Updated 8 months ago
- Tool to create a dataset for semantic segmentation of website screenshots from their DOM · ☆ 89 · Updated 2 years ago
- Call any LLM with a single API. Zero dependencies. · ☆ 215 · Updated 2 years ago
- https://ermine.ai -- 100% client-side live audio transcription, powered by transformers.js · ☆ 324 · Updated 2 years ago
- Easy-to-use headless React Hooks to run LLMs in the browser with WebGPU. Just useLLM(). · ☆ 698 · Updated 2 years ago
- Extensible AI assistant platform that bridges LLMs to tasks and actions · ☆ 39 · Updated 2 years ago
- LLaMA Cog template · ☆ 303 · Updated last year
- Port of MiniGPT-4 in C++ (4-bit, 5-bit, 6-bit, 8-bit, and 16-bit CPU inference with GGML) · ☆ 568 · Updated 2 years ago
- Spying on Apple's new predictive text model · ☆ 134 · Updated last year
- Vercel and web-llm template to run wasm models directly in the browser · ☆ 164 · Updated last year
- Next-token prediction in JavaScript: build fast language and diffusion models · ☆ 143 · Updated last year
- Tiny inference-only implementation of LLaMA · ☆ 92 · Updated last year
- Definition for Open Weights Licensing · ☆ 145 · Updated last year