spirobel / bunny-llama
Iterate quickly with llama.cpp hot reloading. Use the llama.cpp bindings with bun.sh.
☆50Updated 2 years ago
Alternatives and similar repositories for bunny-llama
Users interested in bunny-llama are comparing it to the libraries listed below.
- Extracts structured data from unstructured input. Programming language agnostic. Uses llama.cpp☆45Updated last year
- WebGPU LLM inference tuned by hand☆151Updated 2 years ago
- Add local LLMs to your Web or Electron apps! Powered by Rust + WebGPU☆107Updated 2 years ago
- GPU accelerated client-side embeddings for vector search, RAG etc.☆65Updated 2 years ago
- ☆62Updated last year
- TypeScript generator that produces llama.cpp grammars directly from TypeScript interfaces☆141Updated last year
- What if an HNSW index was just a file, and you could serve it from a CDN, and search it directly in the browser?☆109Updated 9 months ago
- A library for incremental loading of large PyTorch checkpoints☆56Updated 2 years ago
- A Javascript library (with Typescript types) to parse metadata of GGML based GGUF files.☆51Updated last year
- GGML implementation of BERT model with Python bindings and quantization.☆58Updated last year
- Port of Microsoft's BioGPT in C/C++ using ggml☆85Updated last year
- JavaScript bindings for the ggml-js library☆45Updated 2 months ago
- Local Startup Advisor Chatbot☆32Updated 2 years ago
- Generates grammar files from TypeScript for LLM generation☆38Updated last year
- Turing machines, Rule 110, and A::B reversal using Claude 3 Opus.☆58Updated last year
- Web browser version of StarCoder.cpp☆45Updated 2 years ago
- A clone of OpenAI's Tokenizer page for HuggingFace Models☆45Updated 2 years ago
- ☆140Updated last year
- llama.cpp gguf file parser for javascript☆50Updated last year
- ☆31Updated 2 years ago
- Proof of concept for a generative AI application framework powered by WebAssembly and Extism☆14Updated 2 years ago
- tinygrad port of the RWKV large language model.☆45Updated 10 months ago
- asynchronous/distributed speculative evaluation for llama3☆39Updated last year
- Run AI models anywhere. https://muna.ai/explore☆75Updated this week
- Light WebUI for lm.rs☆24Updated last year
- Because it's there.☆16Updated last year
- utilities for loading and running text embeddings with onnx☆45Updated 5 months ago
- Editor with LLM generation tree exploration☆81Updated 11 months ago
- Unleash the full potential of exascale LLMs on consumer-class GPUs, proven by extensive benchmarks, with no long-term adjustments and min…☆26Updated last year
- trying to make WebGPU a bit easier to use☆18Updated 2 years ago