IntrinsicLabsAI / gbnfgen
TypeScript generator for llama.cpp GBNF grammars, built directly from TypeScript interfaces
☆141 · Updated last year
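To make the idea concrete, here is a minimal sketch of the kind of mapping such a generator performs; it is not gbnfgen's actual API, and the field spec, helper names, and rule layout are all illustrative. It takes a flat field description standing in for a TypeScript interface and emits a llama.cpp GBNF grammar that constrains generation to a matching JSON object.

```typescript
// Illustrative sketch only (not gbnfgen's real API). It maps a hand-written
// field spec, standing in for `interface Person { name: string; age: number }`,
// to a llama.cpp GBNF grammar that constrains output to a matching JSON object.

type FieldType = "string" | "number" | "boolean";

// Hypothetical stand-in for the fields of a TypeScript interface.
const personFields: Record<string, FieldType> = { name: "string", age: "number" };

// GBNF fragments for each primitive; the string rule is deliberately naive
// (no escape handling) to keep the sketch short.
const primitiveRule: Record<FieldType, string> = {
  string: `"\\"" [^"]* "\\""`,
  number: `"-"? [0-9]+ ("." [0-9]+)?`,
  boolean: `"true" | "false"`,
};

function toGbnf(fields: Record<string, FieldType>): string {
  const names = Object.keys(fields);
  // One `<field>-value` rule per field, referenced from the root object rule.
  const valueRules = names.map(
    (n) => `${n}-value ::= ${primitiveRule[fields[n]]}`
  );
  const members = names
    .map((n) => `"\\"${n}\\":" ws ${n}-value`)
    .join(' "," ws ');
  return [
    `root ::= "{" ws ${members} ws "}"`,
    ...valueRules,
    `ws ::= [ \\t\\n]*`,
  ].join("\n");
}

console.log(toGbnf(personFields));
// root ::= "{" ws "\"name\":" ws name-value "," ws "\"age\":" ws age-value ws "}"
// name-value ::= "\"" [^"]* "\""
// age-value ::= "-"? [0-9]+ ("." [0-9]+)?
// ws ::= [ \t\n]*
```

A real generator also has to handle nested interfaces, arrays, enums, and optional fields, which is where a dedicated tool like gbnfgen earns its keep over a hand-rolled mapping.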
Alternatives and similar repositories for gbnfgen
Users interested in gbnfgen are comparing it to the libraries listed below.
- Generates grammar files from TypeScript for LLM generation ☆38 · Updated last year
- JS tokenizer for LLaMA 1 and 2 ☆362 · Updated last year
- Converts JSON Schema to GBNF grammar for use with llama.cpp ☆55 · Updated 2 years ago
- WebGPU LLM inference tuned by hand ☆151 · Updated 2 years ago
- A JavaScript library (with TypeScript types) for parsing the metadata of GGML-based GGUF files ☆51 · Updated last year
- GPU-accelerated client-side embeddings for vector search, RAG, etc. ☆65 · Updated 2 years ago
- SemanticFinder - frontend-only live semantic search with transformers.js ☆308 · Updated 8 months ago
- LLM-based code completion engine ☆190 · Updated 10 months ago
- An HTTP serving framework by Banana ☆101 · Updated last year
- Enforce structured output from LLMs 100% of the time ☆248 · Updated last year
- Extends the original llama.cpp repo to support the RedPajama model ☆118 · Updated last year
- Command-line script for running inference with models such as MPT-7B-Chat ☆100 · Updated 2 years ago
- Constrained decoding for LLMs against JSON Schema ☆329 · Updated 2 years ago
- WebAssembly (Wasm) build and bindings for llama.cpp ☆284 · Updated last year
- JavaScript bindings for the ggml-js library ☆44 · Updated 3 weeks ago
- Tensor library for machine learning ☆274 · Updated 2 years ago
- LLaMA retrieval plugin script using OpenAI's retrieval plugin ☆324 · Updated 2 years ago
- Run GGML models with Kubernetes ☆175 · Updated last year
- Use context-free grammars with an LLM (see the sketch after this list) ☆175 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers, with QLoRA ☆124 · Updated 2 years ago
- ☆114 · Updated last year
- Add local LLMs to your web or Electron apps! Powered by Rust + WebGPU ☆106 · Updated 2 years ago
- ☆135 · Updated 2 years ago
- An implementation of Self-Extend, which expands the context window via grouped attention ☆119 · Updated last year
- A client-side vector search library that can embed, store, search, and cache vectors. Works in the browser and Node. It outperforms OpenA… ☆222 · Updated last year
- Exact structure out of any language model completion ☆515 · Updated 2 years ago
- Falcon LLM ggml framework with CPU and GPU support ☆248 · Updated last year
- JS tokenizer for LLaMA 3 and LLaMA 3.1 ☆117 · Updated 4 months ago
- Browser-compatible JS library for running language models ☆232 · Updated 3 years ago
- Iterate quickly with llama.cpp hot reloading; use the llama.cpp bindings with bun.sh ☆50 · Updated 2 years ago
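Several of the entries above converge on the same workflow: produce a grammar (from TypeScript types, JSON Schema, or by hand) and hand it to llama.cpp so that decoding is constrained to the structure you want. Below is a hedged sketch of the consuming side, assuming a locally running llama.cpp server and the /completion request and response fields documented in its README; the grammar, prompt, and server address are placeholders, and field names should be checked against the build you are running.

```typescript
// Minimal sketch, assuming a llama.cpp server running locally (for example,
// started with `llama-server -m model.gguf --port 8080`). The /completion
// endpoint and the prompt / grammar / n_predict / content fields follow the
// llama.cpp server README; verify them against your build's documentation.

const grammar = `
root ::= "{" ws "\\"name\\":" ws string "," ws "\\"age\\":" ws number ws "}"
string ::= "\\"" [^"]* "\\""
number ::= "-"? [0-9]+
ws ::= [ \\t\\n]*
`;

async function completeWithGrammar(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:8080/completion", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt, grammar, n_predict: 128, temperature: 0 }),
  });
  if (!res.ok) throw new Error(`llama.cpp server returned ${res.status}`);
  const data = (await res.json()) as { content: string };
  return data.content; // constrained by the grammar to the JSON shape above
}

completeWithGrammar("Extract the person mentioned: Ada Lovelace, age 36.\nJSON: ")
  .then(console.log)
  .catch(console.error);
```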