IntrinsicLabsAI / gbnfgen
TypeScript generator for llama.cpp grammars, built directly from TypeScript interfaces
☆137 · Updated 11 months ago
Alternatives and similar repositories for gbnfgen
Users interested in gbnfgen are comparing it to the libraries listed below.
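For context on what this family of tools does, here is a minimal, hypothetical sketch of the interface-to-grammar idea: describe an object's fields in TypeScript and emit a llama.cpp GBNF rule set that constrains generation to matching JSON. The `toGbnf` helper and its rule names are illustrative assumptions, not gbnfgen's actual API or output format.

```typescript
// Hypothetical sketch; gbnfgen's real API and rule naming may differ.
type FieldType = "string" | "number" | "boolean";

interface FieldSpec {
  name: string;
  type: FieldType;
}

// Map a primitive field type to a GBNF rule name.
const TYPE_RULE: Record<FieldType, string> = {
  string: "string",
  number: "number",
  boolean: "boolean",
};

// Emit a GBNF grammar whose root rule matches a JSON object
// with exactly the given fields, in order.
function toGbnf(fields: FieldSpec[]): string {
  const body = fields
    .map((f) => `"\\"${f.name}\\":" ws ${TYPE_RULE[f.type]}`)
    .join(' "," ws ');
  return [
    `root ::= "{" ws ${body} ws "}"`,
    `string ::= "\\"" [^"]* "\\""`,
    `number ::= "-"? [0-9]+ ("." [0-9]+)?`,
    `boolean ::= "true" | "false"`,
    `ws ::= [ \\t\\n]*`,
  ].join("\n");
}

console.log(toGbnf([
  { name: "title", type: "string" },
  { name: "year", type: "number" },
]));
```

The appeal of driving this from TypeScript interfaces (rather than writing GBNF by hand) is that the grammar stays in sync with the types the application already parses the model's output into.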
- Generates grammar files from TypeScript for LLM generation ☆38 · Updated last year
- An implementation of Self-Extend, to expand the context window via grouped attention ☆119 · Updated last year
- Converts JSON-Schema to GBNF grammar to use with llama.cpp ☆55 · Updated last year
- JS tokenizer for LLaMA 1 and 2 ☆353 · Updated 11 months ago
- Run GGML models with Kubernetes ☆173 · Updated last year
- ☆135 · Updated last year
- A JavaScript library (with TypeScript types) to parse metadata of GGML-based GGUF files ☆47 · Updated 10 months ago
- LLM-based code completion engine ☆194 · Updated 5 months ago
- LLaMA retrieval plugin script using OpenAI's retrieval plugin ☆324 · Updated 2 years ago
- Add local LLMs to your web or Electron apps! Powered by Rust + WebGPU ☆102 · Updated 2 years ago
- Plug-and-play GBNF compiler for llama.cpp ☆25 · Updated last year
- Constrained decoding for LLMs against JSON Schema ☆327 · Updated 2 years ago
- Use context-free grammars with an LLM ☆170 · Updated last year
- GPU-accelerated client-side embeddings for vector search, RAG, etc. ☆66 · Updated last year
- WebGPU LLM inference tuned by hand ☆151 · Updated last year
- ☆157 · Updated 11 months ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers (QLoRA) ☆123 · Updated 2 years ago
- Merge Transformers language models by use of gradient parameters ☆206 · Updated 10 months ago
- Simple embedding-to-text model trained on a small subset of Wikipedia sentences ☆152 · Updated last year
- Generate synthetic data using OpenAI, MistralAI, or AnthropicAI ☆222 · Updated last year
- Command-line script for inference from models such as MPT-7B-Chat ☆101 · Updated last year
- Enforce structured output from LLMs 100% of the time ☆249 · Updated 11 months ago
- Visual Studio Code extension for WizardCoder ☆148 · Updated last year
- Extends the original llama.cpp repo to support the RedPajama model ☆117 · Updated 9 months ago
- LLaVA server (llama.cpp) ☆180 · Updated last year
- ☆40 · Updated 2 years ago
- Iterate quickly with llama.cpp hot reloading; use the llama.cpp bindings with bun.sh ☆49 · Updated last year
- An implementation of bucketMul LLM inference ☆217 · Updated 11 months ago
- "The one who calls upon functions": a function-calling language model ☆36 · Updated last year
- A guidance compatibility layer for llama-cpp-python ☆35 · Updated last year
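Several entries above take the same approach from the JSON Schema side instead of TypeScript interfaces. A minimal sketch of that idea, assuming a flat object schema with only primitive properties (real converters also handle nesting, arrays, enums, and required vs. optional fields):

```typescript
// Hypothetical sketch of a JSON-Schema-to-GBNF converter for a flat
// object schema; not the API of any listed library.
interface FlatSchema {
  type: "object";
  properties: Record<string, { type: "string" | "number" | "boolean" }>;
}

// Emit a GBNF grammar matching a JSON object with the schema's
// properties, each constrained to its declared primitive type.
function schemaToGbnf(schema: FlatSchema): string {
  const entries = Object.entries(schema.properties).map(
    ([key, prop]) => `"\\"${key}\\":" ${prop.type}`
  );
  return [
    `root ::= "{" ${entries.join(' "," ')} "}"`,
    `string ::= "\\"" [^"]* "\\""`,
    `number ::= "-"? [0-9]+ ("." [0-9]+)?`,
    `boolean ::= "true" | "false"`,
  ].join("\n");
}
```

The resulting grammar text would be passed to llama.cpp as a constraint, so the model can only emit output that parses against the original schema.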