IntrinsicLabsAI / gbnfgen
Generates llama.cpp GBNF grammars directly from TypeScript interfaces
☆ 135 · Updated 9 months ago
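To illustrate the kind of conversion gbnfgen performs, here is a minimal hand-rolled sketch that turns a record of TypeScript-style primitive field types into a GBNF grammar matching the corresponding JSON object. The `interfaceToGBNF` helper and its rule names are hypothetical, written only for illustration; they are not gbnfgen's actual API.

```typescript
// Illustrative sketch of TypeScript-interface -> GBNF conversion
// (the transformation gbnfgen automates). All names below are
// hypothetical, not gbnfgen's real API.

type FieldType = "string" | "number" | "boolean";

// Map each primitive TypeScript type to a GBNF non-terminal.
const terminal: Record<FieldType, string> = {
  string: "string",
  number: "number",
  boolean: "boolean",
};

// Build a grammar whose root rule matches a JSON object with
// exactly the given fields, in declaration order.
function interfaceToGBNF(fields: Record<string, FieldType>): string {
  const members = Object.entries(fields)
    .map(([name, ty]) => `"\\"${name}\\":" ws ${terminal[ty]}`)
    .join(' "," ws ');
  return [
    `root ::= "{" ws ${members} ws "}"`,
    `string ::= "\\"" [^"]* "\\""`,
    `number ::= [0-9]+ ("." [0-9]+)?`,
    `boolean ::= "true" | "false"`,
    `ws ::= [ \\t\\n]*`,
  ].join("\n");
}

// A grammar constraining output to {"name": <string>, "age": <number>}.
const grammar = interfaceToGBNF({ name: "string", age: "number" });
console.log(grammar);
```

The real library handles nested interfaces, arrays, enums, and escaping far more carefully; this sketch only conveys the core idea of compiling a type declaration into decoding constraints.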
Alternatives and similar repositories for gbnfgen:
Users interested in gbnfgen are comparing it to the libraries listed below.
- Generates grammar files from TypeScript for LLM generation · ☆ 37 · Updated last year
- Converts JSON Schema to GBNF grammar for use with llama.cpp · ☆ 52 · Updated last year
- LLM-based code completion engine · ☆ 185 · Updated 3 months ago
- An implementation of Self-Extend, which expands the context window via grouped attention · ☆ 119 · Updated last year
- JS tokenizer for LLaMA 1 and 2 · ☆ 351 · Updated 10 months ago
- WebGPU LLM inference tuned by hand · ☆ 149 · Updated last year
- ☆ 135 · Updated last year
- Merge Transformers language models using gradient parameters · ☆ 208 · Updated 8 months ago
- A JavaScript library (with TypeScript types) to parse metadata of GGML-based GGUF files · ☆ 47 · Updated 9 months ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers (QLoRA) · ☆ 123 · Updated last year
- ☆ 154 · Updated 9 months ago
- Command-line script for running inference with models such as MPT-7B-Chat · ☆ 101 · Updated last year
- The one who calls upon functions - a function-calling language model · ☆ 36 · Updated last year
- An implementation of bucketMul LLM inference · ☆ 217 · Updated 10 months ago
- Plug-and-play GBNF compiler for llama.cpp · ☆ 25 · Updated last year
- SemanticFinder - frontend-only live semantic search with transformers.js · ☆ 268 · Updated last month
- ☆ 40 · Updated 2 years ago
- Extends the original llama.cpp repo to support the RedPajama model · ☆ 117 · Updated 8 months ago
- Constrained decoding for LLMs against JSON Schema · ☆ 327 · Updated last year
- GPU-accelerated client-side embeddings for vector search, RAG, etc. · ☆ 66 · Updated last year
- Add local LLMs to your web or Electron apps, powered by Rust + WebGPU · ☆ 102 · Updated last year
- Low-rank adapter extraction for fine-tuned Transformers models · ☆ 173 · Updated last year
- An independent implementation of 'Layer-Selective Rank Reduction' · ☆ 237 · Updated 11 months ago
- LLaVA server (llama.cpp) · ☆ 180 · Updated last year
- Generate synthetic data using OpenAI, MistralAI, or AnthropicAI · ☆ 223 · Updated last year
- Falcon LLM ggml framework with CPU and GPU support · ☆ 246 · Updated last year
- A guidance compatibility layer for llama-cpp-python · ☆ 34 · Updated last year
- ☆ 31 · Updated last year
- Full fine-tuning of large language models without large memory requirements · ☆ 94 · Updated last year
- Use context-free grammars with an LLM · ☆ 168 · Updated last year