IntrinsicLabsAI / grammar-builder
Generates grammar files from TypeScript for LLM generation
☆38 · Updated last year
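As a rough illustration of what such a tool does (a hand-written sketch, not grammar-builder's actual output; the interface and rule names below are made up for the example), a TypeScript interface can be mapped to a llama.cpp GBNF grammar that constrains the model to emit matching JSON:

```typescript
// Illustrative sketch only: grammar-builder's real rule naming and output are
// not shown here; this just demonstrates the general TypeScript-to-GBNF mapping.

// A TypeScript interface describing the shape we want the LLM to emit.
interface Task {
  title: string;
  done: boolean;
}

// A GBNF grammar (llama.cpp grammar format) constraining output to JSON
// objects that match the Task interface above.
const taskGrammar = String.raw`
root    ::= "{" ws "\"title\"" ws ":" ws string "," ws "\"done\"" ws ":" ws boolean ws "}"
string  ::= "\"" [^"\\]* "\""
boolean ::= "true" | "false"
ws      ::= [ \t\n]*
`;

console.log(taskGrammar);
```

A grammar like this can be passed to llama.cpp's grammar-constrained sampling so the model can only produce output that parses as a `Task` object.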
Alternatives and similar repositories for grammar-builder
Users interested in grammar-builder are comparing it to the libraries listed below.
- TypeScript generator for llama.cpp Grammar directly from TypeScript interfaces ☆141 · Updated last year
- The one who calls upon functions - Function-Calling Language Model ☆36 · Updated 2 years ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA ☆124 · Updated 2 years ago
- ☆32 · Updated 2 years ago
- Generate Synthetic Data Using OpenAI, MistralAI or AnthropicAI ☆222 · Updated last year
- ☆54 · Updated 2 years ago
- an implementation of Self-Extend, to expand the context window via grouped attention ☆119 · Updated 2 years ago
- ☆135 · Updated 2 years ago
- ☆119 · Updated last year
- The code we currently use to fine-tune models. ☆117 · Updated last year
- ☆47 · Updated last year
- Replace expensive LLM calls with finetunes automatically ☆66 · Updated last year
- Converts JSON-Schema to GBNF grammar to use with llama.cpp ☆55 · Updated 2 years ago
- ☆24 · Updated 2 years ago
- Generate visual podcasts about novels using open source models ☆25 · Updated 2 years ago
- ☆38 · Updated last year
- Full finetuning of large language models without large memory requirements ☆94 · Updated 4 months ago
- Plug n Play GBNF Compiler for llama.cpp ☆28 · Updated 2 years ago
- Unofficial python bindings for the rust llm library. 🐍❤️🦀 ☆76 · Updated 2 years ago
- ☆23 · Updated last year
- ☆166 · Updated 5 months ago
- Command-line script for inferencing from models such as falcon-7b-instruct ☆75 · Updated 2 years ago
- Deploy your GGML models to HuggingFace Spaces with Docker and gradio ☆38 · Updated 2 years ago
- utilities for loading and running text embeddings with onnx ☆45 · Updated 5 months ago
- A Javascript library (with Typescript types) to parse metadata of GGML based GGUF files. ☆51 · Updated last year
- ☆35 · Updated 2 years ago
- Command-line script for inferencing from models such as MPT-7B-Chat ☆100 · Updated 2 years ago
- A guidance compatibility layer for llama-cpp-python ☆36 · Updated 2 years ago
- GPU accelerated client-side embeddings for vector search, RAG etc. ☆65 · Updated 2 years ago
- ☆68 · Updated last year