IntrinsicLabsAI / gbnfgen
TypeScript generator for llama.cpp Grammar directly from TypeScript interfaces
☆ 126 · Updated 2 months ago
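For context on what the tool emits: llama.cpp constrains generation with grammars in its GBNF format. The sketch below is hand-written to illustrate the kind of mapping gbnfgen performs from a TypeScript interface to a JSON-constraining grammar; the rule names and exact shape are assumptions, not the tool's actual output.

```
# Hypothetical TypeScript input:
#   interface Person { name: string; age: number; }
# A GBNF grammar constraining output to matching JSON might look like:
root   ::= "{" ws "\"name\":" ws string "," ws "\"age\":" ws number ws "}"
string ::= "\"" [^"]* "\""
number ::= "-"? [0-9]+ ("." [0-9]+)?
ws     ::= [ \t\n]*
```

Passing such a grammar to llama.cpp (e.g. via `--grammar-file`) restricts sampling to strings the grammar accepts.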
Related projects:
- Generates grammar files from TypeScript for LLM generation ☆ 32 · Updated 7 months ago
- Converts JSON-Schema to GBNF grammar for use with llama.cpp ☆ 49 · Updated 9 months ago
- An implementation of Self-Extend, to expand the context window via grouped attention ☆ 117 · Updated 8 months ago
- Generate synthetic data using OpenAI, MistralAI or AnthropicAI ☆ 223 · Updated 4 months ago
- Run GGML models with Kubernetes ☆ 172 · Updated 9 months ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers (QLoRA) ☆ 123 · Updated last year
- Merge Transformers language models by use of gradient parameters ☆ 193 · Updated last month
- Steer LLM outputs towards a certain topic/subject and enhance response capabilities using activation engineering by adding steering vectors ☆ 192 · Updated 4 months ago
- WebGPU LLM inference tuned by hand ☆ 145 · Updated last year
- Full finetuning of large language models without large memory requirements ☆ 94 · Updated 8 months ago
- JS tokenizer for LLaMA 1 and 2 ☆ 330 · Updated 2 months ago
- Command-line script for running inference with models such as MPT-7B-Chat ☆ 102 · Updated last year
- GPU-accelerated client-side embeddings for vector search, RAG, etc. ☆ 63 · Updated 9 months ago
- LLM-based code completion engine ☆ 172 · Updated last year
- Add local LLMs to your Web or Electron apps! Powered by Rust + WebGPU ☆ 102 · Updated last year
- 🤖 Headless IDE for AI agents ☆ 110 · Updated this week
- WebAssembly binding for llama.cpp, enabling in-browser LLM inference ☆ 342 · Updated last week
- A simple Python sandbox for helpful LLM data agents ☆ 143 · Updated 3 months ago
- An HTTP serving framework by Banana ☆ 97 · Updated 9 months ago
- Low-rank adapter extraction for fine-tuned transformer models ☆ 154 · Updated 4 months ago
- A guidance compatibility layer for llama-cpp-python ☆ 35 · Updated last year
- Extend the original llama.cpp repo to support the RedPajama model ☆ 117 · Updated 2 weeks ago
- A fast batching API to serve LLM models ☆ 172 · Updated 4 months ago
- Formatron empowers everyone to control the format of language models' output with minimal overhead ☆ 116 · Updated this week