kayvr / token-hawk
WebGPU LLM inference tuned by hand
☆151 · Updated 2 years ago
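token-hawk runs LLM inference directly in the browser through hand-tuned WebGPU kernels. As a rough illustration only (this is not code from the repository; the function name is made up and the `@webgpu/types` definitions are assumed), the TypeScript sketch below shows the device setup that any browser-side WebGPU inference path builds on:

```typescript
// Hypothetical, minimal sketch (not from the token-hawk codebase):
// acquiring a WebGPU device, the entry point for browser-side inference.
async function initWebGPU(): Promise<GPUDevice> {
  // navigator.gpu is only defined in WebGPU-capable browsers.
  if (!("gpu" in navigator)) {
    throw new Error("WebGPU is not supported in this browser.");
  }
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) {
    throw new Error("No suitable GPU adapter found.");
  }
  // The GPUDevice is what compute pipelines, buffers, and shader modules
  // (e.g. hand-tuned matmul kernels) are created from.
  return adapter.requestDevice();
}
```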
Alternatives and similar repositories for token-hawk
Users interested in token-hawk are comparing it to the libraries listed below.
- An implementation of bucketMul LLM inference (☆224, updated last year)
- Full fine-tuning of large language models without large memory requirements (☆94, updated 4 months ago)
- Landmark Attention: Random-Access Infinite Context Length for Transformers, with QLoRA (☆124, updated 2 years ago)
- Extends the original llama.cpp repo to support the RedPajama model (☆118, updated last year)
- Falcon LLM ggml framework with CPU and GPU support (☆249, updated 2 years ago)
- Command-line script for running inference with models such as MPT-7B-Chat (☆100, updated 2 years ago)
- Generates llama.cpp grammars directly from TypeScript interfaces (☆141, updated last year)
- LLaVA server (llama.cpp) (☆183, updated 2 years ago)
- LLM-based code completion engine (☆190, updated last year)
- Command-line script for running inference with models such as falcon-7b-instruct (☆75, updated 2 years ago)
- SoTA Transformers with a C backend for fast inference on your CPU (☆311, updated 2 years ago)
- Run inference on the replit-3B code-instruct model using the CPU (☆160, updated 2 years ago)
- Run GGML models with Kubernetes (☆175, updated 2 years ago)
- Generates grammar files from TypeScript for LLM generation (☆38, updated last year)
- Stop messing around with finicky sampling parameters and just use DRµGS! (☆360, updated last year)
- Drop-in replacement for OpenAI, but with open models (☆154, updated 2 years ago)
- JS tokenizer for LLaMA 1 and 2 (☆363, updated last year)
- Experimental LLM inference UX to aid in creative writing (☆128, updated last year)
- An implementation of Self-Extend, to expand the context window via grouped attention (☆119, updated 2 years ago)
- Tensor library for machine learning (☆273, updated 2 years ago)
- Embeddings-focused small version of the Llama NLP model (☆107, updated 2 years ago)
- Automated prompting and scoring framework to evaluate LLMs using updated human-knowledge prompts (☆109, updated 2 years ago)
- Mistral 7B playing DOOM (☆139, updated last year)
- GRDN.AI app for garden optimization (☆69, updated 2 months ago)
- The code we currently use to fine-tune models (☆117, updated last year)
- Python bindings for ggml (☆147, updated last year)
- GPU-accelerated client-side embeddings for vector search, RAG, etc. (☆65, updated 2 years ago)
- Preprint: Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning (☆28, updated 2 years ago)
- Add local LLMs to your Web or Electron apps! Powered by Rust + WebGPU (☆106, updated 2 years ago)
- LLaMA retrieval plugin script using OpenAI's retrieval plugin (☆323, updated 2 years ago)