kayvr / token-hawk
WebGPU LLM inference tuned by hand
☆151 · Updated 2 years ago
Alternatives and similar repositories for token-hawk
Users interested in token-hawk are comparing it to the libraries listed below.
- Extends the original llama.cpp repo to support the RedPajama model. ☆118 · Updated last year
- An implementation of bucketMul LLM inference. ☆223 · Updated last year
- LLaVA server (llama.cpp). ☆183 · Updated 2 years ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers (QLoRA). ☆124 · Updated 2 years ago
- Command-line script for running inference with models such as MPT-7B-Chat. ☆100 · Updated 2 years ago
- SoTA Transformers with a C backend for fast inference on your CPU. ☆311 · Updated 2 years ago
- Full finetuning of large language models without large memory requirements. ☆94 · Updated 3 months ago
- TypeScript generator for llama.cpp grammars directly from TypeScript interfaces. ☆141 · Updated last year
- Command-line script for running inference with models such as falcon-7b-instruct. ☆75 · Updated 2 years ago
- ☆40 · Updated 2 years ago
- LLM-based code completion engine. ☆190 · Updated 11 months ago
- An implementation of Self-Extend, which expands the context window via grouped attention. ☆119 · Updated 2 years ago
- Falcon LLM ggml framework with CPU and GPU support. ☆249 · Updated last year
- Tensor library for machine learning. ☆274 · Updated 2 years ago
- Embeddings-focused small version of the Llama NLP model. ☆107 · Updated 2 years ago
- Mistral 7B playing DOOM. ☆138 · Updated last year
- Run GGML models with Kubernetes. ☆175 · Updated 2 years ago
- Drop-in replacement for OpenAI, but with open models. ☆154 · Updated 2 years ago
- Run inference on the replit-3B code-instruct model using the CPU. ☆160 · Updated 2 years ago
- tinygrad port of the RWKV large language model. ☆45 · Updated 10 months ago
- ☆112 · Updated 2 years ago
- Generates grammar files from TypeScript for LLM generation. ☆38 · Updated last year
- Stop messing around with finicky sampling parameters and just use DRµGS! ☆360 · Updated last year
- JS tokenizer for LLaMA 1 and 2. ☆362 · Updated last year
- The GeoV model is a large language model designed by Georges Harik and uses Rotary Positional Embeddings with Relative distances (RoPER)… ☆121 · Updated 2 years ago
- C++ implementation for 💫StarCoder. ☆459 · Updated 2 years ago
- Preprint: Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning. ☆28 · Updated last year
- Experimental LLM inference UX to aid in creative writing. ☆127 · Updated last year
- Inference of Mamba models in pure C. ☆195 · Updated last year
- Python bindings for ggml. ☆146 · Updated last year