kayvr / token-hawk
WebGPU LLM inference tuned by hand
☆150 · Updated 2 years ago
Alternatives and similar repositories for token-hawk
Users who are interested in token-hawk are comparing it to the libraries listed below.
- Extends the original llama.cpp repo to support the RedPajama model. ☆118 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers, with QLoRA. ☆123 · Updated 2 years ago
- Full finetuning of large language models without large memory requirements. ☆93 · Updated last month
- Command-line script for running inference with models such as MPT-7B-Chat. ☆99 · Updated 2 years ago
- LLaVA server (llama.cpp). ☆183 · Updated 2 years ago
- An implementation of bucketMul LLM inference. ☆223 · Updated last year
- Command-line script for running inference with models such as falcon-7b-instruct. ☆74 · Updated 2 years ago
- SoTA Transformers with a C backend for fast inference on your CPU. ☆308 · Updated last year
- Falcon LLM ggml framework with CPU and GPU support. ☆247 · Updated last year
- LLM-based code completion engine. ☆190 · Updated 9 months ago
- ☆40 · Updated 2 years ago
- Run inference on the replit-3B code-instruct model using the CPU. ☆159 · Updated 2 years ago
- TypeScript generator that produces llama.cpp grammars directly from TypeScript interfaces. ☆140 · Updated last year
- Embeddings-focused small version of the Llama NLP model. ☆106 · Updated 2 years ago
- GRDN.AI app for garden optimization. ☆70 · Updated last year
- Tensor library for machine learning. ☆273 · Updated 2 years ago
- Mistral 7B playing DOOM. ☆138 · Updated last year
- Inference code for mixtral-8x7b-32kseqlen. ☆102 · Updated last year
- Drop-in replacement for OpenAI, but with open models. ☆153 · Updated 2 years ago
- Python bindings for ggml. ☆146 · Updated last year
- ☆112 · Updated last year
- An implementation of Self-Extend, to expand the context window via grouped attention. ☆119 · Updated last year
- Generates grammar files from TypeScript for LLM generation. ☆38 · Updated last year
- tinygrad port of the RWKV large language model. ☆44 · Updated 8 months ago
- The code we currently use to fine-tune models. ☆117 · Updated last year
- Experimental LLM inference UX to aid in creative writing. ☆125 · Updated 11 months ago
- Code for the paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot", with a LLaMA implementation. ☆71 · Updated 2 years ago
- Automated prompting and scoring framework to evaluate LLMs using updated human knowledge prompts. ☆108 · Updated 2 years ago
- Add local LLMs to your web or Electron apps! Powered by Rust + WebGPU. ☆106 · Updated 2 years ago
- Modified Stanford Alpaca trainer for training Replit's code model. ☆41 · Updated 2 years ago