kayvr / token-hawk
WebGPU LLM inference tuned by hand
☆151 · Updated 2 years ago
Alternatives and similar repositories for token-hawk
Users interested in token-hawk are comparing it to the libraries listed below.
- Landmark Attention: Random-Access Infinite Context Length for Transformers (QLoRA) ☆123 · Updated 2 years ago
- Extends the original llama.cpp repo to support the RedPajama model ☆118 · Updated 11 months ago
- LLM-based code completion engine ☆194 · Updated 6 months ago
- Command-line script for running inference with models such as MPT-7B-Chat ☆100 · Updated 2 years ago
- Drop-in replacement for OpenAI, but with open models ☆152 · Updated 2 years ago
- Full finetuning of large language models without large memory requirements ☆94 · Updated last year
- Generates llama.cpp grammar files directly from TypeScript interfaces ☆139 · Updated last year
- Falcon LLM ggml framework with CPU and GPU support ☆246 · Updated last year
- An implementation of bucketMul LLM inference ☆221 · Updated last year
- SoTA Transformers with a C backend for fast inference on your CPU ☆309 · Updated last year
- LLaVA server (llama.cpp) ☆181 · Updated last year
- GRDN.AI app for garden optimization ☆70 · Updated last year
- ☆40 · Updated 2 years ago
- Automated prompting and scoring framework to evaluate LLMs using updated human-knowledge prompts ☆110 · Updated 2 years ago
- Tensor library for machine learning ☆275 · Updated 2 years ago
- Add local LLMs to your web or Electron apps! Powered by Rust + WebGPU ☆102 · Updated 2 years ago
- Generates grammar files from TypeScript for LLM generation ☆38 · Updated last year
- The GeoV model is a large language model designed by Georges Harik that uses Rotary Positional Embeddings with Relative distances (RoPER)… ☆121 · Updated 2 years ago
- Code for the paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot," with a LLaMA implementation ☆71 · Updated 2 years ago
- C++ implementation for 💫StarCoder ☆456 · Updated last year
- Run inference on the replit-3B code-instruct model using a CPU ☆157 · Updated 2 years ago
- LLaMA retrieval plugin script using OpenAI's retrieval plugin ☆323 · Updated 2 years ago
- Stop messing around with finicky sampling parameters and just use DRµGS! ☆351 · Updated last year
- Embeddings-focused small version of the Llama NLP model ☆103 · Updated 2 years ago
- Python bindings for ggml ☆143 · Updated 11 months ago
- Command-line script for running inference with models such as falcon-7b-instruct ☆75 · Updated 2 years ago
- Run GGML models with Kubernetes ☆173 · Updated last year
- ☆111 · Updated last year
- GPU-accelerated client-side embeddings for vector search, RAG, etc. ☆66 · Updated last year
- Mistral7B playing DOOM ☆133 · Updated last year