nomic-ai / kompute
General-purpose GPU compute framework built on Vulkan to support thousands of cross-vendor graphics cards (AMD, Qualcomm, NVIDIA & friends). Blazing fast, mobile-enabled, asynchronous, and optimized for advanced GPU data processing use cases. Backed by the Linux Foundation.
☆45 · Updated 4 months ago
Alternatives and similar repositories for kompute:
Users interested in kompute are comparing it to the libraries listed below.
- GGML implementation of BERT model with Python bindings and quantization. ☆53 · Updated last year
- Inference of Mamba models in pure C ☆183 · Updated 11 months ago
- Course Project for COMP4471 on RWKV ☆17 · Updated last year
- Experiments with BitNet inference on CPU ☆53 · Updated 10 months ago
- llama.cpp fork with additional SOTA quants and improved performance ☆155 · Updated this week
- Python bindings for ggml ☆137 · Updated 5 months ago
- 1.58-bit LLaMa model ☆82 · Updated 10 months ago
- RWKV in nanoGPT style ☆187 · Updated 8 months ago
- A minimalistic C++ Jinja templating engine for LLM chat templates ☆120 · Updated this week
- Port of Suno AI's Bark in C/C++ for fast inference ☆55 · Updated 10 months ago
- llama.cpp fork used by GPT4All ☆52 · Updated this week
- Asynchronous/distributed speculative evaluation for llama3 ☆37 · Updated 6 months ago
- Testing LLM reasoning abilities with family relationship quizzes. ☆57 · Updated 3 weeks ago
- ☆53 · Updated 7 months ago
- Train your own small BitNet model ☆64 · Updated 4 months ago
- Editor with LLM generation tree exploration ☆62 · Updated last week
- cortex.llamacpp is a high-efficiency C++ inference engine for edge computing. It is a dynamic library that can be loaded by any server a… ☆36 · Updated this week
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs. ☆39 · Updated 8 months ago
- Embeddings-focused small version of the Llama NLP model ☆103 · Updated last year
- ggml implementation of embedding models, including SentenceTransformer and BGE ☆54 · Updated last year
- Easy-to-use, high-performance knowledge distillation for LLMs ☆48 · Updated last month
- Stable Diffusion in pure C/C++ ☆60 · Updated last year
- AirLLM 70B inference with a single 4GB GPU ☆12 · Updated 6 months ago
- Inference Llama 2 in one file of pure C++ ☆81 · Updated last year
- Local ML voice chat using high-end models. ☆159 · Updated this week
- ☆65 · Updated 8 months ago
- tinygrad port of the RWKV large language model. ☆44 · Updated 8 months ago
- Port of Microsoft's BioGPT in C/C++ using ggml ☆87 · Updated last year