nomic-ai / kompute
General-purpose GPU compute framework built on Vulkan, supporting thousands of cross-vendor graphics cards (AMD, Qualcomm, NVIDIA & friends). Blazing fast, mobile-enabled, asynchronous, and optimized for advanced GPU data-processing use cases. Backed by the Linux Foundation.
☆46 · Updated 2 months ago
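For context, kompute's C++ API centers on a manager, device tensors, and recorded operation sequences. Below is a minimal sketch of that flow, modeled on the upstream README-style API; the `compileSource` helper is a user-supplied placeholder (kompute leaves GLSL-to-SPIR-V compilation to the caller), and exact signatures may vary by version.

```cpp
// Minimal kompute usage sketch (assumption: README-style API;
// signatures may differ across versions).
#include <kompute/Kompute.hpp>
#include <memory>
#include <string>
#include <vector>

// Placeholder (assumption): compile GLSL compute-shader source to
// SPIR-V, e.g. via glslang; kompute leaves this helper to the user.
std::vector<uint32_t> compileSource(const std::string& source);

int main() {
    kp::Manager mgr;  // selects a default Vulkan device and queue

    // Host data is wrapped in device tensors managed by kompute
    auto tensorIn  = mgr.tensor({1.0f, 2.0f, 3.0f});
    auto tensorOut = mgr.tensor({0.0f, 0.0f, 0.0f});
    std::vector<std::shared_ptr<kp::Tensor>> params = {tensorIn, tensorOut};

    // A trivial kernel that doubles each input element;
    // each tensor is bound as its own storage buffer
    std::string shader = R"(
        #version 450
        layout(local_size_x = 1) in;
        layout(set = 0, binding = 0) buffer bufIn  { float in_data[]; };
        layout(set = 0, binding = 1) buffer bufOut { float out_data[]; };
        void main() {
            uint i = gl_GlobalInvocationID.x;
            out_data[i] = 2.0 * in_data[i];
        }
    )";

    // Workgroup/constants left at defaults for brevity
    auto algo = mgr.algorithm(params, compileSource(shader));

    mgr.sequence()
        ->record<kp::OpTensorSyncDevice>(params)  // host -> GPU
        ->record<kp::OpAlgoDispatch>(algo)        // run the shader
        ->record<kp::OpTensorSyncLocal>(params)   // GPU -> host
        ->eval();                                 // submit and wait

    // tensorOut->vector() should now hold {2, 4, 6}
}
```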
Alternatives and similar repositories for kompute
Users interested in kompute are comparing it to the libraries listed below.
- Prepare for DeepSeek R1 inference: Benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code. ☆72 · Updated 3 months ago
- A minimalistic C++ Jinja templating engine for LLM chat templates ☆138 · Updated last week
- Python bindings for ggml ☆140 · Updated 8 months ago
- GGML implementation of BERT model with Python bindings and quantization. ☆56 · Updated last year
- Experiments with BitNet inference on CPU ☆55 · Updated last year
- Inference of Mamba models in pure C ☆188 · Updated last year
- Course Project for COMP4471 on RWKV ☆17 · Updated last year
- llama.cpp fork used by GPT4All ☆55 · Updated 2 months ago
- Port of Microsoft's BioGPT in C/C++ using ggml ☆88 · Updated last year
- Embeddings focused small version of Llama NLP model ☆104 · Updated 2 years ago
- Lightweight Llama 3 8B Inference Engine in CUDA C ☆47 · Updated last month
- Onboarding documentation source for the AMD Ryzen™ AI Software Platform. The AMD Ryzen™ AI Software Platform enables developers to take… ☆63 · Updated this week
- GPT2 implementation in C++ using Ort ☆26 · Updated 4 years ago
- Thin wrapper around GGML to make life easier ☆29 · Updated this week
- 1.58-bit LLaMa model ☆81 · Updated last year
- Asynchronous/distributed speculative evaluation for llama3 ☆39 · Updated 9 months ago
- Inference of Llama/Llama2/Llama3 models in NumPy ☆20 · Updated last year
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆154 · Updated 7 months ago
- Port of Suno AI's Bark in C/C++ for fast inference ☆53 · Updated last year
- ggml implementation of embedding models including SentenceTransformer and BGE ☆57 · Updated last year
- Editor with LLM generation tree exploration ☆66 · Updated 3 months ago
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆86 · Updated this week
- ☆37 · Updated 2 years ago
- Simple high-throughput inference library ☆46 · Updated this week
- Fork of llama.cpp, extended for GPT-NeoX, RWKV-v4, and Falcon models ☆29 · Updated last year
- RWKV in nanoGPT style ☆189 · Updated 11 months ago
- Inference of RWKV v7 in pure C ☆33 · Updated last month
- Inference Llama 2 in one file of pure C++ ☆83 · Updated last year
- LLM training in simple, raw C/CUDA ☆95 · Updated last year
- A fork of OpenBLAS with Armv8-A SVE (Scalable Vector Extension) support ☆17 · Updated 5 years ago