nomic-ai / kompute
General-purpose GPU compute framework built on Vulkan to support thousands of cross-vendor graphics cards (AMD, Qualcomm, NVIDIA & friends). Blazing fast, mobile-enabled, asynchronous, and optimized for advanced GPU data processing use cases. Backed by the Linux Foundation.
☆46 · Updated 2 months ago
Alternatives and similar repositories for kompute:
Users interested in kompute are comparing it to the libraries listed below.
- GGML implementation of BERT model with Python bindings and quantization ☆56 · Updated last year
- A minimalistic C++ Jinja templating engine for LLM chat templates ☆132 · Updated last week
- Port of Microsoft's BioGPT in C/C++ using ggml ☆88 · Updated last year
- llama.cpp fork used by GPT4All ☆55 · Updated 2 months ago
- Inference of Llama/Llama2/Llama3 models in NumPy ☆20 · Updated last year
- Port of Suno AI's Bark in C/C++ for fast inference ☆52 · Updated last year
- Embeddings-focused small version of the Llama NLP model ☆103 · Updated last year
- Inference Llama 2 in one file of pure C++ ☆83 · Updated last year
- Inference of Mamba models in pure C ☆187 · Updated last year
- Python bindings for ggml ☆140 · Updated 7 months ago
- Asynchronous/distributed speculative evaluation for llama3 ☆39 · Updated 8 months ago
- GPT-2 implementation in C++ using Ort ☆26 · Updated 4 years ago
- ggml implementation of embedding models including SentenceTransformer and BGE ☆56 · Updated last year
- Schola is a plugin for enabling Reinforcement Learning (RL) in Unreal Engine. It provides tools to help developers create environments, d… ☆34 · Updated 3 weeks ago
- Experiments with BitNet inference on CPU ☆53 · Updated last year
- GGUF parser in Python ☆26 · Updated 8 months ago
- Train your own small BitNet model ☆67 · Updated 6 months ago
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs ☆42 · Updated 11 months ago
- Prepare for DeepSeek R1 inference: benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code ☆71 · Updated 2 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆52 · Updated last year
- Port of Facebook's LLaMA model in C/C++ ☆20 · Updated last year
- LLM inference in C/C++ ☆71 · Updated this week
- A faithful clone of Karpathy's llama2.c (one-file inference, zero dependency) but fully functional with LLaMA 3 8B base and instruct mode… ☆125 · Updated 9 months ago
- RWKV in nanoGPT style ☆189 · Updated 10 months ago
- Course project for COMP4471 on RWKV ☆17 · Updated last year
- Web browser version of StarCoder.cpp ☆44 · Updated last year
- AMD-related optimizations for transformer models ☆75 · Updated 5 months ago
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆86 · Updated this week
- Stable Diffusion in pure C/C++ ☆58 · Updated last year
- CI for ggml and related projects ☆28 · Updated this week