nomic-ai / kompute
General-purpose GPU compute framework built on Vulkan to support thousands of cross-vendor graphics cards (AMD, Qualcomm, NVIDIA & friends). Blazing fast, mobile-enabled, asynchronous, and optimized for advanced GPU data-processing use cases. Backed by the Linux Foundation.
☆48 · Updated 3 months ago
Alternatives and similar repositories for kompute
Users interested in kompute are comparing it to the libraries listed below.
- GGML implementation of the BERT model with Python bindings and quantization. ☆55 · Updated last year
- A minimalistic C++ Jinja templating engine for LLM chat templates. ☆153 · Updated 3 weeks ago
- Experiments with BitNet inference on CPU. ☆55 · Updated last year
- Prepare for DeepSeek R1 inference: benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code. ☆72 · Updated 4 months ago
- Course project for COMP4471 on RWKV. ☆17 · Updated last year
- Port of Microsoft's BioGPT in C/C++ using ggml. ☆87 · Updated last year
- Inference of Mamba models in pure C. ☆187 · Updated last year
- Port of Suno AI's Bark in C/C++ for fast inference. ☆52 · Updated last year
- instinct.cpp provides ready-to-use alternatives to the OpenAI Assistant API and built-in utilities for developing AI agent applications (RAG,… ☆49 · Updated 11 months ago
- Python bindings for ggml. ☆141 · Updated 9 months ago
- Parameter-Efficient Sparsity Crafting from Dense to Mixture-of-Experts for instruction tuning on general tasks. ☆31 · Updated last year
- Inference of Llama/Llama2/Llama3 models in NumPy. ☆21 · Updated last year
- Inference of RWKV v7 in pure C. ☆33 · Updated 2 months ago
- Benchmark your GPU with ease. ☆19 · Updated last week
- llama.cpp fork used by GPT4All. ☆55 · Updated 3 months ago
- AMD-related optimizations for transformer models. ☆77 · Updated 7 months ago
- Embeddings-focused small version of the Llama NLP model. ☆104 · Updated 2 years ago
- RWKV in nanoGPT style. ☆191 · Updated 11 months ago
- Lightweight Llama 3 8B inference engine in CUDA C. ☆47 · Updated 2 months ago
- tinygrad port of the RWKV large language model. ☆45 · Updated 2 months ago
- Asynchronous/distributed speculative evaluation for Llama 3. ☆39 · Updated 9 months ago
- ggml implementation of embedding models, including SentenceTransformer and BGE. ☆58 · Updated last year
- Cortex.Tensorrt-LLM is a C++ inference library that can be loaded by any server at runtime. It submodules NVIDIA's TensorRT-LLM for GPU a… ☆43 · Updated 8 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs. ☆52 · Updated last year
- Web browser version of StarCoder.cpp. ☆45 · Updated last year
- Editor with LLM generation tree exploration. ☆67 · Updated 3 months ago
- Testing LLM reasoning abilities with family-relationship quizzes. ☆62 · Updated 4 months ago
- Stable Diffusion in pure C/C++. ☆58 · Updated last year
- 1.58-bit LLaMa model. ☆81 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs. ☆41 · Updated last year