nomic-ai / kompute
General-purpose GPU compute framework built on Vulkan to support thousands of cross-vendor graphics cards (AMD, Qualcomm, NVIDIA & friends). Blazing fast, mobile-enabled, asynchronous, and optimized for advanced GPU data processing use cases. Backed by the Linux Foundation.
☆44 · Updated last month
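Since this page only carries the one-line blurb, here is a minimal sketch of what a kompute compute dispatch looks like, modelled on the example in the project's README; the `loadSpirv` helper and the `multiply.comp.spv` shader path are hypothetical stand-ins, and op class names have shifted between kompute releases:

```cpp
// Minimal kompute dispatch sketch (assumes a compiled compute shader on disk).
#include <kompute/Kompute.hpp>

#include <cstdint>
#include <fstream>
#include <iostream>
#include <memory>
#include <string>
#include <vector>

// Hypothetical helper: read a compiled SPIR-V binary into 32-bit words.
std::vector<uint32_t> loadSpirv(const std::string& path) {
    std::ifstream file(path, std::ios::binary | std::ios::ate);
    const auto bytes = static_cast<size_t>(file.tellg());
    std::vector<uint32_t> words(bytes / sizeof(uint32_t));
    file.seekg(0);
    file.read(reinterpret_cast<char*>(words.data()), bytes);
    return words;
}

int main() {
    kp::Manager mgr;  // picks a Vulkan device and owns GPU resources

    // Host-visible tensors for the inputs and the output
    auto tensorInA = mgr.tensor({2.0f, 4.0f, 6.0f});
    auto tensorInB = mgr.tensor({0.0f, 1.0f, 2.0f});
    auto tensorOut = mgr.tensor({0.0f, 0.0f, 0.0f});

    std::vector<std::shared_ptr<kp::Tensor>> params = {tensorInA, tensorInB, tensorOut};

    // "multiply.comp.spv" is an assumed filename for a shader computing out = a * b
    auto algo = mgr.algorithm(params, loadSpirv("multiply.comp.spv"));

    // Record and run: copy inputs to the GPU, dispatch, copy the result back
    mgr.sequence()
        ->record<kp::OpTensorSyncDevice>(params)
        ->record<kp::OpAlgoDispatch>(algo)
        ->record<kp::OpTensorSyncLocal>(params)
        ->eval();

    for (float v : tensorOut->vector()) std::cout << v << " ";  // expect: 0 4 12
    std::cout << "\n";
}
```

The `sequence()->record<...>()->eval()` chain is the library's core pattern: operations are recorded into a command buffer and submitted together, which is also what enables the asynchronous evaluation the blurb mentions (the sequence API exposes `evalAsync`/`evalAwait` variants for that).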
Alternatives and similar repositories for kompute:
Users interested in kompute are comparing it to the libraries listed below.
- llama.cpp fork used by GPT4All ☆54 · Updated last month
- Inference of Mamba models in pure C ☆187 · Updated last year
- Experiments with BitNet inference on CPU ☆53 · Updated last year
- Prepare for DeepSeek R1 inference: benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code. ☆70 · Updated last month
- Course Project for COMP4471 on RWKV ☆17 · Updated last year
- GGML implementation of BERT model with Python bindings and quantization ☆56 · Updated last year
- ggml implementation of embedding models including SentenceTransformer and BGE ☆56 · Updated last year
- RWKV in nanoGPT style ☆188 · Updated 9 months ago
- LLM inference in C/C++ ☆67 · Updated last week
- instinct.cpp provides ready-to-use alternatives to the OpenAI Assistant API and built-in utilities for developing AI Agent applications (RAG,… ☆45 · Updated 8 months ago
- Port of Suno AI's Bark in C/C++ for fast inference ☆53 · Updated 11 months ago
- This repository is a read-only mirror of https://gitlab.arm.com/kleidi/kleidiai ☆26 · Updated this week
- AMD-related optimizations for transformer models ☆72 · Updated 4 months ago
- Port of Microsoft's BioGPT in C/C++ using ggml ☆87 · Updated last year
- Python bindings for ggml ☆140 · Updated 6 months ago
- Schola is a plugin for enabling Reinforcement Learning (RL) in Unreal Engine. It provides tools to help developers create environments, d… ☆33 · Updated last month
- Lightweight Llama 3 8B Inference Engine in CUDA C ☆47 · Updated last week
- Cortex.Tensorrt-LLM is a C++ inference library that can be loaded by any server at runtime. It submodules NVIDIA’s TensorRT-LLM for GPU a… ☆43 · Updated 6 months ago
- GPT2 implementation in C++ using Ort ☆26 · Updated 4 years ago
- llama.cpp fork with additional SOTA quants and improved performance ☆231 · Updated this week
- asynchronous/distributed speculative evaluation for llama3 ☆39 · Updated 7 months ago
- AirLLM 70B inference with single 4GB GPU ☆12 · Updated 7 months ago
- Editor with LLM generation tree exploration ☆65 · Updated last month
- Train your own small bitnet model ☆65 · Updated 5 months ago
- Inference Llama 2 in one file of pure C++ ☆83 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆52 · Updated last year
- Estimating hardware and cloud costs of LLMs and transformer projects ☆14 · Updated last year
- 1.58-bit LLaMa model ☆81 · Updated 11 months ago
- cortex.llamacpp is a high-efficiency C++ inference engine for edge computing. It is a dynamic library that can be loaded by any server a… ☆36 · Updated this week
- LLM training in simple, raw C/CUDA ☆92 · Updated 11 months ago