nomic-ai / kompute
General-purpose GPU compute framework built on Vulkan to support thousands of cross-vendor graphics cards (AMD, Qualcomm, NVIDIA & friends). Blazing fast, mobile-enabled, asynchronous, and optimized for advanced GPU data-processing use cases. Backed by the Linux Foundation.
☆52, updated 9 months ago
Alternatives and similar repositories for kompute
Users interested in kompute are comparing it to the libraries listed below.
- A minimalistic C++ Jinja templating engine for LLM chat templates (☆200, updated 2 months ago)
- Port of Microsoft's BioGPT in C/C++ using ggml (☆85, updated last year)
- GPT2 implementation in C++ using Ort (☆26, updated 4 years ago)
- GGML implementation of the BERT model with Python bindings and quantization (☆58, updated last year)
- Inference of Mamba models in pure C (☆194, updated last year)
- ggml implementation of embedding models including SentenceTransformer and BGE (☆63, updated last year)
- Python bindings for ggml (☆146, updated last year)
- instinct.cpp provides ready-to-use alternatives to the OpenAI Assistant API and built-in utilities for developing AI agent applications (RAG,…) (☆54, updated last year)
- Embeddings-focused small version of the Llama NLP model (☆107, updated 2 years ago)
- Prepare for DeepSeek R1 inference: benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code (☆73, updated 10 months ago)
- Port of Suno AI's Bark in C/C++ for fast inference (☆52, updated last year)
- Course project for COMP4471 on RWKV (☆17, updated last year)
- Web browser version of StarCoder.cpp (☆45, updated 2 years ago)
- Thin wrapper around GGML to make life easier (☆40, updated last month)
- Experiments with BitNet inference on CPU (☆54, updated last year)
- RWKV in nanoGPT style (☆196, updated last year)
- A faithful clone of Karpathy's llama2.c (one-file inference, zero dependencies), fully functional with LLaMA 3 8B base and instruct mode… (☆141, updated last month)
- llama.cpp fork used by GPT4All (☆55, updated 9 months ago)
- ☆64, updated last year
- AirLLM 70B inference with a single 4GB GPU (☆14, updated 5 months ago)
- TTS support with GGML (☆197, updated 2 months ago)
- Asynchronous/distributed speculative evaluation for llama3 (☆39, updated last year)
- Transformer GPU VRAM estimator (☆67, updated last year)
- cortex.llamacpp is a high-efficiency C++ inference engine for edge computing. It is a dynamic library that can be loaded by any server a… (☆41, updated 5 months ago)
- Fast and vectorizable algorithms for searching in a vector of sorted floating-point numbers (☆153, updated 11 months ago)
- Train your own small BitNet model (☆75, updated last year)
- LLM-based code completion engine (☆190, updated 10 months ago)
- GGUF parser in Python (☆28, updated last year)
- llama.cpp to PyTorch converter (☆34, updated last year)
- Fork of llama.cpp, extended for GPT-NeoX, RWKV-v4, and Falcon models (☆28, updated 2 years ago)