nomic-ai / kompute
General-purpose GPU compute framework built on Vulkan to support thousands of cross-vendor graphics cards (AMD, Qualcomm, NVIDIA & friends). Blazing fast, mobile-enabled, asynchronous, and optimized for advanced GPU data-processing use cases. Backed by the Linux Foundation.
☆52 · Updated 8 months ago
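For orientation, here is a minimal sketch of what a kompute dispatch looks like, following the Manager/Tensor/Sequence pattern from the project's documentation; the SPIR-V bytes are a placeholder (a real shader would be compiled separately), and exact signatures may vary across kompute versions:

```cpp
// Minimal kompute sketch: upload two tensors, dispatch a compute shader,
// and read back the result. Assumes the v0.8-style API; "spirv" below is
// a placeholder for SPIR-V words compiled from a real shader.
#include <kompute/Kompute.hpp>
#include <memory>
#include <vector>

int main() {
    kp::Manager mgr;  // selects a Vulkan device and queue

    auto a   = mgr.tensor({1.0f, 2.0f, 3.0f});
    auto b   = mgr.tensor({2.0f, 2.0f, 2.0f});
    auto out = mgr.tensor({0.0f, 0.0f, 0.0f});
    std::vector<std::shared_ptr<kp::Tensor>> params = {a, b, out};

    // Placeholder: SPIR-V for an element-wise multiply kernel.
    std::vector<uint32_t> spirv = {/* compiled shader words */};
    auto algo = mgr.algorithm(params, spirv);

    mgr.sequence()
        ->record<kp::OpTensorSyncDevice>(params)  // host -> device copy
        ->record<kp::OpAlgoDispatch>(algo)        // run the kernel
        ->record<kp::OpTensorSyncLocal>(params)   // device -> host copy
        ->eval();                                 // submit and wait

    // With a real multiply shader, out->vector() would hold {2, 4, 6}.
    return 0;
}
```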
Alternatives and similar repositories for kompute
Users interested in kompute are comparing it to the libraries listed below.
- A minimalistic C++ Jinja templating engine for LLM chat templates ☆193 · Updated last month
- Inference of Mamba models in pure C ☆192 · Updated last year
- Prepare for DeepSeek R1 inference: Benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code. ☆74 · Updated 8 months ago
- instinct.cpp provides ready-to-use alternatives to the OpenAI Assistant API and built-in utilities for developing AI Agent applications (RAG,… ☆53 · Updated last year
- Port of Microsoft's BioGPT in C/C++ using ggml ☆85 · Updated last year
- ggml implementation of embedding models including SentenceTransformer and BGE ☆59 · Updated last year
- GGML implementation of BERT model with Python bindings and quantization. ☆55 · Updated last year
- Python bindings for ggml ☆146 · Updated last year
- LLM-based code completion engine ☆190 · Updated 9 months ago
- Course Project for COMP4471 on RWKV ☆17 · Updated last year
- Thin wrapper around GGML to make life easier ☆40 · Updated 4 months ago
- GPT2 implementation in C++ using Ort ☆26 · Updated 4 years ago
- Port of Suno AI's Bark in C/C++ for fast inference ☆52 · Updated last year
- Web browser version of StarCoder.cpp ☆44 · Updated 2 years ago
- RWKV in nanoGPT style ☆193 · Updated last year
- Asynchronous/distributed speculative evaluation for llama3 ☆38 · Updated last year
- Experiments with BitNet inference on CPU ☆54 · Updated last year
- AMD-related optimizations for transformer models ☆93 · Updated 2 weeks ago
- Fast and vectorizable algorithms for searching in a vector of sorted floating point numbers ☆152 · Updated 10 months ago
- A C++ port of karpathy/llm.c, featuring a tiny torch library while maintaining overall simplicity ☆38 · Updated last year
- A faithful clone of Karpathy's llama2.c (one-file inference, zero dependencies) but fully functional with LLaMA 3 8B base and instruct mode… ☆139 · Updated last week
- Train your own small BitNet model ☆75 · Updated last year
- Cortex.Tensorrt-LLM is a C++ inference library that can be loaded by any server at runtime. It submodules NVIDIA's TensorRT-LLM for GPU a… ☆42 · Updated last year
- llama.cpp fork used by GPT4All ☆57 · Updated 8 months ago
- Inference Llama 2 in one file of pure C++ ☆84 · Updated 2 years ago
- Embeddings-focused small version of the Llama NLP model ☆105 · Updated 2 years ago
- cortex.llamacpp is a high-efficiency C++ inference engine for edge computing. It is a dynamic library that can be loaded by any server a… ☆41 · Updated 3 months ago
- Tensor library for machine learning ☆21 · Updated last year
- GGUF parser in Python ☆28 · Updated last year
- Source code for Intel's Polite Guard NLP project ☆37 · Updated 2 months ago