nomic-ai / kompute
General-purpose GPU compute framework built on Vulkan to support thousands of cross-vendor graphics cards (AMD, Qualcomm, NVIDIA & friends). Blazing fast, mobile-enabled, asynchronous, and optimized for advanced GPU data processing use cases. Backed by the Linux Foundation.
☆51 · Updated 4 months ago
Alternatives and similar repositories for kompute
Users interested in kompute are comparing it to the libraries listed below.
- A minimalistic C++ Jinja templating engine for LLM chat templates ☆160 · Updated last week
- GGML implementation of the BERT model with Python bindings and quantization ☆56 · Updated last year
- Port of Microsoft's BioGPT in C/C++ using ggml ☆87 · Updated last year
- Port of Suno AI's Bark in C/C++ for fast inference ☆52 · Updated last year
- Prepare for DeepSeek R1 inference: benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code ☆72 · Updated 5 months ago
- Inference of Mamba models in pure C ☆188 · Updated last year
- llama.cpp fork used by GPT4All ☆56 · Updated 4 months ago
- Web browser version of StarCoder.cpp ☆45 · Updated last year
- Python bindings for ggml ☆142 · Updated 10 months ago
- ggml implementation of embedding models including SentenceTransformer and BGE ☆58 · Updated last year
- Experiments with BitNet inference on CPU ☆54 · Updated last year
- Course project for COMP4471 on RWKV ☆17 · Updated last year
- Asynchronous/distributed speculative evaluation for llama3 ☆39 · Updated 11 months ago
- A faithful clone of Karpathy's llama2.c (one-file inference, zero dependencies) but fully functional with LLaMA 3 8B base and instruct mode… ☆128 · Updated 11 months ago
- A C++ port of karpathy/llm.c featuring a tiny torch library while maintaining overall simplicity ☆34 · Updated 11 months ago
- Thin wrapper around GGML to make life easier ☆36 · Updated 3 weeks ago
- RWKV in nanoGPT style ☆191 · Updated last year
- Embeddings-focused small version of the Llama NLP model ☆103 · Updated 2 years ago
- Inference of Llama/Llama2/Llama3 models in NumPy ☆21 · Updated last year
- LLM-based code completion engine ☆193 · Updated 5 months ago
- Onboarding documentation source for the AMD Ryzen™ AI Software Platform. The AMD Ryzen™ AI Software Platform enables developers to take… ☆68 · Updated this week
- Tensor library for machine learning ☆21 · Updated last year
- Train your own small BitNet model ☆74 · Updated 8 months ago
- No-code CLI designed for accelerating ONNX workflows ☆201 · Updated last month
- AMD-related optimizations for transformer models ☆80 · Updated 3 weeks ago
- Port of Meta's Encodec in C/C++ ☆226 · Updated 7 months ago
- instinct.cpp provides ready-to-use alternatives to the OpenAI Assistant API and built-in utilities for developing AI agent applications (RAG,… ☆52 · Updated last year
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆87 · Updated this week
- Inference of Vision Transformer (ViT) in plain C/C++ with ggml ☆287 · Updated last year