nomic-ai / kompute
General-purpose GPU compute framework built on Vulkan to support thousands of cross-vendor graphics cards (AMD, Qualcomm, NVIDIA & friends). Blazing fast, mobile-enabled, asynchronous, and optimized for advanced GPU data-processing use cases. Backed by the Linux Foundation.
☆43 · Updated 3 months ago
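For reference, a minimal sketch of how Kompute's C++ API is typically used, loosely adapted from the project's README: create a manager, allocate tensors, and record upload/dispatch/download operations into a sequence. Op and class names such as `kp::OpTensorSyncDevice` can differ between releases, and `loadSpirv` / `multiply.comp.spv` are hypothetical placeholders for loading a compiled compute shader, so treat this as illustrative rather than authoritative.

```cpp
// Minimal Kompute sketch (assumptions noted above; not a verbatim copy of the README).
#include <kompute/Kompute.hpp>
#include <fstream>
#include <iostream>
#include <memory>
#include <string>
#include <vector>

// Hypothetical helper: read a SPIR-V binary (compiled offline, e.g. with
// glslangValidator) into the 32-bit words Kompute expects.
static std::vector<uint32_t> loadSpirv(const std::string& path) {
    std::ifstream file(path, std::ios::binary | std::ios::ate);
    const size_t bytes = static_cast<size_t>(file.tellg());
    std::vector<uint32_t> words(bytes / sizeof(uint32_t));
    file.seekg(0);
    file.read(reinterpret_cast<char*>(words.data()), static_cast<std::streamsize>(bytes));
    return words;
}

int main() {
    kp::Manager mgr;  // picks the default Vulkan device and queue

    // Tensors are created through the manager and backed by GPU-visible memory.
    auto tensorInA = mgr.tensor({ 2.0f, 4.0f, 6.0f });
    auto tensorInB = mgr.tensor({ 0.0f, 1.0f, 2.0f });
    auto tensorOut = mgr.tensor({ 0.0f, 0.0f, 0.0f });

    std::vector<std::shared_ptr<kp::Tensor>> params = { tensorInA, tensorInB, tensorOut };

    // Compute shader (e.g. an element-wise multiply) compiled to SPIR-V beforehand.
    auto algo = mgr.algorithm(params, loadSpirv("multiply.comp.spv"));

    // Record upload, dispatch, and download into one sequence and run it synchronously.
    mgr.sequence()
        ->record<kp::OpTensorSyncDevice>(params)
        ->record<kp::OpAlgoDispatch>(algo)
        ->record<kp::OpTensorSyncLocal>(params)
        ->eval();

    for (float v : tensorOut->vector())
        std::cout << v << " ";  // with the multiply shader above: 0 4 12
    return 0;
}
```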
Alternatives and similar repositories for kompute:
Users interested in kompute are comparing it to the libraries listed below:
- llama.cpp fork with additional SOTA quants and improved performance · ☆126 · Updated this week
- Cortex.Tensorrt-LLM is a C++ inference library that can be loaded by any server at runtime. It submodules NVIDIA's TensorRT-LLM for GPU a… · ☆42 · Updated 3 months ago
- vLLM: a high-throughput and memory-efficient inference and serving engine for LLMs · ☆88 · Updated this week
- GGML implementation of the BERT model with Python bindings and quantization · ☆52 · Updated 11 months ago
- A minimalistic C++ Jinja templating engine for LLM chat templates · ☆96 · Updated this week
- Experiments with BitNet inference on CPU · ☆52 · Updated 9 months ago
- Course project for COMP4471 on RWKV · ☆16 · Updated 11 months ago
- AMD-related optimizations for transformer models · ☆63 · Updated 2 months ago
- Inference of Mamba models in pure C · ☆183 · Updated 10 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs · ☆51 · Updated last year
- Python bindings for ggml · ☆136 · Updated 4 months ago
- Nomic Vulkan fork of llama.cpp · ☆51 · Updated 3 weeks ago
- cortex.llamacpp is a high-efficiency C++ inference engine for edge computing. It is a dynamic library that can be loaded by any server a… · ☆32 · Updated this week
- Data preparation code for the Amber 7B LLM · ☆84 · Updated 8 months ago
- 1.58-bit LLaMa model · ☆80 · Updated 9 months ago
- tinygrad port of the RWKV large language model · ☆44 · Updated 7 months ago
- GPT-2 implementation in C++ using ONNX Runtime (Ort) · ☆25 · Updated 3 years ago
- Stable Diffusion in pure C/C++ · ☆60 · Updated last year
- Port of Suno AI's Bark in C/C++ for fast inference · ☆55 · Updated 9 months ago
- Lightweight Llama 3 8B inference engine in CUDA C · ☆42 · Updated last week
- ggml implementation of embedding models, including SentenceTransformer and BGE · ☆54 · Updated last year
- instinct.cpp provides ready-to-use alternatives to the OpenAI Assistant API and built-in utilities for developing AI agent applications (RAG,… · ☆42 · Updated 6 months ago
- AirLLM: 70B inference with a single 4 GB GPU · ☆12 · Updated 5 months ago
- RWKV in nanoGPT style · ☆184 · Updated 7 months ago
- Train your own small BitNet model · ☆64 · Updated 2 months ago
- 👩🤝🤖 A curated list of datasets for large language models (LLMs), RLHF and related resources (continually updated) · ☆22 · Updated last year
- Port of Microsoft's BioGPT in C/C++ using ggml · ☆88 · Updated 10 months ago
- llama.cpp to PyTorch converter · ☆27 · Updated 9 months ago
- LLM as interpreter for natural-language programming, pseudo-code programming, and flow programming of AI agents · ☆33 · Updated 5 months ago
- Parameter-Efficient Sparsity Crafting from Dense to Mixture-of-Experts for Instruction Tuning on General Tasks · ☆31 · Updated 7 months ago