Const-me / Cgml
GPU-targeted vendor-agnostic AI library for Windows, and Mistral model implementation.
☆58 · Updated last year
Alternatives and similar repositories for Cgml
Users interested in Cgml are comparing it to the libraries listed below.
- A library for incremental loading of large PyTorch checkpoints ☆56 · Updated 2 years ago
- C++ raytracer that supports custom models; the calculations can run on the CPU using C++11 threads or on the GPU via CUDA. ☆75 · Updated 2 years ago
- ☆188 · Updated 10 months ago
- A fork of llama3.c used to do some R&D on inferencing ☆22 · Updated 6 months ago
- Richard is gaining power ☆192 · Updated 3 weeks ago
- A playground to make it easy to try crazy things ☆33 · Updated last month
- A JPEG Image Compression Service using Partial Homomorphic Encryption. ☆31 · Updated 4 months ago
- Experiments with BitNet inference on CPU ☆54 · Updated last year
- Wang Yi's GPT solution ☆142 · Updated last year
- Port of Suno AI's Bark in C/C++ for fast inference ☆52 · Updated last year
- ☆196 · Updated 2 months ago
- A GPU-Accelerated Binary Vector Store ☆47 · Updated 5 months ago
- ☆163 · Updated last year
- A CLI to manage, install, and configure llama inference implementations in multiple languages ☆67 · Updated last year
- LLaVA server (llama.cpp). ☆180 · Updated last year
- Tiny Dream - An embedded, Header Only, Stable Diffusion C++ implementation ☆263 · Updated last year
- Hierarchical Navigable Small Worlds ☆97 · Updated 3 months ago
- throwaway GPT inference ☆140 · Updated last year
- Mistral7B playing DOOM ☆132 · Updated last year
- Code sample showing how to run and benchmark models on Qualcomm's Windows PCs ☆100 · Updated 9 months ago
- Algebraic enhancements for GEMM & AI accelerators ☆277 · Updated 4 months ago
- Official implementation of "WhisperNER: Unified Open Named Entity and Speech Recognition" ☆194 · Updated 4 months ago
- Revealing example of self-attention, the building block of transformer AI models ☆131 · Updated 2 years ago
- A graphics engine that executes entirely on the CPU ☆224 · Updated last year
- ☆149 · Updated 2 weeks ago
- Lightweight Llama 3 8B Inference Engine in CUDA C ☆47 · Updated 3 months ago
- Absolute minimalistic implementation of a GPT-like transformer using only numpy (<650 lines). ☆253 · Updated last year
- A hockey shootout game with a custom game engine developed on Windows and released on Android ☆53 · Updated 2 years ago
- A copy of ONNX models, datasets, and code all in one GitHub repository. Follow the README to learn more. ☆105 · Updated last year
- Editor with LLM generation tree exploration ☆71 · Updated 5 months ago