Const-me / Cgml
GPU-targeted vendor-agnostic AI library for Windows, and Mistral model implementation.
☆58 · Updated last year
Alternatives and similar repositories for Cgml
Users interested in Cgml are comparing it to the libraries listed below.
- A library for incremental loading of large PyTorch checkpoints ☆56 · Updated 2 years ago
- Code sample showing how to run and benchmark models on Qualcomm's Windows PCs ☆102 · Updated last year
- A JPEG Image Compression Service using Partially Homomorphic Encryption. ☆31 · Updated 7 months ago
- A playground to make it easy to try crazy things ☆33 · Updated 2 weeks ago
- Richard is gaining power ☆197 · Updated 4 months ago
- throwaway GPT inference ☆140 · Updated last year
- ☆189 · Updated last year
- A GPU Accelerated Binary Vector Store ☆47 · Updated 8 months ago
- ☆198 · Updated 5 months ago
- Wang Yi's GPT solution ☆142 · Updated last year
- Algebraic enhancements for GEMM & AI accelerators ☆281 · Updated 8 months ago
- A copy of ONNX models, datasets, and code all in one GitHub repository. Follow the README to learn more. ☆104 · Updated last year
- C++ raytracer that supports custom models. Supports running the calculations on the CPU using C++11 threads or on the GPU via CUDA. ☆74 · Updated 2 years ago
- ☆163 · Updated last year
- Mistral7B playing DOOM ☆138 · Updated last year
- Revealing example of self-attention, the building block of transformer AI models ☆130 · Updated 2 years ago
- An implementation of bucketMul LLM inference ☆223 · Updated last year
- A CLI to manage, install, and configure llama inference implementations in multiple languages ☆65 · Updated last year
- LLaVA server (llama.cpp). ☆183 · Updated 2 years ago
- Absolute minimalistic implementation of a GPT-like transformer using only numpy (<650 lines). ☆254 · Updated last year
- A graphics engine that executes entirely on the CPU ☆223 · Updated last year
- Hierarchical Navigable Small Worlds ☆101 · Updated 2 months ago
- Advanced Python Function Debugging with MCP Integration. ☆57 · Updated 4 months ago
- Docker-based inference engine for AMD GPUs ☆230 · Updated last year
- ☆62 · Updated last year
- Tiny Dream - An embedded, header-only Stable Diffusion C++ implementation ☆263 · Updated 2 years ago
- Agent Based Model on GPU using CUDA 12.2.1 and OpenGL 4.5 (CUDA OpenGL interop) on Windows/Linux ☆75 · Updated 7 months ago
- WebGPU LLM inference tuned by hand ☆150 · Updated 2 years ago
- Run and explore Llama models locally with minimal dependencies on CPU ☆189 · Updated last year
- Lightweight Llama 3 8B Inference Engine in CUDA C ☆48 · Updated 7 months ago