Const-me / Cgml
GPU-targeted, vendor-agnostic AI library for Windows, with a Mistral model implementation.
☆58 · Updated last year
Alternatives and similar repositories for Cgml
Users interested in Cgml are comparing it to the libraries listed below.
- A fork of llama3.c used to do some R&D on inferencing ☆22 · Updated 6 months ago
- A library for incremental loading of large PyTorch checkpoints ☆56 · Updated 2 years ago
- A playground to make it easy to try crazy things ☆33 · Updated last week
- A CLI to manage, install, and configure llama inference implementations in multiple languages ☆67 · Updated last year
- A JPEG Image Compression Service using Partially Homomorphic Encryption ☆31 · Updated 3 months ago
- GGML implementation of BERT model with Python bindings and quantization ☆55 · Updated last year
- Tiny Dream - An embedded, header-only Stable Diffusion C++ implementation ☆262 · Updated last year
- A copy of ONNX models, datasets, and code all in one GitHub repository. Follow the README to learn more. ☆105 · Updated last year
- Hierarchical Navigable Small Worlds ☆97 · Updated 2 months ago
- Port of Suno AI's Bark in C/C++ for fast inference ☆52 · Updated last year
- Experiments with BitNet inference on CPU ☆54 · Updated last year
- A web app to explore topics using an LLM (less typing and more clicks) ☆68 · Updated last year
- A fork of OpenBLAS with Armv8-A SVE (Scalable Vector Extension) support ☆17 · Updated 5 years ago
- Code sample showing how to run and benchmark models on Qualcomm's Windows PCs ☆99 · Updated 8 months ago
- A GPU-accelerated binary vector store ☆47 · Updated 4 months ago
- C++ raytracer that supports custom models. Runs the calculations on the CPU using C++11 threads or on the GPU via CUDA. ☆75 · Updated 2 years ago
- A modern C++ wrapper for TensorFlow ☆50 · Updated last month
- Local LLM inference & management server with built-in OpenAI API ☆31 · Updated last year
- PyTorch script hot swap: change code without unloading your LLM from VRAM ☆126 · Updated 2 months ago
- LLaVA server (llama.cpp) ☆180 · Updated last year
- An implementation of bucketMul LLM inference ☆217 · Updated 11 months ago
- Throwaway GPT inference ☆140 · Updated last year
- Agent-based model on GPU using CUDA 12.2.1 and OpenGL 4.5 (CUDA/OpenGL interop) on Windows/Linux ☆72 · Updated 3 months ago
- Richard is gaining power ☆189 · Updated this week
- ☆25 · Updated last year
- ☆57 · Updated 10 months ago
- Inference of Llama/Llama2/Llama3 models in NumPy ☆21 · Updated last year
- ☆187 · Updated 9 months ago
- ☆163 · Updated last year
- Editor with LLM generation tree exploration ☆68 · Updated 4 months ago