Const-me / Cgml
GPU-targeted vendor-agnostic AI library for Windows, and Mistral model implementation.
☆54 · Updated last year
Alternatives and similar repositories for Cgml:
Users interested in Cgml are comparing it to the libraries listed below.
- A library for incremental loading of large PyTorch checkpoints ☆56 · Updated 2 years ago
- A fork of llama3.c used to do some R&D on inferencing ☆19 · Updated 3 months ago
- Experiments with BitNet inference on CPU ☆53 · Updated 11 months ago
- A JavaScript library (with TypeScript types) to parse metadata of GGML-based GGUF files ☆47 · Updated 7 months ago
- A playground to make it easy to try crazy things ☆33 · Updated last week
- A CLI to manage, install, and configure llama inference implementations in multiple languages ☆65 · Updated last year
- C++ raytracer that supports custom models. Runs the calculations on the CPU using C++11 threads or on the GPU via CUDA ☆75 · Updated 2 years ago
- A web app to explore topics using an LLM (less typing, more clicks) ☆66 · Updated last year
- An implementation of bucketMul LLM inference ☆215 · Updated 8 months ago
- Local LLM inference & management server with a built-in OpenAI-compatible API ☆31 · Updated 11 months ago
- Tiny Dream - An embedded, header-only Stable Diffusion C++ implementation ☆257 · Updated last year
- GGML implementation of the BERT model with Python bindings and quantization ☆56 · Updated last year
- A copy of ONNX models, datasets, and code, all in one GitHub repository. Follow the README to learn more ☆104 · Updated last year
- Code sample showing how to run and benchmark models on Qualcomm's Windows PCs ☆95 · Updated 5 months ago
- Port of Suno AI's Bark in C/C++ for fast inference ☆53 · Updated 11 months ago
- Richard is gaining power ☆184 · Updated 3 months ago
- WebGPU LLM inference tuned by hand ☆149 · Updated last year
- throwaway GPT inference ☆140 · Updated 9 months ago
- ☆163 · Updated 9 months ago
- Tiny inference-only implementation of LLaMA ☆92 · Updated 11 months ago
- ☆53 · Updated 7 months ago
- Agent Based Model on GPU using CUDA 12.2.1 and OpenGL 4.5 (CUDA/OpenGL interop) on Windows/Linux ☆70 · Updated 2 weeks ago
- A simple library for working with Hugging Face models ☆14 · Updated 2 months ago
- Minimal, clean code for the Byte Pair Encoding (BPE) algorithm commonly used in LLM tokenization, with PyTorch/CUDA ☆36 · Updated last year
- Editor with LLM generation tree exploration ☆65 · Updated last month
- Image Generation API Server - similar to https://text-generator.io but for images ☆50 · Updated 3 months ago
- 33B Chinese LLM, DPO QLoRA, 100K context, AirLLM 70B inference with a single 4GB GPU ☆13 · Updated 10 months ago
- LLaVA server (llama.cpp) ☆178 · Updated last year
- Lightweight Llama 3 8B inference engine in CUDA C ☆47 · Updated 2 weeks ago
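Several of the repositories above deal with LLM tokenization and inference primitives; one of them implements the Byte Pair Encoding (BPE) algorithm. As a rough illustration of what that algorithm does, here is a minimal training-loop sketch (the function names `train_bpe`, `merge`, and `most_common_pair` are invented for this example and do not come from any listed repository):

```python
from collections import Counter

def most_common_pair(ids):
    # Count adjacent token-id pairs and return the most frequent one (or None).
    pairs = Counter(zip(ids, ids[1:]))
    return pairs.most_common(1)[0][0] if pairs else None

def merge(ids, pair, new_id):
    # Replace every non-overlapping occurrence of `pair` with `new_id`.
    out, i = [], 0
    while i < len(ids):
        if i < len(ids) - 1 and (ids[i], ids[i + 1]) == pair:
            out.append(new_id)
            i += 2
        else:
            out.append(ids[i])
            i += 1
    return out

def train_bpe(text, num_merges):
    # Start from raw UTF-8 bytes (ids 0..255) and repeatedly merge the
    # most frequent adjacent pair, assigning each merge a fresh token id.
    ids = list(text.encode("utf-8"))
    merges = {}
    for k in range(num_merges):
        pair = most_common_pair(ids)
        if pair is None:
            break
        new_id = 256 + k
        merges[pair] = new_id
        ids = merge(ids, pair, new_id)
    return ids, merges
```

Each merge shortens the token sequence while growing the vocabulary by one id; real tokenizers add regex pre-splitting, special tokens, and much faster pair counting on top of this core loop.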