a1k0n / a1gpt
throwaway GPT inference
☆141 · Updated last year
Alternatives and similar repositories for a1gpt
Users interested in a1gpt are comparing it to the repositories listed below.
- ☆250 · Updated last year
- Richard is gaining power · ☆199 · Updated 6 months ago
- Port of MiniGPT4 in C++ (4bit, 5bit, 6bit, 8bit, 16bit CPU inference with GGML) · ☆568 · Updated 2 years ago
- Absolute minimalistic implementation of a GPT-like transformer using only numpy (<650 lines). · ☆254 · Updated 2 years ago
- GGUF implementation in C as a library and a tools CLI program · ☆297 · Updated 4 months ago
- Algebraic enhancements for GEMM & AI accelerators · ☆286 · Updated 10 months ago
- A BERT that you can train on a (gaming) laptop. · ☆210 · Updated 2 years ago
- Wang Yi's GPT solution · ☆142 · Updated 2 years ago
- ☆255 · Updated 2 years ago
- Multi-Threaded FP32 Matrix Multiplication on x86 CPUs · ☆374 · Updated 8 months ago
- Visualizing the internal board state of a GPT trained on chess PGN strings, and performing interventions on its internal board state and … · ☆218 · Updated last year
- Tensor library & inference framework for machine learning · ☆118 · Updated 3 months ago
- Autograd to GPT-2 completely from scratch · ☆125 · Updated 5 months ago
- An implementation of bucketMul LLM inference · ☆223 · Updated last year
- C++ raytracer that supports custom models. Supports running the calculations on the CPU using C++11 threads or on the GPU via CUDA. · ☆74 · Updated 3 years ago
- WebGPU LLM inference tuned by hand · ☆151 · Updated 2 years ago
- Hierarchical Navigable Small Worlds · ☆101 · Updated 5 months ago
- a small code base for training large models · ☆318 · Updated 8 months ago
- Tiny Dream - An embedded, Header Only, Stable Diffusion C++ implementation · ☆267 · Updated 2 years ago
- Inference of Mamba models in pure C · ☆196 · Updated last year
- Pytorch script hot swap: Change code without unloading your LLM from VRAM · ☆125 · Updated 8 months ago
- Mistral7B playing DOOM · ☆138 · Updated last year
- DiscoGrad - automatically differentiate across conditional branches in C++ programs · ☆209 · Updated last year
- ☆190 · Updated last year
- ☆296 · Updated last year
- SoTA Transformers with C-backend for fast inference on your CPU. · ☆311 · Updated 2 years ago
- Hashed Lookup Table based Matrix Multiplication (halutmatmul) - Stella Nera accelerator · ☆215 · Updated 2 years ago
- A fork of llama3.c used to do some R&D on inferencing · ☆22 · Updated last year
- Visualize the intermediate output of Mistral 7B · ☆382 · Updated 11 months ago
- A floating point arithmetic which works with types of any mantissa, exponent or base in modern header-only C++. · ☆83 · Updated last year