wangyi-fudan / wyGPT
Wang Yi's GPT solution
☆142 · Updated last year
Alternatives and similar repositories for wyGPT
Users interested in wyGPT are comparing it to the libraries listed below.
- throwaway GPT inference ☆140 · Updated last year
- Richard is gaining power ☆192 · Updated 3 weeks ago
- Absolute minimalistic implementation of a GPT-like transformer using only numpy (<650 lines). ☆252 · Updated last year
- Hierarchical Navigable Small Worlds ☆97 · Updated 3 months ago
- 100k real (+100k random) galaxies from a sector. Visualized with Raylib. ☆87 · Updated 3 months ago
- A CLI to manage, install, and configure llama inference implementations in multiple languages ☆67 · Updated last year
- ☆196 · Updated 2 months ago
- ☆252 · Updated 2 years ago
- A graphics engine that executes entirely on the CPU ☆224 · Updated last year
- Agent Based Model on GPU using CUDA 12.2.1 and OpenGL 4.5 (CUDA OpenGL interop) on Windows/Linux ☆74 · Updated 4 months ago
- C++ raytracer that supports custom models. Supports running the calculations on the CPU using C++11 threads or on the GPU via CUDA. ☆75 · Updated 2 years ago
- Port of MiniGPT4 in C++ (4bit, 5bit, 6bit, 8bit, 16bit CPU inference with GGML) ☆567 · Updated last year
- This repo contains a new way to use Bloom filters to do lossless video compression ☆245 · Updated last month
- A graphical C/C++ runtime editor ☆186 · Updated last year
- A floating-point arithmetic that works with types of any mantissa, exponent, or base, in modern header-only C++. ☆82 · Updated 8 months ago
- A BERT that you can train on a (gaming) laptop. ☆209 · Updated last year
- Minimal yet working VPN daemon for Linux ☆106 · Updated 2 months ago
- ☆248 · Updated last year
- Implement recursion using English as the programming language and an LLM as the runtime. ☆238 · Updated 2 years ago
- ☆222 · Updated 6 months ago
- Tensor library & inference framework for machine learning ☆101 · Updated last week
- Improved statistical classifier for immune repertoires ☆175 · Updated 2 years ago
- DiscoGrad - automatically differentiate across conditional branches in C++ programs ☆203 · Updated 10 months ago
- Run and explore Llama models locally with minimal dependencies on CPU ☆191 · Updated 9 months ago
- Revealing example of self-attention, the building block of transformer AI models ☆131 · Updated 2 years ago
- Little Kitten Webserver ☆276 · Updated last year
- Docker-based inference engine for AMD GPUs ☆231 · Updated 9 months ago
- 256,000,000+ points per plot, 60+ FPS on a shitty laptop. The only limit is the size of your RAM. ☆153 · Updated last week
- ☆123 · Updated last month
- Algebraic enhancements for GEMM & AI accelerators ☆277 · Updated 4 months ago