gf712 / gpt2-cpp
GPT-2 implementation in C++ using ONNX Runtime (Ort)
☆26 · Updated 4 years ago
Alternatives and similar repositories for gpt2-cpp
Users interested in gpt2-cpp are comparing it to the repositories listed below.
- Inference Llama 2 in one file of pure C++ ☆83 · Updated 2 years ago
- A torchless, c++ rwkv implementation using 8bit quantization, written in cuda/hip/vulkan for maximum compatibility and minimum dependenci… ☆314 · Updated last year
- RWKV in nanoGPT style ☆193 · Updated last year
- Inference of Mamba models in pure C ☆191 · Updated last year
- LLM training in simple, raw C/CUDA ☆104 · Updated last year
- General purpose GPU compute framework built on Vulkan to support 1000s of cross vendor graphics cards (AMD, Qualcomm, NVIDIA & friends). … ☆52 · Updated 7 months ago
- Python bindings for ggml ☆146 · Updated last year
- Inference Vision Transformer (ViT) in plain C/C++ with ggml ☆294 · Updated last year
- Experiments with BitNet inference on CPU ☆54 · Updated last year
- A C++ port of karpathy/llm.c features a tiny torch library while maintaining overall simplicity. ☆36 · Updated last year
- A faithful clone of Karpathy's llama2.c (one file inference, zero dependency) but fully functional with LLaMA 3 8B base and instruct mode… ☆138 · Updated last year
- Fork of llama.cpp, extended for GPT-NeoX, RWKV-v4, and Falcon models ☆28 · Updated 2 years ago
- ☆68 · Updated 2 years ago
- ☆125 · Updated last year
- Inference RWKV v7 in pure C. ☆38 · Updated 3 weeks ago
- Embeddings-focused small version of the Llama NLP model ☆104 · Updated 2 years ago
- Asynchronous/distributed speculative evaluation for llama3 ☆39 · Updated last year
- Port of Meta's Encodec in C/C++ ☆226 · Updated 9 months ago
- GGUF parser in Python ☆28 · Updated last year
- Inference Llama/Llama2/Llama3 Models in NumPy ☆21 · Updated last year
- GGML implementation of BERT model with Python bindings and quantization. ☆56 · Updated last year
- High-Performance SGEMM on CUDA devices ☆101 · Updated 8 months ago
- tinygrad port of the RWKV large language model. ☆45 · Updated 6 months ago
- A converter and basic tester for rwkv onnx ☆43 · Updated last year
- Prepare for DeepSeek R1 inference: Benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code. ☆73 · Updated 7 months ago
- Framework-agnostic Python runtime for RWKV models ☆146 · Updated 2 years ago
- ggml implementation of embedding models including SentenceTransformer and BGE ☆59 · Updated last year
- Train your own small bitnet model ☆75 · Updated 11 months ago
- llama.cpp to PyTorch Converter ☆34 · Updated last year
- Deep Learning Primitives and Mini-Framework for OpenCL ☆200 · Updated last year