gf712 / gpt2-cpp
GPT-2 implementation in C++ using ONNX Runtime (Ort)
☆26 · Updated 4 years ago
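For orientation, here is a minimal sketch of what GPT-2 inference through the ONNX Runtime C++ (Ort) API typically looks like. The model path, tensor names, and token ids below are assumptions for illustration, not taken from the repository.

```cpp
#include <onnxruntime_cxx_api.h>
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <vector>

int main() {
    // Set up the runtime environment and load an exported GPT-2 ONNX graph.
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "gpt2");
    Ort::SessionOptions opts;
    Ort::Session session(env, "gpt2.onnx", opts);  // hypothetical model path

    // Token ids for a short prompt; a real run would produce these with a BPE tokenizer.
    std::vector<int64_t> input_ids = {15496, 11, 995};  // illustrative ids only
    std::vector<int64_t> shape = {1, static_cast<int64_t>(input_ids.size())};

    Ort::MemoryInfo mem = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
    Ort::Value input = Ort::Value::CreateTensor<int64_t>(
        mem, input_ids.data(), input_ids.size(), shape.data(), shape.size());

    // Graph input/output names depend on how the model was exported; these are assumptions.
    const char* input_names[]  = {"input_ids"};
    const char* output_names[] = {"logits"};
    auto outputs = session.Run(Ort::RunOptions{nullptr},
                               input_names, &input, 1, output_names, 1);

    // Logits come back as [batch, seq, vocab]; greedily pick the next token
    // from the distribution at the last position.
    auto out_shape = outputs[0].GetTensorTypeAndShapeInfo().GetShape();
    const float* logits = outputs[0].GetTensorData<float>();
    const int64_t vocab = out_shape[2];
    const float* last = logits + (out_shape[1] - 1) * vocab;
    const int64_t next = std::max_element(last, last + vocab) - last;
    std::cout << "next token id: " << next << "\n";
}
```

Build it against the ONNX Runtime headers and shared library; the greedy argmax at the end stands in for whatever sampling loop the actual project uses.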
Alternatives and similar repositories for gpt2-cpp
Users interested in gpt2-cpp are comparing it to the libraries listed below.
- General purpose GPU compute framework built on Vulkan to support 1000s of cross vendor graphics cards (AMD, Qualcomm, NVIDIA & friends). … ☆52 · Updated 9 months ago
- Python bindings for ggml ☆146 · Updated last year
- Inference Llama 2 in one file of pure C++ ☆86 · Updated 2 years ago
- Experiments with BitNet inference on CPU ☆54 · Updated last year
- Inference of Mamba models in pure C ☆194 · Updated last year
- LLM training in simple, raw C/CUDA ☆108 · Updated last year
- A torchless, C++ RWKV implementation using 8-bit quantization, written in CUDA/HIP/Vulkan for maximum compatibility and minimum dependenci… ☆313 · Updated last year
- RWKV in nanoGPT style ☆195 · Updated last year
- Prepare for DeepSeek R1 inference: Benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code. ☆73 · Updated 10 months ago
- Inference Vision Transformer (ViT) in plain C/C++ with ggml ☆302 · Updated last year
- ☆125 · Updated 2 years ago
- A minimalistic C++ Jinja templating engine for LLM chat templates ☆200 · Updated 2 months ago
- Fork of llama.cpp, extended for GPT-NeoX, RWKV-v4, and Falcon models ☆28 · Updated 2 years ago
- Course Project for COMP4471 on RWKV ☆17 · Updated last year
- ☆70 · Updated 2 years ago
- A C++ port of karpathy/llm.c featuring a tiny torch library while maintaining overall simplicity ☆40 · Updated last year
- Inference Llama/Llama2/Llama3 models in NumPy ☆21 · Updated 2 years ago
- tinygrad port of the RWKV large language model ☆45 · Updated 9 months ago
- Minimal C implementation of speculative decoding based on llama2.c ☆26 · Updated last year
- GGUF parser in Python ☆28 · Updated last year
- The CUDA version of the RWKV language model (https://github.com/BlinkDL/RWKV-LM) ☆227 · Updated this week
- High-Performance SGEMM on CUDA devices ☆113 · Updated 10 months ago
- Asynchronous/distributed speculative evaluation for llama3 ☆39 · Updated last year
- GGML implementation of the BERT model with Python bindings and quantization ☆58 · Updated last year
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆94 · Updated this week
- ggml implementation of embedding models including SentenceTransformer and BGE ☆63 · Updated last year
- Embeddings-focused small version of the Llama NLP model ☆107 · Updated 2 years ago
- Tiny Dream - An embedded, header-only Stable Diffusion C++ implementation ☆266 · Updated 2 years ago
- Inference RWKV v7 in pure C ☆42 · Updated 2 months ago
- A converter and basic tester for RWKV ONNX ☆43 · Updated last year