gf712 / gpt2-cpp
GPT-2 implementation in C++ using ONNX Runtime (Ort)
☆26 · Updated 4 years ago
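For context, below is a minimal sketch of what one GPT-2 forward pass through the ONNX Runtime C++ (`Ort`) API typically looks like. The model path, the tensor names (`input_ids`, `logits`) and the token IDs are illustrative assumptions based on a typical Hugging Face GPT-2 ONNX export, not details taken from this repository.

```cpp
// Minimal sketch: one GPT-2 forward pass through the ONNX Runtime C++ (Ort) API.
// Model path, tensor names and token IDs are illustrative assumptions.
#include <onnxruntime_cxx_api.h>
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "gpt2-cpp-sketch");
    Ort::SessionOptions opts;
    opts.SetIntraOpNumThreads(1);
    Ort::Session session(env, "gpt2.onnx", opts);  // hypothetical exported model path

    // Pre-tokenized prompt; a real program needs a GPT-2 BPE tokenizer.
    std::vector<int64_t> input_ids = {15496, 11, 995};
    std::vector<int64_t> shape = {1, static_cast<int64_t>(input_ids.size())};

    Ort::MemoryInfo mem = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
    Ort::Value input = Ort::Value::CreateTensor<int64_t>(
        mem, input_ids.data(), input_ids.size(), shape.data(), shape.size());

    // Input/output names assumed from a typical GPT-2 ONNX export.
    const char* in_names[]  = {"input_ids"};
    const char* out_names[] = {"logits"};
    auto outputs = session.Run(Ort::RunOptions{nullptr}, in_names, &input, 1, out_names, 1);

    // logits has shape [batch, seq_len, vocab]; greedy-pick the next token
    // from the distribution at the last position.
    std::vector<int64_t> dims = outputs[0].GetTensorTypeAndShapeInfo().GetShape();
    const float* logits = outputs[0].GetTensorData<float>();
    const int64_t vocab = dims[2];
    const float* last = logits + (dims[1] - 1) * vocab;
    int64_t next_id = std::max_element(last, last + vocab) - last;
    std::cout << "next token id: " << next_id << "\n";
    return 0;
}
```

A full generation loop would append `next_id` to `input_ids` and run the session again (or reuse past key/value outputs if the export includes them).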
Alternatives and similar repositories for gpt2-cpp:
Users interested in gpt2-cpp are comparing it to the libraries listed below.
- ☆124 · Updated last year
- Train your own small bitnet model ☆65 · Updated 5 months ago
- Experiments with BitNet inference on CPU ☆53 · Updated last year
- Inference of Mamba models in pure C ☆187 · Updated last year
- Embeddings focused small version of Llama NLP model ☆103 · Updated last year
- RWKV in nanoGPT style ☆188 · Updated 9 months ago
- Python bindings for ggml ☆140 · Updated 6 months ago
- ☆60 · Updated 2 years ago
- LLM training in simple, raw C/CUDA ☆92 · Updated 11 months ago
- Inference RWKV v5, v6 and v7 with Qualcomm AI Engine Direct SDK ☆60 · Updated this week
- GGML implementation of BERT model with Python bindings and quantization. ☆56 · Updated last year
- tinygrad port of the RWKV large language model. ☆44 · Updated 3 weeks ago
- Inference Vision Transformer (ViT) in plain C/C++ with ggml ☆264 · Updated 11 months ago
- High-Performance SGEMM on CUDA devices ☆87 · Updated 2 months ago
- A torchless, c++ rwkv implementation using 8bit quantization, written in cuda/hip/vulkan for maximum compatibility and minimum dependenci… ☆310 · Updated last year
- Tiny C++11 GPT-2 inference implementation from scratch ☆57 · Updated 3 months ago
- Inference RWKV with multiple supported backends. ☆39 · Updated this week
- General purpose GPU compute framework built on Vulkan to support 1000s of cross vendor graphics cards (AMD, Qualcomm, NVIDIA & friends). … ☆44 · Updated last month
- minimal C implementation of speculative decoding based on llama2.c ☆20 · Updated 8 months ago
- Fork of llama.cpp, extended for GPT-NeoX, RWKV-v4, and Falcon models ☆29 · Updated last year
- A converter and basic tester for rwkv onnx ☆42 · Updated last year
- qwen2 and llama3 cpp implementation ☆43 · Updated 9 months ago
- This repository is a read-only mirror of https://gitlab.arm.com/kleidi/kleidiai ☆26 · Updated this week
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆87 · Updated this week
- Prepare for DeepSeek R1 inference: Benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code. ☆70 · Updated last month
- Course Project for COMP4471 on RWKV ☆17 · Updated last year
- ☆26 · Updated 2 years ago
- Port of Facebook's LLaMA model in C/C++ ☆20 · Updated last year
- Inference Llama 2 in one file of pure C++ ☆83 · Updated last year
- ggml implementation of embedding models including SentenceTransformer and BGE ☆56 · Updated last year