gf712 / gpt2-cpp
GPT-2 implementation in C++ using Ort (ONNX Runtime)
☆26 · Updated 4 years ago
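The `Ort::` API named in the description is ONNX Runtime's C++ wrapper. As a rough illustration of what running an exported GPT-2 graph through that API looks like, here is a minimal sketch; the `gpt2.onnx` path and the `input_ids`/`logits` tensor names are assumptions about how the graph was exported, not details taken from this repository.

```cpp
// Minimal sketch: single forward pass of an exported GPT-2 graph via ONNX Runtime's C++ API.
#include <onnxruntime_cxx_api.h>
#include <algorithm>
#include <array>
#include <cstdint>
#include <iostream>
#include <vector>

int main() {
    // Environment and session: load the exported GPT-2 graph (path is hypothetical;
    // on Windows the session constructor expects a wide-character path).
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "gpt2-cpp");
    Ort::SessionOptions opts;
    Ort::Session session(env, "gpt2.onnx", opts);

    // A short prompt as BPE token ids (placeholder values, normally produced by a tokenizer).
    std::vector<int64_t> input_ids = {15496, 11, 995};
    std::array<int64_t, 2> shape = {1, static_cast<int64_t>(input_ids.size())};  // [batch, seq]

    Ort::MemoryInfo mem = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
    Ort::Value input = Ort::Value::CreateTensor<int64_t>(
        mem, input_ids.data(), input_ids.size(), shape.data(), shape.size());

    // Tensor names are assumptions about the export, not taken from gpt2-cpp.
    const char* in_names[]  = {"input_ids"};
    const char* out_names[] = {"logits"};
    auto outputs = session.Run(Ort::RunOptions{nullptr},
                               in_names, &input, 1, out_names, 1);

    // Assuming logits of shape [batch, seq, vocab], greedily pick the next token.
    int64_t vocab = outputs[0].GetTensorTypeAndShapeInfo().GetShape().back();
    const float* logits = outputs[0].GetTensorData<float>();
    const float* last = logits + (input_ids.size() - 1) * vocab;
    int64_t next = std::max_element(last, last + vocab) - last;
    std::cout << "next token id: " << next << "\n";
    return 0;
}
```

Greedy decoding over the final position's logits is used here only to keep the sketch short; the actual repository may sample differently.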
Alternatives and similar repositories for gpt2-cpp:
Users interested in gpt2-cpp are comparing it to the libraries listed below.
- Experiments with BitNet inference on CPU ☆53 · Updated 10 months ago
- LLM training in simple, raw C/CUDA ☆91 · Updated 9 months ago
- GGML implementation of BERT model with Python bindings and quantization. ☆53 · Updated last year
- Inference Llama 2 in one file of pure C++ ☆81 · Updated last year
- Fork of llama.cpp, extended for GPT-NeoX, RWKV-v4, and Falcon models ☆29 · Updated last year
- Inference of Mamba models in pure C ☆183 · Updated 11 months ago
- ☆124 · Updated last year
- Tiny C++11 GPT-2 inference implementation from scratch ☆55 · Updated last month
- General purpose GPU compute framework built on Vulkan to support 1000s of cross vendor graphics cards (AMD, Qualcomm, NVIDIA & friends). … ☆45 · Updated 4 months ago
- RWKV in nanoGPT style ☆187 · Updated 8 months ago
- minimal C implementation of speculative decoding based on llama2.c ☆18 · Updated 7 months ago
- Inference Llama 2 in one file of pure C & one file with CUDA ☆21 · Updated last year
- tinygrad port of the RWKV large language model. ☆44 · Updated 8 months ago
- Train your own small bitnet model ☆64 · Updated 4 months ago
- Course Project for COMP4471 on RWKV ☆17 · Updated last year
- Port of Suno AI's Bark in C/C++ for fast inference ☆55 · Updated 10 months ago
- Inference Llama 2 in one file of pure CUDA ☆17 · Updated last year
- Explore training for quantized models ☆15 · Updated last month
- High-Performance SGEMM on CUDA devices ☆76 · Updated last month
- A converter and basic tester for rwkv onnx ☆42 · Updated last year
- Python bindings for ggml ☆137 · Updated 5 months ago
- Inference RWKV with multiple supported backends. ☆33 · Updated this week
- llama.cpp fork with additional SOTA quants and improved performance ☆155 · Updated this week
- A torchless, c++ rwkv implementation using 8bit quantization, written in cuda/hip/vulkan for maximum compatibility and minimum dependenci… ☆309 · Updated last year
- ☆57 · Updated last year
- GGUF parser in Python ☆26 · Updated 6 months ago
- A finetuning pipeline for instruct tuning Raven 14bn using QLORA 4bit and the Ditty finetuning library ☆28 · Updated 8 months ago
- This repository contains an implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT)… ☆61 · Updated last year
- Inference Vision Transformer (ViT) in plain C/C++ with ggml ☆255 · Updated 10 months ago