harrisonvanderbyl / rwkv-cpp-accelerated
A torchless C++ RWKV implementation using 8-bit quantization, written in CUDA/HIP/Vulkan for maximum compatibility and minimal dependencies
☆314 · Updated last year
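For context on the 8-bit quantization the description mentions: int8 inference engines generally store weights as int8 values plus a floating-point scale, trading a little precision for weights roughly 4x smaller than FP32. Below is a minimal C++ sketch of symmetric int8 quantization; the names are hypothetical and not taken from rwkv-cpp-accelerated.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Minimal sketch of symmetric per-tensor int8 quantization.
// Hypothetical names; illustrative of the general technique only.
struct QuantizedTensor {
    std::vector<int8_t> q; // quantized values
    float scale;           // dequantize with: w ≈ q[i] * scale
};

QuantizedTensor quantize_int8(const std::vector<float>& w) {
    // Find the largest magnitude and map [-amax, amax] onto [-127, 127].
    float amax = 0.0f;
    for (float x : w) amax = std::max(amax, std::fabs(x));
    QuantizedTensor out{std::vector<int8_t>(w.size()), amax / 127.0f};
    const float inv = out.scale > 0.0f ? 1.0f / out.scale : 0.0f;
    for (size_t i = 0; i < w.size(); ++i) {
        float v = std::clamp(w[i] * inv, -127.0f, 127.0f);
        out.q[i] = static_cast<int8_t>(std::lround(v));
    }
    return out;
}

float dequantize_int8(const QuantizedTensor& t, size_t i) {
    return static_cast<float>(t.q[i]) * t.scale; // approximate original weight
}
```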
Alternatives and similar repositories for rwkv-cpp-accelerated
Users interested in rwkv-cpp-accelerated are comparing it to the libraries listed below.
- Framework-agnostic Python runtime for RWKV models ☆146 · Updated 2 years ago
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best … ☆411 · Updated 2 years ago
- SoTA transformers with a C backend for fast inference on your CPU ☆307 · Updated last year
- INT4/INT5/INT8 and FP16 inference on CPU for the RWKV language model ☆1,547 · Updated 6 months ago
- Implementation of the RWKV language model in pure WebGPU/Rust ☆315 · Updated 3 weeks ago
- ChatGPT-like web UI for RWKVstic ☆100 · Updated 2 years ago
- RWKV in nanoGPT style ☆192 · Updated last year
- Train LLaMA with LoRA on a single 4090 and merge the LoRA weights to work like Stanford Alpaca ☆52 · Updated 2 years ago
- ggml implementation of BERT ☆493 · Updated last year
- ☆535 · Updated last year
- Python bindings for ggml ☆146 · Updated last year
- Python bindings for llama.cpp ☆198 · Updated 2 years ago
- RWKV infctx trainer, for training arbitrary context sizes, up to 10k and beyond! ☆148 · Updated last year
- tinygrad port of the RWKV large language model ☆45 · Updated 6 months ago
- Falcon LLM ggml framework with CPU and GPU support ☆248 · Updated last year
- A converter and basic tester for RWKV ONNX ☆43 · Updated last year
- fastLLaMa: An experimental high-performance framework for running decoder-only LLMs with 4-bit quantization in Python, using a C/C++ backend ☆412 · Updated 2 years ago
- ☆81 · Updated last year
- LLM-based code completion engine ☆191 · Updated 8 months ago
- Embeddings-focused small version of the LLaMA NLP model ☆104 · Updated 2 years ago
- A project for real-time training of the RWKV model ☆49 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers, with QLoRA ☆124 · Updated 2 years ago
- Inference of Mamba models in pure C ☆192 · Updated last year
- Enhancing LangChain prompts to work better with RWKV models ☆34 · Updated 2 years ago
- CLIP inference in plain C/C++ with no extra dependencies ☆522 · Updated 3 months ago
- Inference code for Facebook LLaMA models with Wrapyfi support ☆129 · Updated 2 years ago
- A fine-tuning pipeline for instruction-tuning Raven 14B using 4-bit QLoRA and the Ditty fine-tuning library ☆29 · Updated last year
- ☆40 · Updated 2 years ago
- Simple, hackable and fast implementation for training/finetuning medium-sized LLaMA-based models ☆181 · Updated last month
- C++ implementation for BLOOM ☆808 · Updated 2 years ago