harrisonvanderbyl / rwkv-cpp-accelerated
A torchless C++ RWKV implementation using 8-bit quantization, written in CUDA/HIP/Vulkan for maximum compatibility and minimum dependencies
☆313 · Updated 2 years ago
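As context for the description above, here is a minimal C++ sketch of per-row absmax 8-bit weight quantization, the general technique the "8-bit quantization" claim refers to. It is an illustrative assumption, not code taken from the repository; the struct and function names (`QuantizedMatrix`, `quantize8`, `matvec`) are hypothetical, and the repo's CUDA/HIP/Vulkan kernels would run the equivalent int8 dot products on the GPU rather than on the CPU as shown here.

```cpp
// Illustrative sketch of 8-bit weight quantization (per-row absmax scheme).
// Not the repository's actual code; names and layout are assumptions.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <vector>

struct QuantizedMatrix {
    std::vector<int8_t> q;      // quantized weights, row-major [rows * cols]
    std::vector<float>  scale;  // one dequantization scale per row
    int rows = 0, cols = 0;
};

// Quantize: q = round(w / scale), with scale = max(|w_row|) / 127
QuantizedMatrix quantize8(const std::vector<float>& w, int rows, int cols) {
    QuantizedMatrix m{std::vector<int8_t>(rows * cols), std::vector<float>(rows), rows, cols};
    for (int r = 0; r < rows; ++r) {
        float amax = 0.f;
        for (int c = 0; c < cols; ++c)
            amax = std::max(amax, std::fabs(w[r * cols + c]));
        float s = amax > 0.f ? amax / 127.f : 1.f;
        m.scale[r] = s;
        for (int c = 0; c < cols; ++c)
            m.q[r * cols + c] = static_cast<int8_t>(std::lround(w[r * cols + c] / s));
    }
    return m;
}

// Matrix-vector product y = W x, dequantizing on the fly:
// accumulate over int8 weights, then apply the per-row scale once.
std::vector<float> matvec(const QuantizedMatrix& m, const std::vector<float>& x) {
    std::vector<float> y(m.rows, 0.f);
    for (int r = 0; r < m.rows; ++r) {
        float acc = 0.f;
        for (int c = 0; c < m.cols; ++c)
            acc += static_cast<float>(m.q[r * m.cols + c]) * x[c];
        y[r] = acc * m.scale[r];
    }
    return y;
}

int main() {
    std::vector<float> w = {0.5f, -1.0f, 0.25f, 2.0f, 0.1f, -0.3f};  // 2x3 toy weights
    std::vector<float> x = {1.f, 2.f, 3.f};
    auto q = quantize8(w, 2, 3);
    auto y = matvec(q, x);
    std::printf("y = %.3f %.3f\n", y[0], y[1]);
    return 0;
}
```

Per-row scales keep quantization error localized to each output channel, and storing weights as int8 roughly quarters memory use versus FP32, which is the usual motivation for this kind of scheme in CPU/GPU inference engines.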
Alternatives and similar repositories for rwkv-cpp-accelerated
Users interested in rwkv-cpp-accelerated are comparing it to the libraries listed below.
- Framework-agnostic Python runtime for RWKV models ☆147 · Updated 2 years ago
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best … ☆412 · Updated 2 years ago
- SoTA Transformers with C-backend for fast inference on your CPU. ☆311 · Updated 2 years ago
- RWKV in nanoGPT style ☆197 · Updated last year
- Python bindings for ggml ☆147 · Updated last year
- ChatGPT-like Web UI for RWKVstic ☆100 · Updated 2 years ago
- ☆535 · Updated 2 years ago
- Python bindings for llama.cpp ☆199 · Updated 2 years ago
- INT4/INT5/INT8 and FP16 inference on CPU for RWKV language model ☆1,563 · Updated 10 months ago
- ggml implementation of BERT ☆498 · Updated last year
- A converter and basic tester for RWKV ONNX ☆43 · Updated 2 years ago
- LLM-based code completion engine ☆189 · Updated last year
- RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond! ☆148 · Updated last year
- Falcon LLM ggml framework with CPU and GPU support ☆249 · Updated 2 years ago
- This project is established for real-time training of the RWKV model. ☆50 · Updated last year
- Train LLaMA with LoRA on one 4090 and merge the LoRA weights to work like Stanford Alpaca. ☆52 · Updated 2 years ago
- tinygrad port of the RWKV large language model. ☆45 · Updated 10 months ago
- Implementation of the RWKV language model in pure WebGPU/Rust. ☆338 · Updated 3 weeks ago
- fastLLaMa: An experimental high-performance framework for running Decoder-only LLMs with 4-bit quantization in Python using a C/C++ backe… ☆412 · Updated 2 years ago
- ☆80 · Updated last year
- Inference of Mamba and Mamba2 models in pure C ☆196 · Updated last week
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆280 · Updated 2 years ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA ☆124 · Updated 2 years ago
- A finetuning pipeline for instruct tuning Raven 14B using 4-bit QLoRA and the Ditty finetuning library ☆28 · Updated last year
- ☆169 · Updated 3 weeks ago
- Enhancing LangChain prompts to work better with RWKV models ☆34 · Updated 2 years ago
- Code for the paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot" with LLaMA implementation. ☆71 · Updated 2 years ago
- CLIP inference in plain C/C++ with no extra dependencies ☆549 · Updated 7 months ago
- ☆552 · Updated last year
- A lightweight, hackable, and efficient framework for training and fine-tuning language models ☆187 · Updated this week