harrisonvanderbyl / rwkv-cpp-accelerated
A torchless C++ RWKV implementation using 8-bit quantization, written in CUDA/HIP/Vulkan for maximum compatibility and minimum dependencies
☆311 · Updated last year
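The headline feature is 8-bit weight quantization. As a rough, CPU-side illustration of the general idea only (this is not code from the repository; the per-row symmetric scaling scheme and all names below are assumptions), int8 quantization stores one FP32 scale per weight row and reconstructs values as `scale * q` during the matrix-vector product:

```cpp
// Illustrative sketch of symmetric per-row int8 weight quantization.
// Not taken from rwkv-cpp-accelerated; scheme and names are assumptions.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <vector>

// Quantize one row of FP32 weights to int8 with a single per-row scale.
static float quantize_row(const float* w, int8_t* q, size_t n) {
    float max_abs = 0.0f;
    for (size_t i = 0; i < n; ++i) max_abs = std::max(max_abs, std::fabs(w[i]));
    const float scale = (max_abs > 0.0f) ? max_abs / 127.0f : 1.0f;
    for (size_t i = 0; i < n; ++i)
        q[i] = static_cast<int8_t>(std::lround(w[i] / scale));
    return scale;  // stored alongside the int8 data for dequantization
}

// Dequantize on the fly inside a dot product: y = scale * sum(q[i] * x[i]).
static float dot_q8(const int8_t* q, float scale, const float* x, size_t n) {
    float acc = 0.0f;
    for (size_t i = 0; i < n; ++i) acc += static_cast<float>(q[i]) * x[i];
    return acc * scale;
}

int main() {
    std::vector<float> row = {0.5f, -1.25f, 0.031f, 2.0f};
    std::vector<int8_t> qrow(row.size());
    const float scale = quantize_row(row.data(), qrow.data(), row.size());
    std::vector<float> x = {1.0f, 1.0f, 1.0f, 1.0f};
    std::printf("scale=%f dot=%f\n", scale,
                dot_q8(qrow.data(), scale, x.data(), x.size()));
    return 0;
}
```

In a real runtime the dequantizing dot product would be a CUDA/HIP/Vulkan kernel rather than a scalar loop, but the storage layout (int8 weights plus a small number of FP32 scales) is the part that cuts memory roughly 4x versus FP32.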
Alternatives and similar repositories for rwkv-cpp-accelerated:
Users interested in rwkv-cpp-accelerated are comparing it to the libraries listed below.
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best … ☆414 · Updated last year
- Framework-agnostic Python runtime for RWKV models ☆146 · Updated last year
- ChatGPT-like web UI for RWKVstic ☆100 · Updated 2 years ago
- ☆535 · Updated last year
- INT4/INT5/INT8 and FP16 inference on CPU for the RWKV language model ☆1,511 · Updated last month
- SoTA Transformers with a C backend for fast inference on your CPU. ☆310 · Updated last year
- Implementation of the RWKV language model in pure WebGPU/Rust. ☆299 · Updated this week
- ☆82 · Updated 11 months ago
- RWKV in nanoGPT style ☆189 · Updated 10 months ago
- Python bindings for llama.cpp ☆199 · Updated 2 years ago
- Falcon LLM ggml framework with CPU and GPU support ☆246 · Updated last year
- RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond! ☆148 · Updated 8 months ago
- ☆543 · Updated 4 months ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers, QLoRA ☆123 · Updated last year
- ggml implementation of BERT ☆488 · Updated last year
- GPTQ inference Triton kernel ☆299 · Updated last year
- Instruct-tuning LLaMA on consumer hardware ☆66 · Updated 2 years ago
- Python bindings for ggml ☆140 · Updated 8 months ago
- A converter and basic tester for RWKV ONNX ☆42 · Updated last year
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization ☆687 · Updated 8 months ago
- Train LLaMA with LoRA on one 4090 and merge the LoRA weights to work like Stanford Alpaca. ☆51 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers ☆423 · Updated last year
- A project for real-time training of the RWKV model. ☆49 · Updated 11 months ago
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆274 · Updated last year
- Enhancing LangChain prompts to work better with RWKV models ☆34 · Updated last year
- A fine-tuning pipeline for instruct-tuning Raven 14B with 4-bit QLoRA and the Ditty fine-tuning library ☆28 · Updated 11 months ago
- Merge Transformers language models by using gradient parameters. ☆208 · Updated 8 months ago
- C++ implementation for 💫StarCoder ☆454 · Updated last year
- Automated prompting and scoring framework to evaluate LLMs using updated human knowledge prompts ☆112 · Updated last year
- Extends the original llama.cpp repo to support the RedPajama model. ☆117 · Updated 8 months ago