ZeldaHuang / rwkv-cpp-server
Easily deploy your RWKV model
☆18 · Updated last year
Related projects
Alternatives and complementary repositories for rwkv-cpp-server
- Trying to deconstruct RWKV in understandable terms ☆14 · Updated last year
- Enhancing LangChain prompts to work better with RWKV models ☆34 · Updated last year
- A converter and basic tester for RWKV ONNX ☆41 · Updated 9 months ago
- A project for real-time training of the RWKV model ☆50 · Updated 5 months ago
- 📖 — Notebooks related to RWKV ☆59 · Updated last year
- ☆13 · Updated last year
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best … ☆10 · Updated last year
- Fine-tuning the RWKV-World model ☆25 · Updated last year
- tinygrad port of the RWKV large language model ☆43 · Updated 4 months ago
- ☆81 · Updated 5 months ago
- Script and instructions for fine-tuning a large RWKV model on your data with the Alpaca dataset ☆31 · Updated last year
- Role-playing based on the RWKV model; in fact a fork of RWKV_Role_Playing modified beyond recognition ☆16 · Updated last year
- Chatbot that answers frequently asked questions in French, English, and Tunisian using the Rasa NLU framework and RWKV-4-Raven ☆13 · Updated last year
- Framework-agnostic Python runtime for RWKV models ☆145 · Updated last year
- ☆42 · Updated last year
- Training a reward model for RLHF using RWKV ☆14 · Updated last year
- RWKV fine-tuning ☆35 · Updated 6 months ago
- A torchless C++ RWKV implementation using 8-bit quantization, written in CUDA/HIP/Vulkan for maximum compatibility and minimum dependencies ☆307 · Updated 9 months ago
- ☆40 · Updated last year
- Train LLaMA with LoRA on a single 4090 and merge the LoRA weights to work like Stanford Alpaca ☆50 · Updated last year
- Easily deploy your LLM (large language model) server on a GPU machine without a public address ☆14 · Updated 6 months ago
- Run ONNX RWKV-v4 models with GPU acceleration using DirectML [Windows], or just on CPU [Windows AND Linux]; limited to the 430M model at this… ☆20 · Updated last year
- ☆82 · Updated this week
- RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond! ☆133 · Updated 2 months ago
- SparseGPT + GPTQ compression of LLMs like LLaMa, OPT, Pythia ☆41 · Updated last year
- BlinkDL's RWKV-v4 running in the browser ☆47 · Updated last year
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees", adapted for Llama models ☆36 · Updated last year
- A fine-tuning pipeline for instruct-tuning Raven 14B using QLoRA 4-bit and the Ditty fine-tuning library ☆28 · Updated 5 months ago
- Awesome RWKV Prompts for general users: user-friendly, ready-to-use prompt examples ☆29 · Updated 3 months ago
- ☆21 · Updated last year