jiamingkong / rwkv_reward
Training a reward model for RLHF using RWKV.
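The repository's exact training objective is not shown on this page; a common formulation for RLHF reward models is the pairwise (Bradley-Terry) loss over a chosen/rejected completion pair, where the model is trained to score the preferred completion higher. A minimal sketch (the function name and scalar-reward simplification are illustrative, not taken from the repo):

```python
import math

def pairwise_reward_loss(r_chosen: float, r_rejected: float) -> float:
    """Bradley-Terry style preference loss: -log(sigmoid(r_chosen - r_rejected)).

    r_chosen / r_rejected are the scalar rewards the model assigns to the
    preferred and dispreferred completions of the same prompt.
    """
    margin = r_chosen - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# The loss shrinks as the chosen completion's reward pulls ahead:
print(round(pairwise_reward_loss(2.0, 0.0), 4))  # small loss, correct ordering
print(round(pairwise_reward_loss(0.0, 2.0), 4))  # large loss, wrong ordering
```

In practice the scalar rewards would come from a head on top of the RWKV hidden state, and the loss would be averaged over a batch of preference pairs.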
☆14 · Updated last year
Alternatives and similar repositories for rwkv_reward:
Users interested in rwkv_reward are comparing it to the repositories listed below.
- Trying to deconstruct RWKV in understandable terms ☆14 · Updated last year
- JAX implementations of RWKV ☆19 · Updated last year
- Enhancing LangChain prompts to work better with RWKV models ☆34 · Updated last year
- RWKV models and examples powered by candle ☆18 · Updated 3 weeks ago
- Course project for COMP4471 on RWKV ☆17 · Updated last year
- Run ONNX RWKV-v4 models with GPU acceleration using DirectML [Windows], or just on CPU [Windows AND Linux]; limited to the 430M model at this… ☆20 · Updated 2 years ago
- A converter and basic tester for RWKV ONNX models ☆42 · Updated last year
- A project for real-time training of the RWKV model ☆49 · Updated 10 months ago
- A highly customizable, full-scale web backend for web-rwkv, built on axum with the WebSocket protocol ☆26 · Updated 11 months ago
- Easily deploy your RWKV model ☆18 · Updated last year
- Let us make Psychohistory (as in Asimov) a reality, and accessible to everyone. Useful for LLM grounding and games / fiction / business /… ☆40 · Updated last year
- Fine-tuning the RWKV-World model ☆25 · Updated last year
- ☆13 · Updated last year
- BlinkDL's RWKV-v4 running in the browser ☆47 · Updated 2 years ago
- ☆82 · Updated 10 months ago
- ChatGPT-like web UI for RWKVstic ☆19 · Updated last year
- Reinforcement learning toolkit for RWKV (v6, v7, ARWKV): distillation, SFT, RLHF (DPO, ORPO), infinite-context training, alignment. Exploring the… ☆35 · Updated last week
- ☆40 · Updated last year
- ☆11 · Updated last year
- Chatbot that answers frequently asked questions in French, English, and Tunisian using the Rasa NLU framework and RWKV-4-Raven ☆13 · Updated last year
- Script and instructions for fine-tuning a large RWKV model on your data with the Alpaca dataset ☆31 · Updated last year
- Interpretability analysis of language model outliers and attempts to distill the model ☆13 · Updated last year
- Centralised RWKV docs for the community ☆21 · Updated 2 weeks ago
- RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), so it combines the best… ☆10 · Updated last year
- GGML implementation of the BERT model with Python bindings and quantization ☆56 · Updated last year
- ☆42 · Updated last year
- tinygrad port of the RWKV large language model ☆44 · Updated 2 weeks ago