jiamingkong / rwkv_reward
Training a reward model for RLHF using RWKV.
☆15 · Updated 2 years ago
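The repository's one-line description is "Training a reward model for RLHF using RWKV." As a rough illustration only, and not this repository's actual code, a pairwise-preference reward-model training step typically looks like the sketch below. It assumes a generic PyTorch backbone with a scalar value head; every name here is hypothetical, and a tiny GRU stands in for the RWKV backbone so the example runs end to end.

```python
# Hypothetical sketch of a pairwise-preference reward-model step (not this repo's code).
# A small GRU stands in for an RWKV-style backbone; the loss is the standard
# Bradley-Terry pairwise objective used for RLHF reward models.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    def __init__(self, vocab_size=256, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.backbone = nn.GRU(hidden, hidden, batch_first=True)  # stand-in for RWKV
        self.value_head = nn.Linear(hidden, 1)                    # scalar reward per sequence

    def forward(self, tokens):                 # tokens: (batch, seq_len) int64
        h, _ = self.backbone(self.embed(tokens))
        return self.value_head(h[:, -1]).squeeze(-1)  # reward read off the last position

model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# One training step on a toy batch of (chosen, rejected) token sequences.
chosen   = torch.randint(0, 256, (4, 32))
rejected = torch.randint(0, 256, (4, 32))
loss = -F.logsigmoid(model(chosen) - model(rejected)).mean()  # prefer chosen over rejected
opt.zero_grad(); loss.backward(); opt.step()
```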
Alternatives and similar repositories for rwkv_reward
Users that are interested in rwkv_reward are comparing it to the libraries listed below
- JAX implementations of RWKV ☆19 · Updated 2 years ago
- This project provides real-time training of the RWKV model. ☆49 · Updated last year
- Trying to deconstruct RWKV in understandable terms ☆14 · Updated 2 years ago
- BlinkDL's RWKV-v4 running in the browser ☆46 · Updated 2 years ago
- ☆81 · Updated last year
- A converter and basic tester for rwkv onnx ☆42 · Updated last year
- RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond! ☆146 · Updated last year
- Enhancing LangChain prompts to work better with RWKV models ☆34 · Updated 2 years ago
- ☆78 · Updated last year
- SparseGPT + GPTQ Compression of LLMs like LLaMa, OPT, Pythia ☆40 · Updated 2 years ago
- Lightweight continuous batching with OpenAI compatibility using HuggingFace Transformers, including T5 and Whisper. ☆29 · Updated 8 months ago
- ☆40 · Updated 2 years ago
- RWKV centralised docs for the community ☆29 · Updated 3 months ago
- RWKV, in easy to read code ☆72 · Updated 7 months ago
- Let us make Psychohistory (as in Asimov) a reality, and accessible to everyone. Useful for LLM grounding and games / fiction / business /… ☆39 · Updated 2 years ago
- Train your own small bitnet model ☆74 · Updated last year
- ☆39 · Updated 6 months ago
- Framework-agnostic Python runtime for RWKV models ☆146 · Updated 2 years ago
- Easily deploy your rwkv model ☆18 · Updated 2 years ago
- An unsupervised model merging algorithm for Transformers-based language models. ☆106 · Updated last year
- ToK aka Tree of Knowledge for Large Language Models LLM. It's a novel dataset that inspires knowledge symbolic correlation in simple inpu… ☆54 · Updated 2 years ago
- ☆13 · Updated 2 years ago
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best … ☆10 · Updated 2 years ago
- Course Project for COMP4471 on RWKV ☆17 · Updated last year
- Fine-tuning RWKV-World model ☆26 · Updated 2 years ago
- ☆153 · Updated 2 weeks ago
- Run ONNX RWKV-v4 models with GPU acceleration using DirectML [Windows], or just on CPU [Windows AND Linux]; Limited to 430M model at this… ☆21 · Updated 2 years ago
- Small and Efficient Mathematical Reasoning LLMs ☆72 · Updated last year
- Interpretability analysis of language model outliers and attempts to distill the model ☆13 · Updated 2 years ago
- RWKV in nanoGPT style ☆195 · Updated last year