jiamingkong / rwkv_reward
Training a reward model for RLHF using RWKV.
☆15 · Updated 2 years ago
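For orientation, the description above names the standard RLHF reward-model recipe: put a scalar scoring head on top of a language model and train it on human preference pairs. Below is a minimal PyTorch sketch of that generic pattern, assuming an RWKV-style backbone that yields one hidden vector per sequence; `RewardHead` and `pairwise_preference_loss` are hypothetical names for illustration, not this repository's actual API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardHead(nn.Module):
    """Scalar reward head over a backbone's final hidden state.

    Hypothetical sketch: this page does not show rwkv_reward's code,
    so this is only the generic RLHF reward-model pattern.
    """
    def __init__(self, hidden_size: int):
        super().__init__()
        self.score = nn.Linear(hidden_size, 1, bias=False)

    def forward(self, last_hidden: torch.Tensor) -> torch.Tensor:
        # last_hidden: (batch, hidden_size), e.g. the RWKV hidden state
        # at the final token of each prompt+completion sequence.
        return self.score(last_hidden).squeeze(-1)

def pairwise_preference_loss(r_chosen, r_rejected):
    # Bradley-Terry loss used in most RLHF reward training: push the
    # reward of the human-preferred completion above the rejected one.
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# Toy usage with random features standing in for RWKV hidden states.
head = RewardHead(hidden_size=768)
h_chosen, h_rejected = torch.randn(4, 768), torch.randn(4, 768)
loss = pairwise_preference_loss(head(h_chosen), head(h_rejected))
loss.backward()
```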
Alternatives and similar repositories for rwkv_reward
Users interested in rwkv_reward are comparing it to the repositories listed below.
- BlinkDL's RWKV-v4 running in the browser ☆47 · Updated 2 years ago
- ☆82 · Updated last year
- A project for real-time training of the RWKV model. ☆50 · Updated last year
- JAX implementations of RWKV ☆19 · Updated last year
- Trying to deconstruct RWKV in understandable terms ☆14 · Updated 2 years ago
- RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond! ☆148 · Updated 11 months ago
- RWKV centralised docs for the community ☆28 · Updated 3 weeks ago
- An unsupervised model merging algorithm for Transformers-based language models. ☆106 · Updated last year
- ☆38 · Updated 3 months ago
- A converter and basic tester for rwkv onnx ☆42 · Updated last year
- Merge Transformers language models by use of gradient parameters. ☆206 · Updated 11 months ago
- Let us make Psychohistory (as in Asimov) a reality, and accessible to everyone. Useful for LLM grounding and games / fiction / business /… ☆40 · Updated 2 years ago
- ☆76 · Updated last year
- Fine-tuning RWKV-World model ☆25 · Updated 2 years ago
- An all-new Language Model That Processes Ultra-Long Sequences of 100,000+ Ultra-Fast ☆151 · Updated 11 months ago
- ☆40 · Updated 2 years ago
- GPT-2 small trained on phi-like data ☆67 · Updated last year
- ToK aka Tree of Knowledge for Large Language Models LLM. It's a novel dataset that inspires knowledge symbolic correlation in simple inpu… ☆53 · Updated 2 years ago
- SparseGPT + GPTQ Compression of LLMs like LLaMa, OPT, Pythia ☆41 · Updated 2 years ago
- Interpretability analysis of language model outliers and attempts to distill the model ☆13 · Updated 2 years ago
- ☆13 · Updated 2 years ago
- Tune MPTs ☆84 · Updated 2 years ago
- GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ ☆103 · Updated 2 years ago
- Course Project for COMP4471 on RWKV ☆17 · Updated last year
- ☆139 · Updated last month
- Run ONNX RWKV-v4 models with GPU acceleration using DirectML [Windows], or just on CPU [Windows AND Linux]; Limited to 430M model at this… ☆21 · Updated 2 years ago
- Demonstration that finetuning a RoPE model on longer sequences than it was pre-trained on extends the model's context limit ☆63 · Updated 2 years ago
- Evaluating LLMs with Dynamic Data ☆91 · Updated 2 weeks ago
- Patch for MPT-7B which allows using and training a LoRA ☆58 · Updated 2 years ago
- Framework-agnostic Python runtime for RWKV models ☆146 · Updated last year