jiamingkong / rwkv_reward
Training a reward model for RLHF using RWKV.
☆15 · Updated 2 years ago
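For context on what a "reward model for RLHF" optimizes: such models are commonly trained with a Bradley-Terry pairwise preference loss over chosen/rejected response pairs. The sketch below is a generic illustration of that loss, not code from rwkv_reward (whose implementation is not shown here); the function names are hypothetical.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def pairwise_reward_loss(r_chosen, r_rejected):
    """Bradley-Terry pairwise loss: -log(sigmoid(r_chosen - r_rejected)).

    Minimizing this pushes the scalar reward of the preferred response
    above that of the rejected one; it is the standard objective for
    RLHF reward models (illustrative sketch, not the repo's actual code).
    """
    return -math.log(sigmoid(r_chosen - r_rejected))
```

With equal scores the loss is log(2) ≈ 0.693, and it shrinks toward zero as the margin between chosen and rejected scores grows.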
Alternatives and similar repositories for rwkv_reward
Users interested in rwkv_reward are comparing it to the repositories listed below.
- BlinkDL's RWKV-v4 running in the browser ☆47 · Updated 2 years ago
- A project for real-time training of the RWKV model. ☆49 · Updated last year
- Trying to deconstruct RWKV in understandable terms ☆14 · Updated 2 years ago
- JAX implementations of RWKV ☆19 · Updated 2 years ago
- ☆81 · Updated last year
- ToK, aka Tree of Knowledge, for Large Language Models (LLMs). It's a novel dataset that inspires knowledge symbolic correlation in simple inpu… ☆54 · Updated 2 years ago
- Fine-tuning the RWKV-World model ☆26 · Updated 2 years ago
- RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond! ☆148 · Updated last year
- Let us make Psychohistory (as in Asimov) a reality, and accessible to everyone. Useful for LLM grounding and games / fiction / business /… ☆40 · Updated 2 years ago
- ☆40 · Updated 2 years ago
- A converter and basic tester for RWKV ONNX ☆43 · Updated last year
- Interpretability analysis of language model outliers and attempts to distill the model ☆13 · Updated 2 years ago
- Evaluating LLMs with Dynamic Data ☆95 · Updated 2 months ago
- Enhancing LangChain prompts to work better with RWKV models ☆34 · Updated 2 years ago
- Framework-agnostic Python runtime for RWKV models ☆146 · Updated 2 years ago
- SparseGPT + GPTQ compression of LLMs like LLaMA, OPT, and Pythia ☆41 · Updated 2 years ago
- ☆13 · Updated 2 years ago
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best … ☆10 · Updated last year
- Easily deploy your RWKV model ☆19 · Updated 2 years ago
- ☆147 · Updated last month
- Demonstration that fine-tuning a RoPE model on longer sequences than it was pre-trained on extends the model's context limit ☆63 · Updated 2 years ago
- TaCo: Enhancing Cross-Lingual Transfer for Low-Resource Languages in LLMs through Translation-Assisted Chain-of-Thought Processes ☆12 · Updated 3 months ago
- GoldFinch and other hybrid transformer components ☆45 · Updated last year
- ☆38 · Updated 5 months ago
- Script for processing OpenAI's PRM800K process-supervision dataset into an Alpaca-style instruction-response format ☆27 · Updated 2 years ago
- GoldFinch and other hybrid transformer components ☆12 · Updated 2 weeks ago
- Lightweight continuous batching with OpenAI compatibility using HuggingFace Transformers, including T5 and Whisper. ☆28 · Updated 6 months ago
- RWKV-7 mini ☆11 · Updated 6 months ago
- Instruct-tuning LLaMA on consumer hardware ☆66 · Updated 2 years ago
- Parameter-Efficient Sparsity Crafting: From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · Updated last year