Durham / RWKV-finetune-script
Script and instructions for fine-tuning a large RWKV model on your own data in the Alpaca dataset format (a minimal sketch of that format follows below).
☆31 · Updated last year
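For context, Alpaca-style training data is conventionally a JSON array of instruction/input/output records. The field names below follow the original Stanford Alpaca release; this repository's script may expect minor variations. A minimal sketch of preparing such a file:

```python
import json

# Assumed Alpaca schema: one instruction/input/output triple per record.
# Field names follow the Stanford Alpaca release, not necessarily this
# repository's exact loader.
records = [
    {
        "instruction": "Summarize the following paragraph.",
        "input": "RWKV is an RNN with transformer-level LLM performance...",
        "output": "RWKV pairs RNN-style inference with GPT-style parallel training.",
    }
]

with open("my_alpaca_data.json", "w", encoding="utf-8") as f:
    json.dump(records, f, ensure_ascii=False, indent=2)
```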
Alternatives and similar repositories for RWKV-finetune-script:
Users interested in RWKV-finetune-script are comparing it to the repositories listed below.
- ☆82 · Updated 10 months ago
- Fine-tuning the RWKV-World model · ☆25 · Updated last year
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best … · ☆10 · Updated last year
- 📖 Notebooks related to RWKV · ☆59 · Updated last year
- A fine-tuning pipeline for instruction-tuning Raven 14B using 4-bit QLoRA and the Ditty fine-tuning library · ☆28 · Updated 9 months ago
- A project for real-time training of the RWKV model · ☆49 · Updated 10 months ago
- RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond! · ☆148 · Updated 7 months ago
- Centralised RWKV docs for the community · ☆21 · Updated 2 weeks ago
- ☆110 · Updated last week
- Reinforcement learning toolkit for RWKV (v6, v7, ARWKV): distillation, SFT, RLHF (DPO, ORPO), infinite-context training, alignment. Exploring the… · ☆35 · Updated last week
- A converter and basic tester for RWKV ONNX · ☆42 · Updated last year
- ☆10 · Updated last year
- Let us make Psychohistory (as in Asimov) a reality, and accessible to everyone. Useful for LLM grounding and games / fiction / business /… · ☆40 · Updated last year
- Framework-agnostic Python runtime for RWKV models · ☆145 · Updated last year
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best … · ☆412 · Updated last year
- Easily deploy your RWKV model · ☆18 · Updated last year
- SparseGPT + GPTQ compression of LLMs like LLaMA, OPT, and Pythia · ☆41 · Updated 2 years ago
- ☆42 · Updated last year
- RWKV fine-tuning · ☆36 · Updated 11 months ago
- Enhancing LangChain prompts to work better with RWKV models · ☆34 · Updated last year
- ☆34 · Updated 8 months ago
- RWKV-based role-playing; in practice a fork of RWKV_Role_Playing modified beyond recognition · ☆16 · Updated last year
- Large-scale RWKV v6/v7 (World, ARWKV, PRWKV) inference, capable of combining multiple states (pseudo-MoE). Easy to deploy o… · ☆32 · Updated last week
- Instruct-tune LLaMA on consumer hardware · ☆73 · Updated last year
- RWKV v5/v6 LoRA trainer on the CUDA and ROCm platforms. RWKV is an RNN with transformer-level LLM performance. It can be directly trained like … · ☆12 · Updated last year
- Run ONNX RWKV-v4 models with GPU acceleration using DirectML [Windows], or just on CPU [Windows AND Linux]; limited to the 430M model at this… · ☆20 · Updated 2 years ago
- Evaluating LLMs with Dynamic Data · ☆78 · Updated last month
- Spherical merging of PyTorch/HF-format language models with minimal feature loss (a SLERP sketch follows after this list) · ☆117 · Updated last year
- Flask server for RWKV · ☆10 · Updated last year
- Merge LLMs that are split into parts · ☆26 · Updated last year
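As an aside on the spherical-merge entry above: merging by spherical linear interpolation (SLERP) walks the arc between two weight vectors instead of the straight line, which tends to preserve vector norms better than plain averaging. A minimal, hypothetical PyTorch sketch of the idea, not the linked repository's actual code:

```python
import torch

def slerp(w0: torch.Tensor, w1: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate between two same-shape weight tensors at ratio t."""
    v0, v1 = w0.flatten().float(), w1.flatten().float()
    # Angle between the two flattened weight vectors.
    cos_theta = torch.dot(v0, v1) / (v0.norm() * v1.norm() + eps)
    theta = torch.acos(cos_theta.clamp(-1.0, 1.0))
    if theta < eps:
        # Nearly parallel vectors: fall back to ordinary linear interpolation.
        return (1 - t) * w0 + t * w1
    sin_theta = torch.sin(theta)
    out = (torch.sin((1 - t) * theta) * v0 + torch.sin(t * theta) * v1) / sin_theta
    return out.view_as(w0).to(w0.dtype)

# Usage: merge two checkpoints halfway, tensor by tensor.
# merged = {k: slerp(sd_a[k], sd_b[k], t=0.5) for k in sd_a}
```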