ms-KuroNeko / RWKV-Drama
Role-playing based on the RWKV model; in practice a fork of RWKV_Role_Playing modified beyond recognition.
☆17 · Updated 2 years ago
Alternatives and similar repositories for RWKV-Drama
Users interested in RWKV-Drama are comparing it to the repositories listed below.
- This project was created for real-time training of the RWKV model.☆49 · Updated last year
- ☆41 · Updated last year
- ☆13 · Updated 10 months ago
- ☆81 · Updated last year
- Fine-tuning the RWKV-World model☆26 · Updated 2 years ago
- RWKV fine-tuning☆37 · Updated last year
- Reinforcement Learning Toolkit for RWKV (v6, v7, ARWKV): distillation, SFT, RLHF (DPO, ORPO), infinite-context training, alignment. Exploring the…☆54 · Updated last month
- ☆150 · Updated last week
- Awesome RWKV Prompts: user-friendly, ready-to-use prompt examples for all users.☆36 · Updated 9 months ago
- RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond!☆147 · Updated last year
- A Flask-based API for the RWKV_Role_Playing project.☆31 · Updated last year
- A multimodal image-text dialogue LLM implementing Blip2RWKV + QFormer. Using a Two-Step Cognitive Psychology Prompt method, a model of only 3B parameters exhibits human-like causal chains of thought. Benchmarked against image-text dialogue LLMs such as MiniGPT-4 and ImageBind, it strives to achieve comparable results with less compute and res…☆40 · Updated 2 years ago
- ☆17 · Updated 10 months ago
- A converter and basic tester for RWKV ONNX☆42 · Updated last year
- ☆22 · Updated 2 years ago
- ChatGPT-like Web UI for RWKVstic☆19 · Updated 2 years ago
- A QQ chatbot based on RWKV (W.I.P.)☆79 · Updated last year
- ☆34 · Updated last year
- ☆13 · Updated 2 years ago
- ☆39 · Updated 6 months ago
- Our data-munging code.☆33 · Updated 3 weeks ago
- Instruct-tune LLaMA on consumer hardware☆72 · Updated 2 years ago
- Modified beam search with periodic restarts☆12 · Updated last year
- Thinker provides personalized advice using GPT, based on your unique context.☆26 · Updated 2 months ago
- Enhancing LangChain prompts to work better with RWKV models☆34 · Updated 2 years ago
- AI "Mafia" is coming! As night falls, 9 ChatGPT AI players each harbor their own sinister motives. Let's see who will have the last laugh…☆36 · Updated 2 years ago
- A fine-tuning pipeline for instruct-tuning Raven 14B using 4-bit QLoRA and the Ditty fine-tuning library☆28 · Updated last year
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best… (a minimal inference sketch follows this list)☆10 · Updated 2 years ago
- RAG system for RWKV☆51 · Updated 10 months ago
- Direct Preference Optimization for RWKV, aiming for RWKV-5 and 6 (the DPO objective is sketched below).☆11 · Updated last year
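
As referenced in the RWKV entry above, RWKV is an RNN that trains like a GPT. Below is a minimal inference sketch using the official `rwkv` pip package; the model path is a placeholder (you must download a checkpoint yourself), and exact path handling may vary between package versions. It is not code from any repository in this list.

```python
# pip install rwkv  (BlinkDL's official inference package)
# Minimal sketch; the model path below is a placeholder, not a real checkpoint.
import os
os.environ["RWKV_JIT_ON"] = "1"   # TorchScript JIT for faster inference
os.environ["RWKV_CUDA_ON"] = "0"  # "1" compiles the custom CUDA kernel (needs nvcc)

from rwkv.model import RWKV
from rwkv.utils import PIPELINE

# The strategy string selects device and precision; "cpu fp32" is the safest default.
model = RWKV(model="path/to/RWKV-model", strategy="cpu fp32")

# "rwkv_vocab_v20230424" is the tokenizer name used by RWKV "World" checkpoints.
pipeline = PIPELINE(model, "rwkv_vocab_v20230424")

# Autoregressive generation: the RNN consumes the prompt token by token,
# then samples token_count continuation tokens.
print(pipeline.generate("Hello, my name is", token_count=64))
```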
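
For orientation on the last entry: Direct Preference Optimization (Rafailov et al., 2023) tunes a policy directly on preference pairs, with no separate reward model. The standard objective (the general formulation, not code from that repository) is:

```latex
\mathcal{L}_{\mathrm{DPO}}(\theta)
  = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}
    \left[\log \sigma\!\left(
      \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
      - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
    \right)\right]
```

Here π_θ is the model being tuned, π_ref a frozen reference copy, (y_w, y_l) the preferred and rejected completions for prompt x, and β controls how far the policy may drift from the reference.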