SynthiaDL / TrainChatGalRWKV
☆43 · Updated 8 months ago
Related projects
Alternatives and complementary repositories for TrainChatGalRWKV
- A project for real-time training of the RWKV model. ☆50 · Updated 6 months ago
- ☆84 · Updated last week
- Awesome RWKV Prompts: user-friendly, ready-to-use prompt examples for general users. ☆30 · Updated 3 months ago
- Role-playing based on the RWKV model; essentially a fork of RWKV_Role_Playing modified beyond recognition. ☆16 · Updated last year
- ☆81 · Updated 6 months ago
- Simpler fine-tuning, with convenient scripts and fine-tuning instructions. ☆31 · Updated last month
- ☆72 · Updated last year
- A Flask-based API for the RWKV_Role_Playing project. ☆30 · Updated 4 months ago
- A multimodal image-text dialogue LLM implementing Blip2RWKV + QFormer. Using the Two-Step Cognitive Psychology Prompt method, a model of only 3B parameters can exhibit human-like causal chains of thought. Benchmarked against image-text dialogue LLMs such as MiniGPT-4 and ImageBind, aiming for comparable results with less compute and fewer res… ☆37 · Updated last year
- ☆12 · Updated 3 months ago
- RWKV fine-tuning ☆36 · Updated 6 months ago
- A Gradio-based web UI for RWKV role-playing. ☆229 · Updated last week
- RAG system for RWKV ☆36 · Updated last week
- A QQ chatbot based on RWKV (W.I.P.) ☆78 · Updated last year
- A simple API, written largely by ChatGPT, for accessing ChatRWKV via simple requests. ☆15 · Updated last year
- RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond! ☆133 · Updated 3 months ago
- ✅ Works on a 4 GB GPU | A simple implementation of ChatGLM inference on a single machine across multiple compute devices (GPU, CPU). ☆34 · Updated last year
- Fine-tuning the RWKV-World model ☆25 · Updated last year
- A Chinese corpus of anime-style characters. ☆37 · Updated last year
- Intuitive LLM application software: 机体. ☆145 · Updated this week
- Fastllm-based chatbot ☆11 · Updated last year
- AI agent for short story writing ☆43 · Updated 10 months ago
- A plant and flower dataset [PlantFlower Datasets] for the RWKV World large model. ☆10 · Updated last year
- Reinforcement learning toolkit for RWKV: distillation, SFT, RLHF (DPO, ORPO), infinite-context training, alignment. Let's boost the model's int… ☆19 · Updated this week
- Combining ChatGLM, VITS, and pycqhttp for local deployment of QQ chatbots. ☆39 · Updated last year
- ☆21 · Updated last year
- A GPT-SoVITS inference demo that automatically switches reference audio based on Chinese text sentiment analysis. ☆77 · Updated 8 months ago
- RWKV centralised docs for the community ☆19 · Updated 2 months ago
- AI吟美零式 ☆63 · Updated 7 months ago
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best … ☆407 · Updated last year