vissurra / RolePlayGLM
Role-playing built on ChatGLM-6B, achieving instruction-tuning-like behavior at low cost
☆44 · Updated last year
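RolePlayGLM builds role-play behavior on top of ChatGLM-6B, so the basic interaction loop is the standard ChatGLM chat interface. The sketch below shows how such a role-play conversation is commonly driven through the Hugging Face `transformers` API; the persona text and the `build_prompt` helper are illustrative assumptions, and the project's own low-cost fine-tuning recipe is not reproduced here.

```python
# Minimal sketch (not the repository's own code) of role-play prompting with
# ChatGLM-6B through its Hugging Face `transformers` interface.
from transformers import AutoTokenizer, AutoModel

MODEL_ID = "THUDM/chatglm-6b"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModel.from_pretrained(MODEL_ID, trust_remote_code=True).half().cuda()
model = model.eval()

# Hypothetical persona instruction, prepended only to the first user turn.
PERSONA = "你现在扮演一位性格温柔的图书管理员,请始终以这个角色回复。"

def build_prompt(user_input: str, first_turn: bool) -> str:
    # Illustrative helper: carry the persona in the first turn only.
    return f"{PERSONA}\n{user_input}" if first_turn else user_input

history = []
for turn, user_input in enumerate(["你好,请介绍一下你自己。", "推荐一本书给我吧。"]):
    prompt = build_prompt(user_input, first_turn=(turn == 0))
    # model.chat returns the reply and the updated conversation history.
    response, history = model.chat(tokenizer, prompt, history=history)
    print(response)
```

The persona is injected once and then preserved implicitly through `history`; RolePlayGLM itself aims to make this role-play behavior robust through low-cost fine-tuning rather than relying on prompting alone.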
Alternatives and similar repositories for RolePlayGLM:
Users who are interested in RolePlayGLM are comparing it to the libraries listed below:
- Just for debug ☆56 · Updated last year
- chatglm fine-tuned on a Zhen Huan (甄嬛) dialogue corpus ☆85 · Updated 2 years ago
- A multimodal image-text dialogue LLM built on Blip2RWKV + QFormer. Using a Two-Step Cognitive Psychology Prompt method, a model of only 3B parameters can exhibit human-like causal chains of thought. Benchmarked against image-text dialogue LLMs such as MiniGPT-4 and ImageBind, it aims, with less compute and fewer resources, to… ☆38 · Updated last year
- Reproduction and translation of Stanford's Generative Agents work. An attempt to build a working, locally-running cheap version of Generative Agents: Interactive Simulacra of… ☆84 · Updated last year
- The Silk Magic Book records Magic Prompts for some very large LLMs. The Silk Magic Book belongs to the project Luotuo (骆驼), which c… ☆56 · Updated last year
- 骆驼大乱斗 (Luotuo Brawl): Massive Game Content Generated by LLM ☆19 · Updated last year
- A plan to extend ChatHaruhi into a zero-shot role-playing model ☆104 · Updated last year
- A first taste of LLMs ☆33 · Updated last year
- A Flask-based API for the RWKV_Role_Playing project. ☆30 · Updated 9 months ago
- ☆73 · Updated 2 years ago
- A cross-model scheme combining multi-LoRA weight ensembling/switching with Zero-Finetune enhancement: LLM-Base + LLM-X + Alpaca. Initially, LLM-Base is the Chatglm6B base model and LLM-X is a LLAMA enhancement model. The scheme is simple and efficient, aiming to let such language models be widely deployed at low power cost, and… ☆116 · Updated last year
- Humanable Chat Generative-model Fine-tuning | LLM fine-tuning ☆206 · Updated last year
- A Gradio-based web UI for RWKV role-playing ☆240 · Updated last month
- The world's first Chinese-optimized version of StableVicuna. ☆64 · Updated last year
- Kanchil (鼷鹿) is the smallest even-toed ungulate in the world; this open-source project explores whether small models (under 6B) can also be aligned with human preferences. ☆113 · Updated 2 years ago
- A project for one-click role-playing with small-parameter LLMs; both data construction and training are included in this project ☆23 · Updated last year
- deep learning ☆149 · Updated last month
- Zero-training LLM parameter tuning ☆31 · Updated last year
- The first Chinese llama2 13b model (Base + Chinese dialogue SFT, enabling fluent multi-turn human-machine natural-language interaction) ☆90 · Updated last year
- A dataset template for guiding chat models toward self-cognition, including information about the model's identity, capabilities, usage, limi… ☆27 · Updated last year
- ✅ Works on a 4 GB GPU | A simple implementation of single-machine ChatGLM inference across multiple compute devices (GPU, CPU) ☆34 · Updated 2 years ago
- 骆驼QA (Luotuo QA), a Chinese large language model for reading comprehension. ☆74 · Updated last year
- Uses langchain for task planning and builds conversational scene resources for each subtask; an MCTS task executor lets every subtask use the resources in its context and explore through self-reflection to find its best answer to the problem. This approach relies on the model's alignment preferences; for each preference, an engineering framework is designed to sample rewards over the different answers. ☆29 · Updated last week
- Shared data, prompt data, pretraining data ☆36 · Updated last year
- A simulation of the world using GPTs. (deprecated) ☆158 · Updated last year
- ChatGLM2-6B fine-tuning: SFT/LoRA, instruction fine-tuning (see the LoRA sketch after this list) ☆107 · Updated last year
- A tool for converting text corpora into training sets (txt to dataset) ☆91 · Updated 11 months ago
- AI Emoji Argue Agent 🚀 An open-source LangChain-based agent for meme/sticker battles ☆25 · Updated 10 months ago
- chatglm-6b fine-tuning / LoRA / PPO / inference; samples are auto-generated integer/decimal arithmetic (addition, subtraction, multiplication, division); runs on GPU or CPU ☆164 · Updated last year
- AI "Mafia" is coming! As night falls,9 ChatGPT AI players each harbor their own sinister motives. Let's see who will have the last laugh.…☆34Updated last year