LC1332 / Zero-Haruhi
A plan to extend ChatHaruhi into a zero-shot roleplaying model
☆108Updated last year
Alternatives and similar repositories for Zero-Haruhi
Users interested in Zero-Haruhi are comparing it to the libraries listed below.
- [EMNLP'24] CharacterGLM: Customizing Chinese Conversational AI Characters with Large Language Models☆475Updated 7 months ago
- Just for debug☆56Updated last year
- RoleLLM: Benchmarking, Eliciting, and Enhancing Role-Playing Abilities of Large Language Models☆503Updated 10 months ago
- Uses langchain for task planning, building conversational scene resources for each subtask; an MCTS task executor lets each subtask draw on in-context resources and self-reflective exploration to find its optimal answer. This approach relies on the model's alignment preferences; an engineering framework is designed for each preference to implement a self-reward sampling strategy over candidate answers.☆29Updated last month
- 骆驼大乱斗: Massive Game Content Generated by LLM☆19Updated last year
- The official repo of Aquila2 series proposed by BAAI, including pretrained & chat large language models.☆444Updated 10 months ago
- ☆235Updated last year
- Imitate OpenAI with Local Models☆89Updated 11 months ago
- ChatGLM fine-tuned on a Zhen Huan (甄嬛) dialogue corpus☆86Updated 2 years ago
- Generate multi-round conversation roleplay data based on self-instruct and evol-instruct.☆134Updated 7 months ago
- 360zhinao☆291Updated 3 months ago
- ☆260Updated 2 months ago
- The first Chinese llama2 13b model (Base + Chinese dialogue SFT, enabling fluent multi-turn human-machine natural-language interaction)☆91Updated 2 years ago
- A high-throughput and memory-efficient inference and serving engine for LLMs☆136Updated 8 months ago
- Alpaca Chinese Dataset -- a Chinese instruction fine-tuning dataset☆213Updated 10 months ago
- ☆237Updated 6 months ago
- A tool for manually annotating and ranking response data in the RLHF stage☆253Updated 2 years ago
- A collection of currently available open-source Chinese dialogue datasets☆173Updated 2 years ago
- ☆151Updated last year
- XVERSE-65B: A multilingual large language model developed by XVERSE Technology Inc.☆141Updated last year
- Extracting dialogue datasets from novels☆235Updated last year
- A Multi-modal RAG Project with Dataset from Honor of Kings, one of the most popular smart phone games in China☆67Updated 11 months ago
- PICA: a multi-turn empathetic dialogue model☆97Updated last year
- Comprehensive Chinese evaluation of open-source Llama2 models on the SuperCLUE OPEN benchmark☆127Updated 2 years ago
- The official codes for "Aurora: Activating chinese chat capability for Mixtral-8x7B sparse Mixture-of-Experts through Instruction-Tuning"☆264Updated last year
- ☆231Updated last year
- An open-source LLM based on an MoE structure☆58Updated last year
- Fine-tuning of Qwen models☆103Updated 5 months ago
- Aims to provide an intuitive, concrete, and standardized evaluation of current mainstream LLMs☆95Updated 2 years ago
- A roleplaying multi-LLM chatroom fine-tuned with InternLM2, built from the original text of Journey to the West (《西游记》), its vernacular translation, and ChatGPT-generated data. This project covers everything about roleplaying LLMs, from data acquisition and processing, to fine-tuning with XTuner and deploying to OpenXLab, to deployment with LMDeploy, with op…☆103Updated last year