morecry / CharacterEval
☆240 · Updated this week
Alternatives and similar repositories for CharacterEval
Users interested in CharacterEval are comparing it to the repositories listed below.
- RoleLLM: Benchmarking, Eliciting, and Enhancing Role-Playing Abilities of Large Language Models ☆497 · Updated 7 months ago
- Awesome papers for role-playing with language models ☆188 · Updated 6 months ago
- A self-alignment method for role-play. Benchmark for role-play. Resources for "Large Language Models are Superpositions of All Characters… ☆194 · Updated last year
- ☆169 · Updated last year
- [NAACL'24] Self-data filtering of LLM instruction-tuning data using a novel perplexity-based difficulty score, without using any other mo… ☆368 · Updated 8 months ago
- A multi-dimensional Chinese alignment evaluation benchmark for large language models (ACL 2024) ☆389 · Updated 9 months ago
- Dataset and evaluation script for "Evaluating Hallucinations in Chinese Large Language Models" ☆128 · Updated 11 months ago
- ☆162 · Updated 2 years ago
- RoleInteract: Evaluating the Social Interaction of Role-Playing Agents ☆55 · Updated 7 months ago
- A Chinese Open-Domain Dialogue System ☆321 · Updated last year
- ☆63 · Updated 2 years ago
- Generate multi-round conversation roleplay data based on self-instruct and evol-instruct. ☆127 · Updated 4 months ago
- ☆221 · Updated last year
- A Bilingual Role Evaluation Benchmark for Large Language Models ☆40 · Updated last year
- SuperCLUE-Agent: a benchmark for core agent capabilities on native Chinese tasks ☆88 · Updated last year
- ☆97 · Updated last year
- Just for debug ☆56 · Updated last year
- Naive Bayes-based Context Extension ☆325 · Updated 5 months ago
- ☆142 · Updated 11 months ago
- ☆320 · Updated 11 months ago
- A tool for manually ranking human-annotated response data in the RLHF stage of large language models ☆251 · Updated last year
- ☆128 · Updated 2 years ago
- Evaluating LLMs' multi-round chatting capability via assessing conversations generated by two LLM instances. ☆151 · Updated last week
- Chinese instruction-tuning datasets ☆131 · Updated last year
- Finetuning LLaMA with RLHF (Reinforcement Learning from Human Feedback) based on DeepSpeed Chat ☆115 · Updated last year
- ☆280 · Updated last year
- A Massive Multi-Level Multi-Subject Knowledge Evaluation benchmark ☆101 · Updated last year
- ☆228 · Updated last year
- Measuring Massive Multitask Chinese Understanding ☆87 · Updated last year
- ☆97 · Updated last year