open-compass / BotChat
Evaluating LLMs' multi-round chatting capability via assessing conversations generated by two LLM instances.
☆160 · Updated 7 months ago
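As the description above notes, BotChat evaluates multi-round chat ability by having two instances of the same LLM extend a dialogue turn by turn, then assessing the generated transcripts. Below is a minimal sketch of that two-instance self-chat loop. It is a hypothetical illustration, not BotChat's actual API: the `speak` callback and `self_chat` helper are stand-ins for whatever chat-completion call and driver you use, and the downstream assessment of the transcript is a separate step not shown here.

```python
# Minimal sketch (hypothetical, not BotChat's actual API) of the
# two-instance self-chat protocol: one chat model plays both speakers,
# extending a dialogue seeded with the opening turns of a real conversation.

from typing import Callable, Dict, List

Message = Dict[str, str]  # {"role": "user" | "assistant", "content": ...}


def self_chat(
    speak: Callable[[List[Message]], str],  # stand-in for a chat-completion call
    seed_utterances: List[str],             # opening turns from a real dialogue
    num_rounds: int = 8,
) -> List[str]:
    """Generate a multi-round conversation between two instances of one model."""
    transcript = list(seed_utterances)
    for _ in range(num_rounds):
        # From the next speaker's point of view, the most recent utterance
        # is the "user" turn, and earlier turns alternate in role.
        history: List[Message] = []
        for offset, utterance in enumerate(reversed(transcript)):
            role = "user" if offset % 2 == 0 else "assistant"
            history.append({"role": role, "content": utterance})
        history.reverse()  # back to chronological order
        transcript.append(speak(history))
    return transcript


if __name__ == "__main__":
    # Dummy speaker for demonstration; swap in a real model call.
    def echo(history: List[Message]) -> str:
        return f"(reply to: {history[-1]['content']})"

    for turn in self_chat(echo, ["Hi!", "Hello! How are you?"], num_rounds=4):
        print(turn)
```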
Alternatives and similar repositories for BotChat
Users interested in BotChat are comparing it to the libraries listed below:
- Generative Judge for Evaluating Alignment ☆249 · Updated 2 years ago
- [ACL 2024] T-Eval: Evaluating Tool Utilization Capability of Large Language Models Step by Step ☆302 · Updated last year
- [ACL 2024] LLM2LLM: Boosting LLMs with Novel Iterative Data Enhancement ☆193 · Updated last year
- ☆320 · Updated last year
- InsTag: A Tool for Data Analysis in LLM Supervised Fine-tuning ☆284 · Updated 2 years ago
- ☆333 · Updated last year
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ☆258 · Updated last year
- 🐋 An unofficial implementation of Self-Alignment with Instruction Backtranslation. ☆137 · Updated 8 months ago
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆580 · Updated last year
- [COLING 2025] ToolEyes: Fine-Grained Evaluation for Tool Learning Capabilities of Large Language Models in Real-world Scenarios ☆73 · Updated 8 months ago
- ☆147 · Updated last year
- [EMNLP 2023 Demo] "CLEVA: Chinese Language Models EVAluation Platform" ☆63 · Updated 8 months ago
- A large-scale, fine-grained, diverse preference dataset (and models). ☆360 · Updated 2 years ago
- FireAct: Toward Language Agent Fine-tuning ☆291 · Updated 2 years ago
- [ACL 2024 Demo] Official GitHub repo for UltraEval: an open-source framework for evaluating foundation models. ☆253 · Updated last year
- Data and code for the paper "M3Exam: A Multilingual, Multimodal, Multilevel Benchmark for Examining Large Language Models" ☆103 · Updated 2 years ago
- Unofficial implementation of AlpaGasus ☆94 · Updated 2 years ago
- [ACL 2024] LooGLE: Long Context Evaluation for Long-Context Language Models ☆194 · Updated last year
- Benchmarking Complex Instruction-Following with Multiple Constraints Composition (NeurIPS 2024 Datasets and Benchmarks Track) ☆99 · Updated 11 months ago
- ToolQA, a new dataset to evaluate the capabilities of LLMs in answering challenging questions with external tools. It offers two levels … ☆284 · Updated 2 years ago
- A self-alignment method for role-play. Benchmark for role-play. Resources for "Large Language Models are Superpositions of All Characters… ☆210 · Updated last year
- ☆235 · Updated last year
- [NAACL'24] Self-data filtering of LLM instruction-tuning data using a novel perplexity-based difficulty score, without using any other mo… ☆412 · Updated 6 months ago
- ☆143 · Updated 2 years ago
- ☆320 · Updated last year
- Code and data for the paper "Scaling Relationship on Learning Mathematical Reasoning with Large Language Models" ☆269 · Updated last year
- A repository sharing the literature on long-context large language models, including methodologies and evaluation benchmarks ☆272 · Updated last year
- ☆129 · Updated 2 years ago
- SuperCLUE-Agent: A benchmark for evaluating core agent capabilities of LLMs on native Chinese tasks ☆94 · Updated 2 years ago
- [ICML 2025] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale ☆264 · Updated 6 months ago