open-compass / BotChat
Evaluating LLMs' multi-round chatting capability by assessing conversations generated by two LLM instances.
☆155 · Updated 4 months ago
Alternatives and similar repositories for BotChat
Users interested in BotChat are comparing it to the libraries listed below.
- Generative Judge for Evaluating Alignment ☆246 · Updated last year
- ☆147 · Updated last year
- InsTag: A Tool for Data Analysis in LLM Supervised Fine-tuning ☆275 · Updated 2 years ago
- ☆306 · Updated last year
- [ACL 2024] T-Eval: Evaluating Tool Utilization Capability of Large Language Models Step by Step ☆290 · Updated last year
- [ACL 2024] LLM2LLM: Boosting LLMs with Novel Iterative Data Enhancement ☆190 · Updated last year
- A large-scale, fine-grained, diverse preference dataset (and models). ☆353 · Updated last year
- 🐋 An unofficial implementation of Self-Alignment with Instruction Backtranslation. ☆139 · Updated 5 months ago
- ☆327 · Updated last year
- [EMNLP 2023 Demo] "CLEVA: Chinese Language Models EVAluation Platform" ☆62 · Updated 4 months ago
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ☆256 · Updated 9 months ago
- Counting-Stars (★) ☆83 · Updated 4 months ago
- [COLING 2025] ToolEyes: Fine-Grained Evaluation for Tool Learning Capabilities of Large Language Models in Real-world Scenarios ☆69 · Updated 4 months ago
- Benchmarking Complex Instruction-Following with Multiple Constraints Composition (NeurIPS 2024 Datasets and Benchmarks Track) ☆95 · Updated 7 months ago
- Data and code for paper "M3Exam: A Multilingual, Multimodal, Multilevel Benchmark for Examining Large Language Models" ☆102 · Updated 2 years ago
- ☆140 · Updated 2 years ago
- LongQLoRA: Extend Context Length of LLMs Efficiently ☆166 · Updated last year
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆571 · Updated 10 months ago
- ☆127 · Updated 2 years ago
- [NAACL'24] Self-data filtering of LLM instruction-tuning data using a novel perplexity-based difficulty score, without using any other mo… ☆396 · Updated 3 months ago
- [ACL 2024 Demo] Official GitHub repo for UltraEval: An open source framework for evaluating foundation models. ☆250 · Updated 11 months ago
- A self-alignment method for role-play. Benchmark for role-play. Resources for "Large Language Models are Superpositions of All Characters… ☆203 · Updated last year
- FireAct: Toward Language Agent Fine-tuning ☆281 · Updated last year
- ☆231 · Updated last year
- Dataset and evaluation script for "Evaluating Hallucinations in Chinese Large Language Models" ☆135 · Updated last year
- ToolQA, a new dataset to evaluate the capabilities of LLMs in answering challenging questions with external tools. It offers two levels … ☆278 · Updated 2 years ago
- ☆96 · Updated 2 years ago
- ☆49 · Updated last year
- Unofficial implementation of AlpaGasus ☆93 · Updated 2 years ago
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ☆178 · Updated 3 months ago