boson-ai / RPBench-Auto
An automated pipeline for evaluating LLMs for role-playing.
☆161 · Updated 5 months ago
Alternatives and similar repositories for RPBench-Auto:
Users interested in RPBench-Auto are comparing it to the libraries listed below.
- Related works and background techniques on OpenAI o1 ☆216 · Updated 2 months ago
- A visualization tool for deeper understanding and easier debugging of RLHF training. ☆164 · Updated 3 weeks ago
- Official repo for "Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale" ☆226 · Updated 3 weeks ago
- A repository sharing the literature on long-context large language models, including methodologies and evaluation benchmarks ☆259 · Updated 7 months ago
- Awesome papers for role-playing with language models ☆170 · Updated 4 months ago
- InsTag: A Tool for Data Analysis in LLM Supervised Fine-tuning ☆243 · Updated last year
- Repo for Benchmarking Multimodal Retrieval Augmented Generation with Dynamic VQA Dataset and Self-adaptive Planning Agent ☆252 · Updated last month
- Code and data for "Scaling Relationship on Learning Mathematical Reasoning with Large Language Models" ☆246 · Updated 6 months ago
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ☆241 · Updated 2 months ago
- [ACL 2024] T-Eval: Evaluating Tool Utilization Capability of Large Language Models Step by Step ☆261 · Updated 11 months ago
- Evaluating LLMs' multi-round chat capability by assessing conversations generated between two LLM instances. ☆145 · Updated last year
- A series of technical reports on Slow Thinking with LLMs ☆536 · Updated this week
- CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models ☆40 · Updated last year
- [NAACL'24] Self-data filtering of LLM instruction-tuning data using a novel perplexity-based difficulty score, without using any other models ☆342 · Updated 6 months ago
- [ACL 2024 Demo] Official GitHub repo for UltraEval: An open-source framework for evaluating foundation models. ☆233 · Updated 4 months ago
- A real-time updated, fine-grained reading list on LLM synthetic data. 🔥 ☆233 · Updated last month
- Code for "Scaling Laws of RoPE-based Extrapolation" ☆70 · Updated last year