boson-ai / RPBench-Auto
An automated pipeline for evaluating LLMs for role-playing.
☆198 · Updated 11 months ago
Alternatives and similar repositories for RPBench-Auto
Users that are interested in RPBench-Auto are comparing it to the libraries listed below
- Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement fine-tuning (RFT) of large language models (… ☆298 · Updated this week
- A visualization tool for deeper understanding and easier debugging of RLHF training. ☆246 · Updated 6 months ago
- ☆198 · Updated 4 months ago
- [ACL 2024] T-Eval: Evaluating Tool Utilization Capability of Large Language Models Step by Step ☆286 · Updated last year
- ☆197 · Updated last week
- Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning ☆190 · Updated 5 months ago
- Official Code for "Coser: Coordinating LLM-Based Persona Simulation of Established Roles" ☆126 · Updated 2 months ago
- ☆305 · Updated last year
- [ICML 2025] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale ☆258 · Updated last month
- ☆161 · Updated 4 months ago
- ☆158 · Updated 7 months ago
- A lightweight reproduction of DeepSeek-R1-Zero with in-depth analysis of self-reflection behavior. ☆245 · Updated 4 months ago
- InsTag: A Tool for Data Analysis in LLM Supervised Fine-tuning ☆269 · Updated 2 years ago
- a-m-team's exploration in large language modeling ☆184 · Updated 3 months ago
- ☆175 · Updated last year
- ☆109 · Updated last year
- ☆260 · Updated 3 months ago
- Related works and background techniques on OpenAI's o1 ☆224 · Updated 7 months ago
- OpenSeek aims to unite the global open source community to drive collaborative innovation in algorithms, data and systems to develop next… ☆222 · Updated this week
- A repository sharing the literature on long-context large language models, including methodologies and evaluation benchmarks ☆265 · Updated last year
- ☆546 · Updated 7 months ago
- A Comprehensive Survey on Long Context Language Modeling ☆180 · Updated last month
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ☆253 · Updated 8 months ago
- Pre-trained, Scalable, High-performance Reward Models via Policy Discriminative Learning. ☆150 · Updated last month
- Scaling Deep Research via Reinforcement Learning in Real-world Environments. ☆568 · Updated 4 months ago
- [ACL 2024 Demo] Official GitHub repo for UltraEval: An open source framework for evaluating foundation models. ☆247 · Updated 10 months ago
- CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models ☆42 · Updated last year
- ☆739 · Updated this week
- A self-alignment method for role-play. Benchmark for role-play. Resources for "Large Language Models are Superpositions of All Characters… ☆203 · Updated last year
- A flexible and efficient training framework for large-scale alignment tasks ☆415 · Updated this week