☆148 · Updated Jul 1, 2024
Alternatives and similar repositories for CritiqueLLM
Users interested in CritiqueLLM are comparing it to the repositories listed below.
- Multi-dimensional Chinese alignment evaluation benchmark for LLMs (ACL 2024) ☆421 · Updated Oct 25, 2025
- Generative Judge for Evaluating Alignment ☆250 · Updated Jan 18, 2024
- ☆148 · Updated Apr 16, 2024
- Deita: Data-Efficient Instruction Tuning for Alignment (ICLR 2024) ☆589 · Updated Dec 9, 2024
- Evaluate the Quality of Critique ☆36 · Updated Jun 1, 2024
- Codes and Data for Scaling Relationship on Learning Mathematical Reasoning with Large Language Models ☆270 · Updated Sep 12, 2024
- [NAACL'24] Self-data filtering of LLM instruction-tuning data using a novel perplexity-based difficulty score, without using any other mo… ☆416 · Updated Jun 25, 2025
- InsTag: A Tool for Data Analysis in LLM Supervised Fine-tuning ☆285 · Updated Aug 20, 2023
- ☆60 · Updated Aug 22, 2024
- A self-alignment method for role-play. Benchmark for role-play. Resources for "Large Language Models are Superpositions of All Characters… ☆211 · Updated May 28, 2024
- The official repository for "Safer-Instruct: Aligning Language Models with Automated Preference Data" ☆17 · Updated Feb 22, 2024
- ☆83 · Updated Apr 18, 2024
- The first round of Chinese large language model evaluation ☆113 · Updated Oct 23, 2023
- ☆325 · Updated Jul 25, 2024
- Easy Data Augmentation for NLP on Chinese ☆16 · Updated Aug 3, 2019
- ☆185 · Updated Nov 13, 2023
- Understanding the correlation between different LLM benchmarks ☆29 · Updated Jan 11, 2024
- ☆10 · Updated Mar 18, 2024
- ☆922 · Updated May 22, 2024
- A simple implementation of ReasonGenRM ☆19 · Updated Apr 21, 2025
- [ICLR 2025 Spotlight] An open-source LLM judge for evaluating LLM-generated answers ☆420 · Updated Feb 11, 2025
- ☆37 · Updated May 7, 2023
- A large-scale, fine-grained, diverse preference dataset (and models) ☆364 · Updated Dec 29, 2023
- Chinese safety prompts for evaluating and improving the safety of LLMs ☆1,132 · Updated Feb 27, 2024
- [ACL 2024 Findings] MathBench: A Comprehensive Multi-Level Difficulty Mathematics Evaluation Dataset ☆111 · Updated May 22, 2025
- ☆109 · Updated Jul 15, 2025
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆96 · Updated Aug 20, 2024
- [ACL 2024] FollowBench: A Multi-level Fine-grained Constraints Following Benchmark for Large Language Models ☆119 · Updated Jun 12, 2025
- The official repository for our EMNLP 2024 paper, Themis: A Reference-free NLG Evaluation Language Model with Flexibility and Interpretab… ☆20 · Updated Feb 23, 2025
- A collection of useful Git repos ☆12 · Updated Jul 28, 2024
- An implementation of the MSSRM method ☆11 · Updated Mar 23, 2023
- Secrets of RLHF in Large Language Models Part I: PPO ☆1,416 · Updated Mar 3, 2024
- LongBench v2 and LongBench (ACL '25 & '24) ☆1,101 · Updated Jan 15, 2025
- Yuan 2.0 Large Language Model ☆689 · Updated Jul 11, 2024
- Benchmarking Complex Instruction-Following with Multiple Constraints Composition (NeurIPS 2024 Datasets and Benchmarks Track) ☆102 · Updated Feb 20, 2025
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆137 · Updated Jul 8, 2024
- ☆64 · Updated Apr 9, 2024
- Xwin-LM: Powerful, Stable, and Reproducible LLM Alignment ☆1,036 · Updated May 31, 2024
- Dataset and evaluation script for "Evaluating Hallucinations in Chinese Large Language Models" ☆136 · Updated Jun 5, 2024