YFHuangxxxx / CBBQ
☆28 · Updated 2 years ago
Alternatives and similar repositories for CBBQ
Users interested in CBBQ are comparing it to the repositories listed below.
- ☆84 · Updated 9 months ago
- Dataset and evaluation script for "Evaluating Hallucinations in Chinese Large Language Models" ☆135 · Updated last year
- This repo is for the paper: On the Safety of Conversational Models: Taxonomy, Dataset, and Benchmark ☆25 · Updated 3 years ago
- ☆26 · Updated 2 years ago
- Code and data for the paper "Can Large Language Models Understand Real-World Complex Instructions?" (AAAI 2024) ☆49 · Updated last year
- [EMNLP 2023] Explore-Instruct: Enhancing Domain-Specific Instruction Coverage through Active Exploration ☆36 · Updated last year
- ☆14 · Updated last year
- ☆75 · Updated last year
- A package to evaluate factuality of long-form generation. Original implementation of our EMNLP 2023 paper "FActScore: Fine-grained Atomic… ☆386 · Updated 6 months ago
- [ACL 2024] FollowBench: A Multi-level Fine-grained Constraints Following Benchmark for Large Language Models ☆114 · Updated 4 months ago
- Personality Alignment of Language Models ☆47 · Updated 3 months ago
- Collection of papers for scalable automated alignment. ☆93 · Updated 11 months ago
- Code and results of the paper: On the Reliability of Psychological Scales on Large Language Models ☆30 · Updated last year
- ☆47 · Updated last year
- Source code of the paper "GPTScore: Evaluate as You Desire" ☆257 · Updated 2 years ago
- BeaverTails is a collection of datasets designed to facilitate research on safety alignment in large language models (LLMs). ☆160 · Updated last year
- ☆56 · Updated last year
- NLPCC 2025 Shared Task 1: LLM-Generated Text Detection ☆16 · Updated 4 months ago
- Do Large Language Models Know What They Don't Know? ☆99 · Updated 11 months ago
- Repository for the Bias Benchmark for QA dataset. ☆128 · Updated last year
- [EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" ☆138 · Updated last year
- Flames is a highly adversarial Chinese benchmark for evaluating the harmlessness of LLMs, developed by Shanghai AI Lab and the Fudan NLP Group. ☆60 · Updated last year
- ☆30 · Updated last year
- Implementation of "The Power of Scale for Parameter-Efficient Prompt Tuning" ☆58 · Updated 3 years ago
- ☆20 · Updated last year
- ☆33 · Updated last year
- OMGEval😮: An Open Multilingual Generative Evaluation Benchmark for Foundation Models ☆35 · Updated last year
- Self-adaptive in-context learning ☆45 · Updated 2 years ago
- The source code of the paper "CHEF: A Pilot Chinese Dataset for Evidence-Based Fact-Checking" ☆79 · Updated 2 years ago
- SeqXGPT: An advanced method for sentence-level AI-generated text detection. ☆93 · Updated last year