YFHuangxxxx / CBBQ
☆28 · Updated 2 years ago
Alternatives and similar repositories for CBBQ
Users who are interested in CBBQ are comparing it to the repositories listed below.
- ☆81 · Updated 8 months ago
- This repo is for the paper "On the Safety of Conversational Models: Taxonomy, Dataset, and Benchmark" ☆26 · Updated 3 years ago
- EMNLP 2023: Explore-Instruct: Enhancing Domain-Specific Instruction Coverage through Active Exploration ☆36 · Updated last year
- Code and results of the paper "On the Reliability of Psychological Scales on Large Language Models" ☆30 · Updated 11 months ago
- The source code of the paper "CHEF: A Pilot Chinese Dataset for Evidence-Based Fact-Checking" ☆78 · Updated 2 years ago
- ☆14 · Updated last year
- A package to evaluate factuality of long-form generation. Original implementation of our EMNLP 2023 paper "FActScore: Fine-grained Atomic… ☆376 · Updated 4 months ago
- NLPCC-2025 Shared Task 1: LLM-Generated Text Detection ☆15 · Updated 3 months ago
- ☆15 · Updated 2 years ago
- [EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" ☆134 · Updated 11 months ago
- Implementation of "The Power of Scale for Parameter-Efficient Prompt Tuning" ☆58 · Updated 3 years ago
- Do Large Language Models Know What They Don’t Know? ☆99 · Updated 9 months ago
- BeaverTails is a collection of datasets designed to facilitate research on safety alignment in large language models (LLMs). ☆154 · Updated last year
- ☆47 · Updated last year
- Source code of the paper "GPTScore: Evaluate as You Desire" ☆255 · Updated 2 years ago
- Dataset and evaluation script for "Evaluating Hallucinations in Chinese Large Language Models" ☆132 · Updated last year
- ☆75 · Updated last year
- [ACL 2024] Unveiling Linguistic Regions in Large Language Models ☆31 · Updated last year
- The repository for the survey paper "Survey on Large Language Models Factuality: Knowledge, Retrieval and Domain-Specificity" ☆341 · Updated last year
- ☆29 · Updated last year
- Collection of papers for scalable automated alignment. ☆93 · Updated 10 months ago
- Repository for the Bias Benchmark for QA dataset. ☆127 · Updated last year
- Flames is a highly adversarial Chinese benchmark for evaluating LLMs' harmlessness, developed by Shanghai AI Lab and the Fudan NLP Group. ☆59 · Updated last year
- Recent papers on (1) Psychology of LLMs; (2) Biases in LLMs. ☆49 · Updated last year
- ☆27 · Updated 2 years ago
- This repository contains the data and code introduced in the paper "CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Maske… ☆123 · Updated last year
- OMGEval😮: An Open Multilingual Generative Evaluation Benchmark for Foundation Models ☆35 · Updated last year
- Official implementation of "Probing Language Models for Pre-training Data Detection" ☆19 · Updated 8 months ago
- ☆56 · Updated last year
- Resource, Evaluation and Detection Papers for ChatGPT ☆459 · Updated last year