YFHuangxxxx / CBBQ
☆26 · Updated last year
Alternatives and similar repositories for CBBQ
Users interested in CBBQ are comparing it to the repositories listed below.
- ☆27 · Updated 2 years ago
- This repo is for the paper: On the Safety of Conversational Models: Taxonomy, Dataset, and Benchmark ☆25 · Updated 2 years ago
- ☆74 · Updated last year
- Code & Data for our Paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆65 · Updated last year
- ☆54 · Updated 10 months ago
- ☆75 · Updated 6 months ago
- NLPCC-2025 Shared-Task 1: LLM-Generated Text Detection ☆14 · Updated last month
- ☆15 · Updated 2 weeks ago
- EMNLP'2023: Explore-Instruct: Enhancing Domain-Specific Instruction Coverage through Active Exploration ☆36 · Updated last year
- Implementation of "ACL'24: When Do LLMs Need Retrieval Augmentation? Mitigating LLMs' Overconfidence Helps Retrieval Augmentation" ☆25 · Updated 11 months ago
- [EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" ☆125 · Updated 9 months ago
- ☆13 · Updated last year
- ☆26 · Updated 9 months ago
- This repository contains the data and code introduced in the paper "CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Maske…" ☆120 · Updated last year
- Information on NLP PhD applications around the world. ☆37 · Updated 9 months ago
- Repository for the paper "Cognitive Mirage: A Review of Hallucinations in Large Language Models" ☆47 · Updated last year
- EMNLP'2024: Knowledge Verification to Nip Hallucination in the Bud ☆22 · Updated last year
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆59 · Updated last year
- Code and data for the paper "Can Large Language Models Understand Real-World Complex Instructions?" (AAAI 2024) ☆48 · Updated last year
- ☆18 · Updated last year
- ☆81 · Updated last year
- [ACL 2024] Unveiling Linguistic Regions in Large Language Models ☆31 · Updated last year
- Self-adaptive in-context learning ☆45 · Updated 2 years ago
- Repository for the Bias Benchmark for QA dataset. ☆118 · Updated last year
- BeaverTails is a collection of datasets designed to facilitate research on safety alignment in large language models (LLMs). ☆147 · Updated last year
- Implementation of "The Power of Scale for Parameter-Efficient Prompt Tuning" ☆56 · Updated 2 years ago
- The source code of the paper "CHEF: A Pilot Chinese Dataset for Evidence-Based Fact-Checking" ☆74 · Updated 2 years ago
- Controlled Text Generation using Prefix-Tuning on GPT ☆18 · Updated 2 years ago
- Public code repo for the COLING 2025 paper "Aligning LLMs with Individual Preferences via Interaction" ☆29 · Updated 2 months ago
- Official repository for the ICML 2024 paper "On Prompt-Driven Safeguarding for Large Language Models" ☆92 · Updated last month