CUHK-ARISE / LLMPersonality
Code and results of the paper "On the Reliability of Psychological Scales on Large Language Models"
☆30 · Updated last year
Alternatives and similar repositories for LLMPersonality
Users interested in LLMPersonality are comparing it to the repositories listed below.
- Recent papers on (1) Psychology of LLMs; (2) Biases in LLMs. ☆50 · Updated last year
- Code & data for the paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆69 · Updated last year
- Dialogue Planning via Brownian Bridge Stochastic Process for Goal-directed Proactive Dialogue (ACL Findings 2023) ☆22 · Updated last year
- Target-oriented Proactive Dialogue Systems with Personalization: Problem Formulation and Dataset Curation (EMNLP 2023) ☆30 · Updated last week
- Code for "Mitigating Unhelpfulness in Emotional Support Conversations with Multifaceted AI Feedback" (ACL 2024 Findings) ☆16 · Updated last year
- ☆47 · Updated last year
- PyTorch implementation of CARE ☆16 · Updated 2 years ago
- ☆75 · Updated last year
- ☆12 · Updated last year
- ☆42 · Updated last year
- Personality Alignment of Language Models ☆47 · Updated 3 months ago
- Benchmarking LLMs' Emotional Alignment with Humans ☆115 · Updated 8 months ago
- Self-adaptive in-context learning ☆45 · Updated 2 years ago
- BeHonest: Benchmarking Honesty in Large Language Models ☆34 · Updated last year
- ☆27 · Updated 2 years ago
- Code and data for "Medical Dialogue Generation via Dual Flow Modeling" (ACL 2023 Findings) ☆12 · Updated last year
- Resources for the ACL 2023 paper "Distilling Script Knowledge from Large Language Models for Constrained Language Planning" ☆36 · Updated 2 years ago
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆117 · Updated last year
- ☆25 · Updated 2 years ago
- ☆10 · Updated 8 months ago
- ☆64 · Updated 2 years ago
- ☆38 · Updated last year
- ☆40 · Updated last year
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆68 · Updated 2 years ago
- Code and data for the FACTOR paper ☆52 · Updated last year
- Source code for "Truth-Aware Context Selection: Mitigating the Hallucinations of Large Language Models Being Misled by Untruthful Contexts" ☆17 · Updated last year
- ☆88 · Updated 2 years ago
- [EMNLP 2023] ALCUNA: Large Language Models Meet New Knowledge ☆28 · Updated last year
- ☆29 · Updated last year
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆59 · Updated last year