cooperleong00 / ToxificationReversal
Code for the paper "Self-Detoxifying Language Models via Toxification Reversal" (EMNLP 2023)
☆15 · Updated last year
Alternatives and similar repositories for ToxificationReversal:
Users interested in ToxificationReversal are comparing it to the repositories listed below.
- Codes for Mitigating Unhelpfulness in Emotional Support Conversations with Multifaceted AI Feedback (ACL 2024 Findings) ☆14 · Updated 7 months ago
- Code and data for "Instruct Once, Chat Consistently in Multiple Rounds: An Efficient Tuning Framework for Dialogue" (ACL 2024) ☆22 · Updated 6 months ago
- Code and data for "Target-constrained Bidirectional Planning for Generation of Target-oriented Proactive Dialogue" (ACM TOIS) ☆10 · Updated 4 months ago
- Code and data for "Dialogue Planning via Brownian Bridge Stochastic Process for Goal-directed Proactive Dialogue" (ACL Findings 2023) ☆22 · Updated last year
- Code and data for "Target-oriented Proactive Dialogue Systems with Personalization: Problem Formulation and Dataset Curation" (EMNLP 2023…☆30Updated 9 months ago
- Official Implementation for the paper "Integrative Decoding: Improving Factuality via Implicit Self-consistency" ☆20 · Updated 4 months ago
- ☆30 · Updated 9 months ago
- ☆30 · Updated last year
- Code & Data for our Paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆63 · Updated 11 months ago
- BeHonest: Benchmarking Honesty in Large Language Models ☆31 · Updated 6 months ago
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems ☆55 · Updated 7 months ago
- A Survey on the Honesty of Large Language Models ☆53 · Updated 2 months ago
- ☆72 · Updated 9 months ago
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆50 · Updated 10 months ago
- ☆25 · Updated last year
- PyTorch implementation of experiments in the paper Aligning Language Models with Human Preferences via a Bayesian Approach ☆31 · Updated last year
- Source code for Truth-Aware Context Selection: Mitigating the Hallucinations of Large Language Models Being Misled by Untruthful Contexts ☆17 · Updated 5 months ago
- ☆14 · Updated 3 months ago
- ☆13 · Updated 7 months ago
- [ACL 2024 Findings] CriticBench: Benchmarking LLMs for Critique-Correct Reasoning ☆22 · Updated 11 months ago
- The repository of the project "Fine-tuning Large Language Models with Sequential Instructions", code base comes from open-instruct and LA… ☆29 · Updated 2 months ago
- GPT as Human ☆18 · Updated 2 months ago
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing" ☆68 · Updated last month
- Official code for ICML 2024 paper on Persona In-Context Learning (PICLe) ☆23 · Updated 7 months ago
- Code and results for the paper "Revisiting the Reliability of Psychological Scales on Large Language Models" ☆30 · Updated 4 months ago
- Analyzing LLM Alignment via Token distribution shift ☆15 · Updated last year
- [ICLR'24 Spotlight] "Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts" ☆65 · Updated 10 months ago
- Recent papers on (1) Psychology of LLMs; (2) Biases in LLMs. ☆46 · Updated last year
- Github repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆57 · Updated last year